Group Verbal Decision Analysis: Theory and Applications (ISBN 3031169409, 9783031169403)

This book describes an original approach to solving tasks of individual and collective choice: classification, ranking,


English · Pages 252 [253] · Year 2023


Table of contents :
Preface
Contents
About the Author
1 Basic Concepts of Decision Theory
1.1 Decision and Choice
1.2 Decision Making Process
1.3 Decision Making Task
1.4 Decision Maker Preferences
1.5 Evaluation of Options
1.6 Comparison of Options
1.7 Choice of Options
References
2 Individual and Collective Decisions
2.1 Rationality and Optimality
2.2 Individual Optimal Choice
2.3 Individual Rational Choice
2.4 Verbal Decision Analysis
2.5 Aggregation of Individual Preferences
2.6 Collective Choice
2.7 Group Multicriteria Choice
References
3 Group Ordering of Multi-attribute Objects
3.1 Representation and Comparison of Multi-attribute Objects
3.2 Demonstrative Example: Multi-attribute Objects
3.3 Group Ordering of Objects by Multicriteria Pairwise Comparisons: Method RAMPA
3.4 Demonstrative Example: Method RAMPA
3.5 Group Ordering of Objects by Proximity to Reference Points: Method ARAMIS
3.6 Demonstrative Example: Method ARAMIS
References
4 Group Classification of Multi-attribute Objects
4.1 Group, without Teachers, Classification of Objects by Feature Proximity: Methods CLAVA-HI and CLAVA-NI
4.2 Demonstrative Example: Method CLAVA-HI
4.3 Group, with Teachers, Classification of Objects by Aggregated Decision Rules: Method MASKA
4.4 Demonstrative Example: Method MASKA
References
5 Reducing Dimensionality of Attribute Space
5.1 Hierarchical Structuring Criteria and Attributes: Method HISCRA
5.2 Demonstrative Example: Method HISCRA
5.3 Hierarchical Structuring Criteria and Attributes: Modified Method HISCRA-M
5.4 Demonstrative Example: Method HISCRA-M
5.5 Shortening Criteria and Attributes: Method SOCRATES
5.6 Demonstrative Example: Method SOCRATES
References
6 Multicriteria Choice in Attribute Space of High Dimensionality
6.1 Progressive Aggregation of Classified States: Technology PAKS
6.2 Demonstrative Example: Technology PAKS
6.3 Progressive Aggregation of Classified Situations with Many Methods: Technology PAKS-M
6.4 Demonstrative Example: Technology PAKS-M
References
7 Practical Applications of Choice Methods
7.1 Analysis of Science Policy Options
7.2 Evaluation of Topicality and Priority of Scientific Directions and Problems
7.3 Formation of Scientific and Technological Program
7.4 Project Competition in Scientific Foundation
References
8 Practical Applications of Choice Technologies
8.1 Assessment of Research Results
8.2 Selection of Prospective Computing Complex
8.3 Evaluation of Organization Activity Effectiveness
References
9 Mathematical Tools
9.1 Concept of Multiset
9.2 Operations on Multisets
9.3 Families of Sets and Multisets
9.4 Graphical Representations of Multisets
9.5 Set Measure and Multiset Measure
9.6 Metric Spaces of Multisets
References
Conclusion

Studies in Systems, Decision and Control 451

Alexey B. Petrovsky

Group Verbal Decision Analysis Theory and Applications

Studies in Systems, Decision and Control Volume 451

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control–quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure which enable both a wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

Alexey B. Petrovsky

Group Verbal Decision Analysis Theory and Applications

Alexey B. Petrovsky
Federal Research Center "Computer Science and Control", Russian Academy of Sciences, Moscow, Russia
V. G. Shukhov Belgorod State Technological University, Belgorod, Russia
Volgograd State Technical University, Volgograd, Russia

ISSN 2198-4182 ISSN 2198-4190 (electronic) Studies in Systems, Decision and Control ISBN 978-3-031-16940-3 ISBN 978-3-031-16941-0 (eBook) https://doi.org/10.1007/978-3-031-16941-0 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

Decision making is a special kind of human activity, consisting in a reasoned choice of the best, in a certain sense, option or several options from the available possibilities. In everyday life and professional activities, we often say "to make a well-grounded choice" or "to take the right decision." Every person and every group of people is faced with the need to make both important and unimportant decisions. But not everybody thinks about how to make the best choice, how to get the most benefit from the implemented decision, and how to minimize possible negative consequences. Often, both a single person and a team act inconsistently, make recoverable and irreparable mistakes, and choose without analyzing all possible options, which they may regret later. The quality of management in different areas dictates the need for special analytical training in preparing and making decisions. A modern person, and a decision maker in particular, should make a decision not intuitively, but using appropriate tools to find the best option and justify the choice made. At the same time, ill-considered and poorly substantiated decisions are still not so rare, especially when they are related to new, unparalleled choice situations.

The theory of choice and decision making began to form as an independent scientific discipline in the middle of the twentieth century within the methodology of systems analysis. However, the very first studies on voting as a tool for collective choice appeared at the end of the eighteenth century. Decision theory aims to create methods and tools that help one person or a group of people to formulate possible options for a problem solution, compare them with each other, find the best or acceptable options that satisfy certain requirements (conditions, restrictions, preferences), and, if necessary, explain the choice made. Decision theory can be useful in the analysis and solution of complex problems, but only when its methodological and mathematical tools are applied "correctly," according to their capabilities, without overstating or downplaying their role in the search for a solution.

The monograph presents the main methodological directions in the modern theory of choice and decisions. We describe the most famous methods of optimal and rational individual choice, as well as methods of collective choice. Many different decision making techniques use the so-called quantitative approach based on a numerical measurement of indicators. However, despite its seeming simplicity and obviousness, the quantitative approach is not at all suitable for working with qualitative data. We introduce original methods of group decision analysis, methods for reducing the dimensionality of a qualitative attribute space, and new multistage technologies for solving ill-structured multicriteria choice tasks of high dimensionality. These tools allow us to take into account the judgments, including contradictory ones, of all members of the decision making group without a compromise between individual opinions. We consider examples of solving practical problems of multicriteria choice using the new tools. A brief description of multiset theory is given, which is necessary for a better understanding of the content.

The book focuses on methods and technologies of group verbal decision analysis (GVDA), an original direction in the theory of multicriteria choice, which began in the USSR and is being developed in Russia and other countries. The principal feature of verbal decision analysis (VDA), which distinguishes it from other known directions in decision making, is the use of natural language, close to professional activity, to describe a problem situation and the objects under consideration, and to formalize the knowledge of experts and the preferences of decision makers. The properties of options and classes of decisions are specified with qualitative indicators and criteria that have verbal formulations of grades on the rating scales. That is why the direction received its name. At all stages of the analysis and solution of a choice problem, numerical indicators of objects, importance of criteria or values of options are not calculated or applied, and verbal data are not converted into numerical ones. Typically, a small number (three to five) of attribute grades are introduced to ensure a clear distinguishability of estimates. Even with a small number of scale grades, it is possible to describe rather complicated features of objects. Thus, using only qualitative measurements, the relations of superiority and equivalence of the compared alternatives are given on a set of multicriteria assessments. This allows us to classify multi-attribute objects, order them, and select the best or acceptable ones. The obtained final results and decision rules are explained in the language of verbal attributes and estimates upon criteria familiar to people.

Verbal decision analysis is characterized by the active participation of a decision maker and/or expert in the analysis and solution of a problem, making it possible to express judgments in a versatile and sufficiently detailed manner, to clarify and correct them in the process of solving a problem, and to generate and justify new options. Group verbal decision analysis extends the choice methodology to collective decisions. Within a group choice, the preferences of several decision makers and the knowledge of several experts may not coincide, and decision options may exist in several copies with different values of quantitative and/or qualitative attributes. GVDA methods work with such contradictory information and provide an acceptable choice. The mathematical apparatus of GVDA is the theory of multisets. The use of multisets to represent multi-attribute objects allows us to consider simultaneously various combinations of numerical and symbolic variables, taking into account their inconsistency and polysemy.
With the help of the methods and technologies of GVDA, one can solve all types of tasks of individual and collective multicriteria selection, ordering and classification of multi-attribute objects or decision options. There are opportunities to take into account jointly all, including conflicting, opinions of participants without averaging and reconciling them; to process heterogeneous (numerical and verbal) data; to form integral numerical and non-numerical indicators that aggregate the initial characteristics of objects; and to give explanations of intermediate and final results in natural language. In general, verbal methods are more "transparent," less laborious for a person, and weakly sensitive to measurement errors.

The book has the following structure. Chapter 1 introduces the basic concepts of decision theory, formulates the task of decision making and gives a classification of choice problems. We identify the notion of a decision maker preference and show ways to evaluate, compare and choose decision options.

Chapter 2 describes features of individual and collective decisions. We consider the main groups of methods for optimal and rational individual choice, such as multicriteria optimization, approximation of the Pareto boundary, heuristic methods, methods of utility theory, analytic hierarchy, outranking relations, choice functions, computing with words, and verbal decision analysis. We specify procedures and models for aggregating individual preferences. We also present methods for collective choice, among them voting procedures and methods of group multicriteria decision making.

Chapters 3 and 4 include original techniques of group verbal analysis, which allow ordering objects by multicriteria pairwise comparisons; ordering objects by their proximity to reference points; classifying objects without teachers by feature proximity; and classifying objects with teachers by aggregated decision rules. We propose ways of representing and comparing multi-attribute objects. We give examples of how to solve model tasks of collective choice of multi-attribute objects using the developed methods.

Chapter 5 suggests a new approach to reducing the dimensionality of an attribute space, which is considered as a solution to the task of verbal multicriteria classification. We specify new methods for reducing the dimensionality of a space where multi-attribute objects are represented as vectors/tuples or multisets of their numerical and/or verbal characteristics. Demonstrative examples of the methods' applications are also given.

Chapter 6 describes multistage and multimethod technologies for multicriteria choice in a high-dimensional attribute space. These technologies allow us to solve all types of problems of individual and collective choice of objects given by a large number of quantitative and/or qualitative indicators. We present demonstrative examples of solving model tasks of group choice.

Chapters 7 and 8 contain examples of practical applications of the developed methods and technologies of group verbal decision analysis. These include the analysis of science policy options; the evaluation of the topicality and priority of scientific directions and problems; the formation of a scientific and technological program; a project competition in a scientific foundation; the assessment of research results; the selection of a prospective computing complex; and the evaluation of the effectiveness of an organization's activity.

Chapter 9 provides a brief description of multiset theory, which serves as the mathematical tool for group verbal decision analysis. We discuss the concept of a multiset; define operations on multisets, families of sets and multisets, set measure and multiset measure, and metric spaces of multisets.

The book adopts a unified numbering system. Each number consists of two parts, the first of which corresponds to the chapter, while the second is the ordinal number of the section, formula, table or figure.

This book is intended for researchers, managers, decision making consultants, analysts and developers. The book will also be interesting and useful for teachers, postgraduate and undergraduate students of applied mathematics, computer science, cybernetics, economics, engineering, information processing and management. When writing this book, the author consciously paid more attention to the substantive aspects of the presentation, avoiding details of the mathematical tools, strict formulations and theorem proofs that can be found in other scientific publications. The given examples are close to practical decision problems and demonstrate and explain the theoretical considerations. Descriptions of the experience of applying the new methods and technologies to solve real, ill-structured choice tasks take a significant place in this book.

The author is grateful to the Russian Foundation for Basic Research for its multi-year support of his studies on group verbal decision analysis. And finally, the author sincerely thanks his family, friends, colleagues and many other persons who facilitated the preparation and publication of this monograph and other books.

Moscow, Russia
December 2021

Alexey B. Petrovsky

Contents

1 Basic Concepts of Decision Theory . . . 1
1.1 Decision and Choice . . . 1
1.2 Decision Making Process . . . 3
1.3 Decision Making Task . . . 6
1.4 Decision Maker Preferences . . . 9
1.5 Evaluation of Options . . . 12
1.6 Comparison of Options . . . 15
1.7 Choice of Options . . . 17
References . . . 21

2 Individual and Collective Decisions . . . 23
2.1 Rationality and Optimality . . . 23
2.2 Individual Optimal Choice . . . 26
2.3 Individual Rational Choice . . . 29
2.4 Verbal Decision Analysis . . . 33
2.5 Aggregation of Individual Preferences . . . 37
2.6 Collective Choice . . . 40
2.7 Group Multicriteria Choice . . . 44
References . . . 47

3 Group Ordering of Multi-attribute Objects . . . 51
3.1 Representation and Comparison of Multi-attribute Objects . . . 51
3.2 Demonstrative Example: Multi-attribute Objects . . . 56
3.3 Group Ordering of Objects by Multicriteria Pairwise Comparisons: Method RAMPA . . . 61
3.4 Demonstrative Example: Method RAMPA . . . 63
3.5 Group Ordering of Objects by Proximity to Reference Points: Method ARAMIS . . . 70
3.6 Demonstrative Example: Method ARAMIS . . . 73
References . . . 76

4 Group Classification of Multi-attribute Objects . . . 79
4.1 Group, without Teachers, Classification of Objects by Feature Proximity: Methods CLAVA-HI and CLAVA-NI . . . 79
4.2 Demonstrative Example: Method CLAVA-HI . . . 85
4.3 Group, with Teachers, Classification of Objects by Aggregated Decision Rules: Method MASKA . . . 92
4.4 Demonstrative Example: Method MASKA . . . 101
References . . . 113

5 Reducing Dimensionality of Attribute Space . . . 115
5.1 Hierarchical Structuring Criteria and Attributes: Method HISCRA . . . 115
5.2 Demonstrative Example: Method HISCRA . . . 120
5.3 Hierarchical Structuring Criteria and Attributes: Modified Method HISCRA-M . . . 123
5.4 Demonstrative Example: Method HISCRA-M . . . 125
5.5 Shortening Criteria and Attributes: Method SOCRATES . . . 130
5.6 Demonstrative Example: Method SOCRATES . . . 133
References . . . 139

6 Multicriteria Choice in Attribute Space of High Dimensionality . . . 141
6.1 Progressive Aggregation of Classified States: Technology PAKS . . . 141
6.2 Demonstrative Example: Technology PAKS . . . 145
6.3 Progressive Aggregation of Classified Situations with Many Methods: Technology PAKS-M . . . 148
6.4 Demonstrative Example: Technology PAKS-M . . . 152
References . . . 164

7 Practical Applications of Choice Methods . . . 167
7.1 Analysis of Science Policy Options . . . 167
7.2 Evaluation of Topicality and Priority of Scientific Directions and Problems . . . 172
7.3 Formation of Scientific and Technological Program . . . 177
7.4 Project Competition in Scientific Foundation . . . 184
References . . . 190

8 Practical Applications of Choice Technologies . . . 193
8.1 Assessment of Research Results . . . 193
8.2 Selection of Prospective Computing Complex . . . 199
8.3 Evaluation of Organization Activity Effectiveness . . . 207
References . . . 210

9 Mathematical Tools . . . 213
9.1 Concept of Multiset . . . 213
9.2 Operations on Multisets . . . 218
9.3 Families of Sets and Multisets . . . 226
9.4 Graphical Representations of Multisets . . . 229
9.5 Set Measure and Multiset Measure . . . 233
9.6 Metric Spaces of Multisets . . . 236
References . . . 240

Conclusion . . . 243

About the Author

Alexey B. Petrovsky is Professor and Doctor of Technical Sciences in system analysis and automated control. He is Head of the Department for Decision Problems and Chief Researcher at the Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, Moscow, Russia. He has been Invited Professor at the Moscow Physics and Technical Institute—National Research University (2004–2015), N. E. Bauman Moscow State Technical University (2006–2015), M. V. Lomonosov Moscow State University (2007–2010), Belgorod State National Research University (2010–2019), V. G. Shukhov Belgorod State Technological University (2011–2022), and Volgograd State Technical University (2017–2022), Russia. He graduated from M. V. Lomonosov Moscow State University (1967) and obtained his Ph.D. degree in theoretical and mathematical physics from the V. A. Steklov Mathematical Institute, USSR Academy of Sciences (1970). He is Editor-in-Chief and Member of the Editorial Council of "Artificial Intelligence and Decision Making," a journal of the Russian Academy of Sciences; Member of the Editorial Boards of the International Journal of Decision Support Systems, the International Journal "Information Models and Analysis," the International Journal "Information Technologies and Knowledge," the "Proceedings of the Institute for Systems Analysis of the Russian Academy of Sciences," and "Strategic Decisions and Risk-Management"; and a member of the Editorial Council of "Automation of Control Processes." He is a member of the International Society on Multiple Criteria Decision Making, the European Working Group "Multiple Criteria Decision Aiding," the Commission for working with young researchers of the Russian Academy of Sciences, and the Russian Association for Artificial Intelligence, and a Full Member of the Russian Academy of Natural Sciences. He is the author of over 200 papers, including 7 monographs and 2 textbooks. His research areas are discrete mathematics, multiset theory, multicriteria decision making, verbal decision analysis, decision support systems, information technologies, systems analysis, science and technology policy, and R&D forecasting, planning and management.


Chapter 1

Basic Concepts of Decision Theory

This chapter introduces the basic concepts of decision theory. Methods and tools of choice allow one to generate a set of possible options for the problem solution, to find among them the best or acceptable option(s), and to explain the choice made. We discuss the features of decision and choice and present the stages of solving a choice problem. We formulate the task of decision making and give a classification of choice tasks. We identify the notion of a decision maker preference and describe ways to evaluate, compare and choose decision options.

1.1 Decision and Choice

Choice itself is one of the most common actions in human life. Decision making is a special kind of human activity, which consists in a reasoned choice of one option or several options, the best in some sense, from the available ones. In everyday life, we constantly have to make certain decisions, choosing goods bought in stores, dishes ordered in a cafe or restaurant, routes and transport modes for travel, and the like. Due to the repeatability and typicality of such choice situations, a person makes a decision almost without thinking, often intuitively, by habit or by analogy. The best among the options considered is usually found without any special analysis.

In more complex and, accordingly, rarer, unique situations, for example, when choosing a place of rest, study or work, buying an apartment or an expensive thing, or voting for a candidate or party, a person approaches his or her choice more carefully. Before making a decision, he/she tries to examine in detail, evaluate and compare the various options, and to take into account their particular characteristics.

More complex tasks of choice arise in the professional activities of a politician, economist, financier, commander, scientist, designer or doctor. This list of professions is easy to continue. When solving political, economic, managerial, production or military tasks, it is required to take into account the different and often non-coinciding interests of the parties involved, and it is necessary to search for and analyze various
data. To compare the options, it is necessary to conduct a comprehensive, sometimes quite multi-aspect analysis of the problem situation, to build special models for this, to involve specialists, experts, consultants and analysts in the construction of solution options, and to develop and use computer decision support systems. Similar problems arise for persons managing complex technical systems (a power plant, energy facility, technological process, transportation system, airplane, ship, and the like). But here the situation is complicated by the fact that decisions need to be taken promptly, in real time, with practically no possibility to analyze in detail all alternatives and the emerging consequences of their implementation. Without exaggeration, we can say that the need to make a reasoned choice is present in all areas of activity.

When making difficult decisions, there is always a lack of information. Some of the necessary information is often missing, and the available information may be insufficient and/or contradictory. An experienced manager or specialist compensates for the incompleteness of information with his or her knowledge, skills, experience and intuition. Making the right decisions in difficult situations is a kind of art that only a few persons possess [1]. However, in modern conditions, art alone is not enough for decision making. More than ever, the dynamism of life has grown, and the period of time during which previous decisions remain correct has shortened. The complexity of the decisions considered, their interdependence and interconnection have increased. The possible risks and uncertainty of consequences, and the scale and size of the losses that may arise when making insufficiently grounded decisions, have substantially increased. As a result, the personal responsibility for making the "most correct" decision has grown. So have the difficulties of finding such a solution, which cannot be overcome without using the methods and tools of decision theory.

The study of how a person makes decisions, and the creation of choice methods, are the subject of many scientific disciplines that have arisen and historically developed independently of each other. These include decision theory, operations research, game theory, optimal control theory, informatics, artificial intelligence, economic cybernetics, organization theory, cognitive psychology, behavior theory and others. These disciplines analyze from different positions the mechanisms, processes and rules of choice in relation to objects of various natures and in different conditions of their existence. Together, they form a multidisciplinary scientific area that helps people make reasoned choices.

Formal decision making methods may be useful in the following environment.

• There is some problem or problem situation that needs to be solved. Often, the desired result of solving a problem is identified with one or more goals that must be achieved when resolving the problem.
• There are several different options to solve the problem, ways to achieve the goal, or objects to be considered, among which the choice is made. Such options in decision theory are often called alternatives. If there is only one possible option and there is no choice, then there is no decision making task.
• There are factors that impose certain restrictions on the possible ways to solve the problem and achieve the goal. These factors depend on the context of the problem
being solved and may have a different nature: physical, technical, economic, social, or other.
• There is a person or a group of persons who are interested in solving the problem, have the authority to choose a solution option, and are responsible for implementing the decision.

In practical choice, opposing opinions about the role of formalized methods coexist. Some people who do not have professional knowledge of mathematical methods often believe that any problem can be formally translated into the language of mathematics and then solved by its tools. Others completely reject this possibility. The reality is much more complicated than these extreme statements, and the truth, as always, lies somewhere closer to the middle. At the same time, when discussing the practical applicability of decision making methods, it should be especially emphasized that there must be both objective external circumstances and subjective internal reasons that would encourage the persons responsible for solving the problem, as well as specialists and analysts, to look for the best decision. Without such a need, the demand for science-based methods of well-founded choice will be small.

1.2 Decision Making Process

Solving the choice problem and obtaining the final result are determined by the actions of many participants whose functions differ.

Decision maker (DM) or actor is a person or group of persons who actually choose the preferred decision. Typically, this is the head or a group of competent specialists with relevant knowledge and experience, endowed with the necessary powers and responsibility for the decision implementation.

Owner of the problem (PO) is a single person or group of persons who have reasons and motives for posing the problem, who are aware of the need to solve it, and who initiate the adoption and implementation of a decision. The problem owner and the decision maker can be the same person, but they can also be different people.

Active group (AG) is a formal or informal association of people who have common interests related to the problem that should be solved, and who strive to influence the decision process in order to achieve the result they need. Usually the problem owner belongs to one of the main active groups. The interests of different active groups can both coincide and differ from each other and from the interests of the decision maker.

Expert (E) (from the Latin expertus, experienced) is a competent specialist who is professionally versed in the problem being solved and has the necessary information about the problem and its individual aspects, but is not responsible for the decision made and implemented.

Consultant (C) is a competent specialist who assists the decision maker and the problem owner in organizing the process of solving the problem and in formally formulating the choice task, provides the collection of the necessary information, and develops the model of the problem and the procedures and methods of decision making.

Decision making processes in different fields of activity have much in common. The life cycle of solving a problem consists of several stages and represents a multistep iterative procedure (see Fig. 1.1) [2]. The need for decision making arises when a problem situation occurs (step 0). In this case, we identify the problem (steps 1–3), that is, we describe the problem content, specify the desired result of solving the problem, and determine the existing restrictions.

At the next stage, we formulate the choice task (steps 4–7). For this, it is necessary to specify the possible options (alternatives). To fully describe the options, we usually have to collect and analyze various data related to the problem and the ways to solve it. The absence of, or inability to obtain, the necessary information can make the problem unsolvable. In such cases, we have to return to the original formulation of the problem and modify it. A similar need may also arise at previous stages of the decision process. In difficult situations of choice, it may be necessary to develop a special (usually mathematical) model of the problem situation in order to obtain a simplified solution to the problem with its help. The second stage ends with the statement of the choice task. Note that a detailed meaningful description of the problem being solved already at the first stage can immediately lead to a statement of the choice task, bypassing all or many of the subsequent stages.

Having formulated the choice task, we proceed to the search for the task solution (steps 8–10). This stage includes the selection of some method of solving the task from the already known ones or the development of a new method, and the solution itself, during which the various options are evaluated and analyzed and the preferable or acceptable options are selected. Such procedures are often quite difficult and labor-intensive, and require the knowledge and skills of many people and the capabilities of computers.

However, even after accomplishing all stages of the problem-solving process, it is not always possible to make the final choice. There are situations when it is hard to find a suitable solution. The desired option may simply not be available. Then we should either modify the formulation of the original problem (step 11), or return to the previous stages and collect the necessary additional information, change the formal statement of the choice task or the model of the problem situation, expand or narrow the number of alternatives considered, or construct new options.

If we have found an acceptable option, the stage of decision execution begins (steps 12, 13). At this stage we implement the decision, control the process of implementation and evaluate the results of resolving the problem situation. Strictly speaking, this stage does not relate to the decision making procedure. But the inclusion of the execution stage in the general scheme is important from methodological and practical points of view, since this stage closes the life cycle of the process of occurrence, resolution and disappearance of the problem situation. In addition, the decision implementation can create a new problem that requires finding its own solution.

Fig. 1.1 Life cycle of solving the problem. The flowchart lists the stages of solving the problem, the numbered steps and the participants of each step (DM, PO, AG, E, C):

Occurrence of the problem situation (step 0): PO, AG

Identification of the problem
  1. Description of the problem content: PO, DM, C
  2. Specification of the desired result: DM, PO
  3. Determination of restrictions: DM, C, E

Formulation of the choice task
  4. Determination of possible options: DM, C, E
  5. Collection and analysis of information: C, E
  6. Formation of the problem situation model: C, E
  7. Statement of the choice task: DM, C

Search for the task solution
  8. Choice/development of a solution method: C
  9. Evaluation and analysis of decision options: DM, C, E
  10. Selection of the preferred option: DM

Modification of the task formulation
  11. Modification of the task formulation: DM, PO, C

Decision implementation
  12. Realization and control of the decision: PO, DM, AG
  13. Evaluation of the problem solution: PO, AG, E
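For readers who want to experiment with this scheme, the short Python sketch below (not part of the book; the names Step, LIFE_CYCLE and steps_for are made up for illustration) encodes the steps and participants of Fig. 1.1 as plain data and answers a simple query such as "in which steps is an expert involved?".

```python
from collections import namedtuple

# Minimal encoding of the life cycle in Fig. 1.1 (roles: DM, PO, AG, E, C).
Step = namedtuple("Step", "number action participants")

LIFE_CYCLE = [
    Step(0,  "Occurrence of the problem situation",        {"PO", "AG"}),
    Step(1,  "Description of the problem content",          {"PO", "DM", "C"}),
    Step(2,  "Specification of the desired result",         {"DM", "PO"}),
    Step(3,  "Determination of restrictions",               {"DM", "C", "E"}),
    Step(4,  "Determination of possible options",           {"DM", "C", "E"}),
    Step(5,  "Collection and analysis of information",      {"C", "E"}),
    Step(6,  "Formation of the problem situation model",    {"C", "E"}),
    Step(7,  "Statement of the choice task",                {"DM", "C"}),
    Step(8,  "Choice/development of a solution method",     {"C"}),
    Step(9,  "Evaluation and analysis of decision options", {"DM", "C", "E"}),
    Step(10, "Selection of the preferred option",           {"DM"}),
    Step(11, "Modification of the task formulation",        {"DM", "PO", "C"}),
    Step(12, "Realization and control of the decision",     {"PO", "DM", "AG"}),
    Step(13, "Evaluation of the problem solution",          {"PO", "AG", "E"}),
]

def steps_for(role):
    """Return the step numbers in which a given role participates."""
    return [s.number for s in LIFE_CYCLE if role in s.participants]

print(steps_for("E"))  # -> [3, 4, 5, 6, 9, 13]
```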


1.3 Decision Making Task

The decision making task, most generally, consists of forming a set of possible options that provide a solution to the problem situation under the existing restrictions, and choosing one or several more preferable options that satisfy the given requirements. Formally, the decision making task D is written as follows [2]:

D = ⟨F, O, K, G, P⟩.

Here F is the task statement, which includes a description of the problem content, a model representation of the problem, a determination of the goal to be achieved, and requirements for the type of final result.

O are the possible options (alternatives) from which the choice is made: really existing options (objects, candidates, actions, ways to achieve the goal, and the like) or hypothetically possible options, whose number can be finite or infinite.

K are the characteristics (attributes, parameters, features) of options, which describe their distinctive peculiarities: objective and, as a rule, measurable indicators that characterize the option properties, and subjective assessments according to defined or specially constructed criteria reflecting the option properties that are essential for the participants in the choice. For example, a person's state of health can be characterized by body temperature, blood pressure, the absence or presence of pain, and the localization of pain.

G are the conditions that restrict the range of feasible options for solving the problem, which are described meaningfully or are given as formal requirements for the options and/or their attributes. These may be restrictions on the values of some attributes, a different degree of an attribute's severity for certain options, or the impossibility or necessity of combining some values of attributes simultaneously for real-life options. So, if a person is healthy, then nothing hurts him, and the temperature and blood pressure are normal.

P are the preferences of one or several decision makers, on the basis of which they evaluate and compare possible solutions to the problem, select the acceptable options, and search for the best or reasonable option. Often, to simplify the formulation of a decision making task, part of the information describing DM preferences is converted into restrictions. (A schematic sketch of this task structure is given at the end of this section.)

Factors characterizing the problem situation and affecting the formal statement of a decision making task and the means to solve it are conditionally divided into two groups. Controllable factors, the choice of which depends on decision makers, describe the goals set, the options (alternatives) for their achievement, the subjective assessments of options, and the degree of the goals' achievement. Uncontrollable factors, which do not depend on decision makers, reflect the objective properties of options and partly set restrictions on the choice of possible options.

Factors can also be divided into certain or deterministic factors δ with known and/or predetermined precise characteristics; random or stochastic factors ξ with known and/or predetermined probabilistic characteristics; vague factors μ with
known fuzzy characteristics and the area of their change; and uncertain or unknown factors ζ with completely or partially unknown characteristics, but sometimes with a known area of their change. Uncertain factors usually result from the uncertainty of nature, that is, factors unknown to people or independent of them; the uncertainty of a person, whose behavior can be inconsistent, contradictory and dependent on other persons, and whose actions can be mistaken, not fully taken into account or unforeseen; and the uncertainty of goals, which may vary and not coincide. In order to solve a task with uncertainty, the uncertainty must be restricted or reduced. For this, a content analysis of the problem situation is carried out, some additional assumptions are made, and simplifications are introduced in the task formulation.

We give a classification of decision making tasks according to various aspects of their consideration. By the regularity of the problem situation to be solved, we distinguish new, unique tasks that have never arisen before, and repeated tasks that differ slightly from each other and are often encountered in practice. By the type of final result, we specify the following choice tasks that are considered as typical:

• reduce the initial set of options (alternatives) and select one or more preferable (usually the best) options;
• order all options, usually from the best to the worst;
• distribute all options into groups (classes) that differ in properties; these groups can be either ordered or not ordered by some quality.

The options of a task solution vary in:

• quantity: a few (units, tens), many (hundreds and thousands), infinitely many;
• presence in the solution process: options specified in advance when formulating the problem; options constructed during the process; options that appear after the end of the process;
• degree of mutual dependence: independent options, manipulations with which do not affect other options; dependent options with different types of dependency between them.

By the number of persons authorized to make a decision, we distinguish:

• individual decision (there is only a single decision maker);
• collective or group decision (there are several decision makers who have coinciding or conflicting interests, pursue their own goals and act independently of each other);
• organizational decision (there are several decision makers who strive to achieve a common goal, may have different interests, but depend on each other and are forced to coordinate their actions). Such a group of decision makers is called a "decision making body".

By the role of a decision maker in the decision process, we distinguish tasks in which:
• the choice is made without DM participation on the basis of axiomatically or heuristically defined procedures;
• the DM takes part only at the final stage of the choice process;
• the DM directly participates at the main stages of the choice process.

Depending on the way in which DM preferences are presented, there are tasks of single-criterion (holistic) choice and of multicriteria choice (with independent or dependent criteria). Decision making tasks can also be classified according to the features of the information used, which differs in:

• type: quantitative (numerical), qualitative (symbolic, verbal), or mixed;
• nature: objective, obtained by measurements and/or calculations, or subjective, received from a person (DM, expert);
• dependence on time: static or dynamic;
• level of certainty: deterministic, probabilistic, fuzzy, or uncertain.

The degree of problem structurization is a concept introduced by Simon and Newell [3]; it affects the choice of the method for solving the problem and is determined by a different combination of the quantitative and qualitative, objective and subjective information that describes the problem.

Well-structured or well-formalized problems are, as a rule, repeatable in nature and are usually studied in operations research. They are described by quantitative characteristics. The most significant dependencies can be formalized by objective models and presented in symbolic form, where the symbols have numerical values. The optimal (from the Latin optimum, the best) option is given by the extrema of quantitative criteria or effectiveness indicators. Generally speaking, a decision maker is almost not involved in building the model and finding the optimal solution.

Ill-structured or poorly formalized problems are, as a rule, of a unique nature and are usually considered in decision theory. They combine quantitative and qualitative characteristics and dependencies, and the insufficiently known and uncertain aspects of the problem (the so-called non-factors) prevail. The necessary objective information is absent or difficult to obtain, because of which it is impossible to build a fully formalized model of the problem situation. To build a high-quality verbal model, information is needed from decision makers, experts, and analysts. Many criteria are used to evaluate possible solutions to the problem. There is no "objective" way to select the best option by extremization of some criterion or criteria of optimality. The choice of the best option is based on the subjective preferences of a decision maker with his/her active participation.

Unstructured or non-formalized problems have only qualitative, verbal descriptions based on the subjective judgments of a person. Quantitative relationships between the most important characteristics of the problem are absent or unknown. It is impossible to build any formalized model of the problem situation.
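As promised above, here is a minimal, hedged sketch of how the formal description D = ⟨F, O, K, G, P⟩ could be held in code; the class name DecisionTask, the travel example and all attribute values are invented for illustration and are not taken from the book.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class DecisionTask:
    F: str                                             # task statement: problem content, goal, result type
    O: List[str]                                       # possible options (alternatives)
    K: Dict[str, Callable[[str], object]]              # characteristics/criteria: name -> evaluation of an option
    G: List[Callable[[str], bool]]                     # restrictions: predicates a feasible option must satisfy
    P: Optional[Callable[[str, str], int]] = None      # preferences, e.g. a pairwise comparison given by the DM

    def feasible_options(self):
        """Options that satisfy every restriction in G."""
        return [o for o in self.O if all(g(o) for g in self.G)]

# Tiny illustration: choosing a means of travel under one budget restriction.
task = DecisionTask(
    F="Choose a way to travel to a conference",
    O=["train", "plane", "car"],
    K={"cost": lambda o: {"train": 120, "plane": 300, "car": 180}[o],
       "comfort": lambda o: {"train": "high", "plane": "middle", "car": "low"}[o]},
    G=[lambda o: {"train": 120, "plane": 300, "car": 180}[o] <= 200],  # budget restriction
)
print(task.feasible_options())  # -> ['train', 'car']
```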


1.4 Decision Maker Preferences

In decision theory, it is assumed that a decision maker evaluates and compares the options under consideration (alternatives, objects, actions) and makes a targeted choice of the best or acceptable option (options) based on his/her subjective preferences. There is no strict definition of the concept of preference. A preference will be called a personal judgment of a human, expressed in some way, about the presence or absence of an advantage of one of the options, on the whole or according to some individual characteristics, with respect to another option or to all other options [2, 4].

Subjectivity of preference does not mean, however, that a decision maker can act as he/she pleases. In real choice situations, a person usually behaves quite reasonably, and his/her actions obey a certain internal principle of rationality (from the Latin ratio, reason). In these cases, of course, different people can have their own value systems and various ideas about preferability, in accordance with which they make their subjectively best choice. Therefore, it cannot be argued that the most preferred solution is the only best one, since different DMs may consider different options to be the best.

A decision maker can express his/her preferences directly, explicitly, or indirectly, implicitly. Explicitly expressed preferences can be described and fixed in some language, justified and/or explained. In some cases, DM preferences are given by special selection rules that have a logical-mathematical or verbal formulation. The explicit indication of DM preferences greatly facilitates the process of solving the choice problem. If DM preferences are not clearly expressed, it can be difficult to explain the results of comparing and choosing the solution options. Serious difficulties in identifying and considering preferences may arise when there are several DMs, each of whom has his/her own value system, personal interests and different sources of information. At the same time, even with clearly and precisely expressed preferences, a decision maker can make mistakes and contradictions in his/her assessments and be inconsistent in judgments, especially when it is necessary to consider and compare many alternative options. These features of a person should be taken into account when developing decision making methods, providing for special procedures that allow finding such inaccuracies in DM judgments and correcting them.

A model of preference formalizes the concept of option significance for a decision maker. In the functional model, the significance of the considered option, on the whole or by individual properties, is determined by one or more numerical functions depending on the option characteristics. Such functions are called goal, value or utility functions, effectiveness indicators, and so on. In the production model, the significance of the considered option, on the whole or by individual properties, is determined by one or more decision rules that connect a name and characteristics of the option. In the relational model, the significance of the compared options, on the whole or by individual properties, is determined by one or more binary relations.
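The difference between the functional and relational preference models can be illustrated with the toy Python fragment below (the production model with decision rules is omitted); the option names, values and preference pairs are invented for this example and are not taken from the book.

```python
options = ["O1", "O2", "O3"]

# Functional model: significance is given by a numerical value function.
value = {"O1": 0.7, "O2": 0.4, "O3": 0.7}
best_by_value = [o for o in options if value[o] == max(value.values())]
print(best_by_value)  # -> ['O1', 'O3']

# Relational model: significance is given by a binary relation of strict preference,
# here listed explicitly as ordered pairs (a, b) meaning "a is preferred to b".
strictly_better = {("O1", "O2"), ("O3", "O2")}
non_dominated = [o for o in options
                 if not any((other, o) in strictly_better for other in options)]
print(non_dominated)  # -> ['O1', 'O3']
```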


The neutral or indefinite preference between the options Oi ≈ Oj is characterized by a symmetric relation (similarity, equivalence; dissimilarity, non-similarity; incomparability) and indicates some equivalence or indefinite value of both options for a DM, for example: "The options Oi and Oj are similar", "The options Oi and Oj are not comparable". The non-strict or weak preference of the options Oi ≽ Oj is established by an antisymmetric or reflexive and complete relation (non-strict superiority, non-strict order, pre-order) and reflects both the distinguishability and the sameness of the options for a DM: "The option Oi is not worse than the option Oj", "The option Oi is at least the same as the option Oj". The strict or strong preference of the options Oi > Oj is given by an asymmetric relation (strict superiority, strict order) and is interpreted as a clearly expressed difference between the options: "The option Oi is definitely better than the option Oj".

With neutral preference (Oi ≈ Oj), both options Oi and Oj are selected from the pair of options. With weak preference (Oi ≽ Oj), either the option Oi is chosen, or both options are chosen together. With strict preference (Oi > Oj), only the first option Oi is selected and the second option Oj is not selected.

In order to identify and use DM preferences in the decision making process, it is necessary to have information about them, for which it is necessary to describe and/or measure the preferences. A description is a non-formalized way of expressing preferences, as opposed to a measurement, which characterizes the option feature in numerical or symbolic form. Measurement of a feature of an object or phenomenon is carried out using a scale, which is a set of numbers or symbols. Depending on the specifics of the measured characteristics, quantitative (numerical) and qualitative (symbolic, verbal) scales are distinguished, as well as discrete and continuous ones. The following types of scales are most common.

The nominal scale or scale of names establishes the equivalence relation between objects that have the same feature. It is used to indicate that an object belongs to a certain class.

The ordinal or rank scale establishes the relation of the objects' order according to the degree of severity of some feature. It does not have a certain scope or a fixed reference point. It is used to indicate a difference between objects without specifying by how much or by how many times one object is superior to another.

The interval scale establishes the relation of the objects' order according to the magnitude of the distinction in some feature. It has a certain number scope and an arbitrary reference point. It is used to measure by how much the object Oi exceeds the object Oj via the difference dij = xi − xj of the numerical estimates xi and xj of the objects on the scale. The scale of differences has the unit number scope and an arbitrary reference point.

The scale of relations (ratio scale) establishes the relation of the objects' order according to the magnitude of the distinction in some feature. It has a certain number scope and the zero reference point. It is used to measure how many times the object Oi exceeds
the object Oj via the ratio hij = xi/xj of the numerical estimates xi and xj of the objects on the scale.

An absolute or natural scale establishes the relation of the objects' order. It has the unit number scope and the zero reference point. It is used to measure the number of objects.

In addition to the scales mentioned above, which are invariant under linear transformations, there are other scales that are invariant under non-linear transformations, for example, the power, exponential and logarithmic scales.

When formulating a decision making task, the features of the considered options (alternatives, objects) are often described with criteria. A criterion (from the Greek κριτήριον, a measure, a means of judgment) is some distinguished peculiarity that characterizes an object or phenomenon. To measure the severity of this peculiarity, a certain semantic scale X of the criterion K is introduced, and one of the values xi ∈ X on this criterion scale is assigned to each option Oi, i = 1, ..., m: Oi ⇔ xi. The value xi = K(Oi) is called the estimate of the option Oi upon the criterion K. In other words, the criterion determines a mapping K: O → X of the collection O = {O1, ..., Om} of options to the set X of values of the characteristic K.

Scales of quantitative and qualitative criteria are distinguished by their nature. The criterion scale may also be natural or artificial. In order to be considered criterial, the scale must have a clearly defined meaning: which gradations of grades are "the best", which are "the worst", and which are "equivalent". Usually this meaning is established by a decision maker. Thus, the criterion combines a scale for measuring some feature of the option and a DM preference, which can be written as K = {X, P}.

The collection of criteria used to evaluate the problem situation should satisfy the following requirements:

• completeness: the criteria collection should reflect all essential aspects of the problem under consideration, the quality of the problem solution and the main peculiarities of the options; the set of estimate grades on each criterion scale should comprehensively characterize the corresponding feature of the options;
• decomposability: the criteria collection should simplify the description and analysis of the problem and allow evaluating various characteristics of the options and different aspects of the quality of the problem solution;
• non-redundancy: the number of criteria should be the minimum necessary to solve the problem; the criteria should not duplicate each other in content;
• transparency: the content and meaning of the criteria and the formulations of the estimate grades on the criteria scales should be clearly understood by all participants in the decision making process: the decision maker, the problem owner, the members of active groups, and the experts.

All possible ways to identify DM preferences consist of three main procedures: evaluation, comparison and choice of options. Note that these procedures can be both objective and subjective. A decision maker can establish the preferability of options on the whole and by individual characteristics. A decision maker expresses his/her preferences at different stages of the choice process: setting criteria to evaluate objects or
phenomena, their features, forming rating scales of criteria, evaluating and comparing options, determining rules to choose a preferred option.
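
The combination K = {X, P} of a rating scale and a preference over its grades can be made concrete with a minimal Python sketch; the criterion name and verbal grades below are hypothetical and not taken from the book.

```python
class OrdinalCriterion:
    """A criterion K = {X, P}: a verbal scale X plus a preference order P over its grades."""

    def __init__(self, name, grades_best_to_worst):
        self.name = name
        self.grades = list(grades_best_to_worst)               # the scale X, best grade first
        self.rank = {g: r for r, g in enumerate(self.grades)}  # the preference P: smaller rank = better

    def prefer(self, grade_a, grade_b):
        """Return '>', '<' or '~' for the preference between two grades of the scale."""
        ra, rb = self.rank[grade_a], self.rank[grade_b]
        return ">" if ra < rb else "<" if ra > rb else "~"


# A hypothetical qualitative criterion with five ordered grades.
comfort = OrdinalCriterion("comfort", ["excellent", "good", "satisfactory", "bad", "very bad"])
print(comfort.prefer("good", "bad"))    # '>'  -- 'good' is preferred to 'bad'
print(comfort.prefer("good", "good"))   # '~'  -- identical grades are equivalent
```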

1.5 Evaluation of Options

It is most simple and relatively easy to identify DM preferences by evaluating the decision options. The evaluation of a characteristic is a measurement of its value on the scale of some criterion. A multicriteria description of the problem allows us to evaluate and compare options by individual aspects instead of their holistic consideration and expands the possibilities for interpreting the results obtained. In many situations, this is a more convenient approach for a person to search for an acceptable solution and explain the choice made, especially for ill-structured problems.

Evaluation of options on the whole implies that, under conditions of certainty, each option O_i, i = 1, …, m corresponds to its point mark x_i = K(O_i), which is a number or symbol from the set of values X on the scale of any single criterion K. The semantic content of the criterion depends on the specific context of the problem being solved. Under conditions of probabilistic uncertainty, the option O_i corresponds to a probability distribution over a given numerical interval. Under conditions of fuzzy uncertainty, the option O_i corresponds to a membership function over a given numerical interval. Under conditions of complete uncertainty, the option O_i corresponds to the interval [x_i′, x_i″] of possible estimate values.

Evaluation of options upon many criteria is due to the heterogeneity of option characteristics and the variety of goals achieved when solving the problem. Thus, multicriteriality can play methodically different roles. Firstly, options (alternatives) with many features can be described by indicators (criteria) K_1, K_2, …, K_n. Then an n-dimensional vector or tuple x_i = (x_i1, …, x_in), x_i ∈ X = X_1 × ··· × X_n is associated with each option O_i, i = 1, …, m: O_i ⇔ x_i. The component x_il = K_l(O_i) is the numerical or verbal characteristic of the option O_i, represented by a gradation of the rating scale X_l of the criterion K_l, l = 1, …, n. The set X^a ⊆ X of all vectors/tuples of estimates is called the set of feasible attribute values, set of feasible options, feasibility set.

Secondly, the problem solution can be considered as the achievement of many goals that are given by goal functions, optimality criteria, effectiveness indicators f_1(x), …, f_d(x), which are numerical functions of a scalar x or vector x = (x_1, …, x_n). Then a d-dimensional vector y_i = (y_i1, …, y_id) = f(x_i), y_i ∈ Y = Y_1 × ··· × Y_d = R^d is associated with each option O_i: O_i ⇔ y_i. The components y_ik = f_k(x_i), k = 1, …, d of the vector y_i are estimates upon the particular criteria f_1, …, f_d. The set Y^a = f(X^a) ⊆ Y, which corresponds to the set X^a of feasible options, is called the set of the solution quality estimates, set of achievable goals, achievability set.

Consider, as an example, two possible approaches to evaluating trucks upon many criteria [2]. First, we evaluate trucks by their properties, for which we choose the design characteristics of a truck, such as carrying capacity, maximum speed, power, type and location of engine, body type, fuel consumption and others. In the case of two criteria, a truck can be represented by a point x on the plane (x_1, x_2), where, for example, the axes are the engine power x_1 and the fuel consumption x_2 (Fig. 1.2a). The best estimates are considered to be a greater engine power on the x_1 axis and a lower fuel consumption on the x_2 axis. X^a is the set of feasible options, inside which there is a set of trucks.

Let us now evaluate trucks by their performance. The effectiveness indicators (goal functions) can be operation costs, total mileage without repairs, payback periods, operation time and others. Each effectiveness indicator is a function of the design characteristics of a truck. In the case of two criteria, a truck can also be represented by a point y on the plane (f_1, f_2), where, for example, the axes are the operation costs f_1(x_1, x_2) and the total mileage f_2(x_1, x_2) (Fig. 1.2b). The best estimates are considered to be a lower operation cost on the f_1 axis and a greater mileage on the f_2 axis. Each truck corresponds to a point on the plane (x_1, x_2) and a point on the plane (f_1, f_2). The set Y^a = f(X^a) of achievable goals corresponds to the set X^a of feasible options.

Fig. 1.2 Set of options: a in the space X of attributes, b in the space Y = f(X) of goal functions

Thus, there are possible diverse ways of describing the collection O = {O_1, …, O_m} of choice options using many criteria, which in turn are the following mappings:

K: O → X = X_1 × ··· × X_n,   f: O → Y = Y_1 × ··· × Y_d,   Kf: O → X_1 × ··· × X_n → Y_1 × ··· × Y_d.

Note that the fundamental distinction between the above kinds of multicriteriality is not always taken into account. The variety of option characteristics and methods for achieving goals is often considered simply as many criteria for assessing a problem situation.

When measuring the characteristics that describe the options, and then processing the measurement results, the comparability of heterogeneous properties is of great importance, since the perception of manifold data is associated with certain difficulties. Therefore, it is advisable to convert this information in some ways, presenting data in a more convenient form. One of the widespread means for normalizing numerical assessments is to average them over a set of values using the formulas of arithmetic average, geometric average, and statistical average:

x_i = (1/N) \sum_{j=1}^{N} x_{ij},   x_i = ( \prod_{j=1}^{N} x_{ij} )^{1/N};

x'_{ik} = x_{ik} / \sum_{j=1}^{N} x_{ij},   x'_{ik} = x_{ik} / ( \sum_{j=1}^{N} x_{ij}^2 )^{1/2},   x'_{ik} = [ x_{ik} − (1/N) \sum_{j=1}^{N} x_{ij} ] / ( \sum_{j=1}^{N} x_{ij}^2 )^{1/2}.
Here N is the total number of the solution options’ assessments.

Quantitative characteristics, such as size, duration, speed, power, cost, and others are measured by numbers. As a rule, numerical attribute scales have different dimensionality (m, s, km/h, kW, Euro) and a various “range” from the minimal to the maximal magnitude. If the rating scales X_l of the numerical criteria K_l have different units of measurement, then estimates upon the criteria can be made dimensionless, for example, as follows:

x''_l = x_l / x_l^max,   x''_l = x_l / (x_l^max − x_l^min),   x''_l = (x_l − x_l^min) / (x_l^max − x_l^min),

where x_l^max and x_l^min are the maximum and minimum estimates upon the criterion K_l, which determine the “range” of the scale.

Qualitative characteristics, such as significance, safety, comfort, and the like, are described by words (verbally) using linguistic scales of attributes, the grades of which have their own semantic content. Often non-numerical criteria K_l have rating scales X_l, for instance, with the following five gradations of estimates:

x_l^1: excellent (very high, very big);
x_l^2: good (high, big);
x_l^3: satisfactory (middle);
x_l^4: bad (low, small);
x_l^5: very bad (very low, very small).

One also uses rating scales with four gradations (excellent, good, satisfactory, bad) or with seven gradations (superior, excellent, good, satisfactory, middle, bad, very bad). Quite often, verbal scales are “digitized” by assigning the corresponding numerical values to ordinal gradations: either integer, for example, 1, 3, 5, 7, 9 or 5, 4, 3, 2, 1, or fractional, ranging from 0 to 1 or from 0 to 100. However, such a transformation of non-numerical data into a numerical form can significantly distort the preferences of the decision maker.

Sometimes it is more convenient to convert a continuous scale into a discrete one, for example, a point scale, dividing the set of marks into several subsets and setting the value x_l^min equal to 0 or 1 point and the value x_l^max equal to 10 or 100 points. The resulting scale will be an interval scale. More rarely, the continuous scale is replaced by a discrete scale, associating x_l^min with the worst mark on the ordinal scale of the criterion K_l and x_l^max with the best mark.
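
The dimensionless transformations given above are straightforward to apply; a minimal sketch with hypothetical numerical estimates:

```python
def normalize(values, mode="minmax"):
    """Make estimates on a numerical criterion scale dimensionless."""
    x_min, x_max = min(values), max(values)
    if mode == "by_max":        # x'' = x / x_max
        return [x / x_max for x in values]
    if mode == "by_range":      # x'' = x / (x_max - x_min)
        return [x / (x_max - x_min) for x in values]
    if mode == "minmax":        # x'' = (x - x_min) / (x_max - x_min)
        return [(x - x_min) / (x_max - x_min) for x in values]
    raise ValueError(f"unknown mode: {mode}")


# Engine power of four hypothetical trucks, in kW.
power = [150.0, 180.0, 210.0, 240.0]
print(normalize(power, "minmax"))   # [0.0, 0.333..., 0.666..., 1.0]
print(normalize(power, "by_max"))   # [0.625, 0.75, 0.875, 1.0]
```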


1.6 Comparison of Options

Comparison of options is carried out in a choice situation when it is not possible to exhaustively describe all the features of each individual option. Such a way of expressing DM preferences is based on specifying, respectively, binary relations of equality, non-strict order and strict order on the set X^a of admissible values of attributes or on the set Y^a of accessible goals.

Comparison of options on the whole is equivalent to their comparison according to some feature expressed with a single criterion K, which can be quantitative or qualitative. The options O_i and O_j are considered equivalent for a decision maker if their estimates x_i = K(O_i) and x_j = K(O_j) are the same on the scale X of the criterion K:

O_i ≈ O_j ⇔ x_i =_X x_j,

or if the values y_i = f(x_i) and y_j = f(x_j) of the effectiveness indicator are equal on the set Y^a = f(X^a) ⊆ Y = R of accessible goals:

O_i ≈ O_j ⇔ f(x_i) =_Y f(x_j).

The option O_i is considered preferable to the option O_j if the estimate of the option O_i upon the criterion K is not worse than or better than the estimate of the option O_j:

O_i ≽ O_j ⇔ x_i ≽_X x_j,    O_i > O_j ⇔ x_i >_X x_j,

or if the value of the effectiveness indicator for the option O_i is not less than or greater than the value for the option O_j:

O_i ≽ O_j ⇔ f(x_i) ≥_Y f(x_j),    O_i > O_j ⇔ f(x_i) >_Y f(x_j).

Here x_i and x_j can be either scalar or vector variables. It is assumed that the gradations on the rating scale X of the criterion K and the values of the function f are ordered, for example, from the worst to the best. With another ordering of estimates on the criterion scale and values of the function, the option preference must be replaced by the opposite.

Comparison of options upon many features, which are described by many criteria K_1, …, K_n with rating scales X_1, …, X_n. The equivalence of the options O_i and O_j for a decision maker is determined by the equality of the corresponding vectors or tuples x_i = (x_i1, …, x_in) and x_j = (x_j1, …, x_jn) of estimates on the set X^a ⊆ X_1 × ··· × X_n of admissible values of attributes:

O_i ≈ O_j ⇔ (x_i1, …, x_in) =_X (x_j1, …, x_jn).

The equality x_i =_X x_j of vectors/tuples is satisfied when all components of the same name are equal: x_il = x_jl, x_il, x_jl ∈ X_l, l = 1, …, n.


The option preference for a decision maker according to many features can be given by various binary relations. The option O_i is considered preferable to the option O_j with respect to the dominance relation on the set X^a ⊆ X of admissible values of attributes if the estimate vector/tuple x_i = (x_i1, …, x_in) of the option O_i dominates the estimate vector/tuple x_j = (x_j1, …, x_jn) of the option O_j:

O_i ≽ O_j ⇔ (x_i1, …, x_in) ≥_X (x_j1, …, x_jn),

or the vector/tuple x_i = (x_i1, …, x_in) strictly dominates the vector/tuple x_j = (x_j1, …, x_jn):

O_i > O_j ⇔ (x_i1, …, x_in) >_X (x_j1, …, x_jn).

The first relation, which is also called the Pareto relation, holds if x_il ≥ x_jl for all l and x_ip > x_jp for at least one number p, x_il, x_jl, x_ip, x_jp ∈ X_l, l, p = 1, …, n. The second relation holds if x_il > x_jl for all x_il, x_jl ∈ X_l, l = 1, …, n. The set of all non-dominated options will be denoted by X^#. The options included in the set X^# are not comparable with each other by their properties. Obviously, X^# ⊆ X^a.

The option O_i is considered preferable to the option O_j with respect to the lexicographic order on the set X^a ⊆ X of admissible values of attributes:

O_i > O_j ⇔ (x_i1, …, x_in) ∠_X (x_j1, …, x_jn),

if the components of the vectors/tuples satisfy, for some l, the conditions: x_i1 ≺ x_j1; or x_i1 = x_j1, x_i2 ≺ x_j2; …; or x_i1 = x_j1, x_i2 = x_j2, …, x_{i,l−1} = x_{j,l−1}, x_il ≺ x_jl.

Comparison of options upon many goal functions or effectiveness indicators, which characterize the solution quality. The options O_i and O_j will be equivalent for a decision maker on the set Y^a = f(X^a) ⊆ Y_1 × ··· × Y_d of accessible goals if the goal vectors y_i = f(x_i) = (f_1(x_i), …, f_d(x_i)) and y_j = f(x_j) = (f_1(x_j), …, f_d(x_j)) are equal:

O_i ≈ O_j ⇔ (f_1(x_i), …, f_d(x_i)) =_Y (f_1(x_j), …, f_d(x_j)).

The equality y_i =_Y y_j of goal vectors is fulfilled when all components of the same name are equal: f_k(x_i) = f_k(x_j), f_k(x_i), f_k(x_j) ∈ Y_k = R, k = 1, …, d. The option O_i is considered preferable to the option O_j on the set Y^a of accessible goals if the goal vector y_i = f(x_i) = (f_1(x_i), …, f_d(x_i)) of the option O_i dominates the goal vector y_j = f(x_j) = (f_1(x_j), …, f_d(x_j)) of the option O_j:

O_i ≽ O_j ⇔ (f_1(x_i), …, f_d(x_i)) ≥_Y (f_1(x_j), …, f_d(x_j)),


or the vector y_i = f(x_i) strictly dominates the vector y_j = f(x_j):

O_i > O_j ⇔ (f_1(x_i), …, f_d(x_i)) >_Y (f_1(x_j), …, f_d(x_j)).
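
These dominance relations are easy to check mechanically. Below is a minimal sketch, assuming that a larger value is better on every goal function; the goal vectors are hypothetical.

```python
def weakly_dominates(y_i, y_j):
    """Pareto (weak) dominance: y_i >= y_j componentwise and strictly better somewhere."""
    return all(a >= b for a, b in zip(y_i, y_j)) and any(a > b for a, b in zip(y_i, y_j))


def strictly_dominates(y_i, y_j):
    """Strict dominance: y_i > y_j in every component."""
    return all(a > b for a, b in zip(y_i, y_j))


# Goal vectors (f1, f2) of three hypothetical options.
y = {"O1": (5.0, 7.0), "O2": (4.0, 7.0), "O3": (6.0, 3.0)}
print(weakly_dominates(y["O1"], y["O2"]))    # True:  O1 weakly dominates O2
print(strictly_dominates(y["O1"], y["O2"]))  # False: the options are equal on f2
print(weakly_dominates(y["O1"], y["O3"]))    # False: O1 and O3 are incomparable
```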

The option O_p is called the Edgeworth–Pareto optimal, Pareto optimal, or effective if there is no other option O_i whose goal vector y_i = f(x_i) = (f_1(x_i), …, f_d(x_i)) dominates the vector y_p = f(x_p) = (f_1(x_p), …, f_d(x_p)), that is, there is no option for which the condition f_k(x_i) ≥ f_k(x_p) is satisfied for all particular indicators and f_h(x_i) > f_h(x_p) for at least one indicator, k, h = 1, …, d. Respectively, the option O_s is called the Slater optimal or weakly effective if there is no other option O_i whose goal vector y_i = f(x_i) = (f_1(x_i), …, f_d(x_i)) strictly dominates the vector y_s = f(x_s) = (f_1(x_s), …, f_d(x_s)), that is, there is no option for which the condition f_k(x_i) > f_k(x_s) is satisfied for all particular indicators, k = 1, …, d. The set of Pareto optimal options is denoted by X^* ⊆ X^a, and the set of goal vectors for Pareto optimal options is denoted by Y^* = f(X^*) ⊆ Y^a, which is called the Pareto boundary of the achievability set. The set Y^* consists of non-dominated vectors that are not comparable. Note that the set X^* of effective options, generally speaking, does not coincide with the set X^# of options that are not dominated by their properties.

Pairwise comparison of options is often used to directly identify preferences of a decision maker. In this case, the strict superiority O_i > O_j, the weak superiority O_i ≽ O_j, the equivalence O_i ≈ O_j, or the equality O_i = O_j of options is evaluated. The results of comparisons form the square ‘Object–Object’ matrix B = ||b_ij||_{m×m}, the elements of which are given, for instance, by one of the following expressions:

b_ij = 2 if O_i > O_j, 1 if O_i ≈ O_j, 0 if O_i ≺ O_j;
b_ij = 1 if O_i > O_j, 0 if O_i ≈ O_j, −1 if O_i ≺ O_j;
b_ij = 1 if O_i ≽ O_j, 0 if O_i ≺ O_j.

Elements b_ij of the matrix B can take on other values. Preference of options for a decision maker is determined by the values of the row sums b_i = \sum_j b_{ij} of the matrix B elements, which characterize the quality of options. To identify preferences in the presence of many criteria and/or many decision makers, it is necessary to build separate matrices of pairwise comparisons for each of the criteria and for each of the decision makers.
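
To illustrate how such a matrix is assembled and used, here is a minimal sketch with hypothetical judgments of one decision maker, using the first of the scoring schemes above (2 / 1 / 0).

```python
options = ["O1", "O2", "O3"]
# Judgments for ordered pairs: '>' strict preference, '~' equivalence, '<' the opposite.
judgment = {("O1", "O2"): ">", ("O1", "O3"): "~", ("O2", "O3"): "<"}
score = {">": 2, "~": 1, "<": 0}


def b(i, j):
    """Element b_ij of the 'Object-Object' matrix B (diagonal set to 0)."""
    if i == j:
        return 0
    if (i, j) in judgment:
        return score[judgment[(i, j)]]
    # Reverse the stored judgment for the symmetric pair (j, i).
    return score[{">": "<", "<": ">", "~": "~"}[judgment[(j, i)]]]


B = [[b(i, j) for j in options] for i in options]
row_sums = {i: sum(B[k]) for k, i in enumerate(options)}     # b_i = sum_j b_ij
print(sorted(options, key=row_sums.get, reverse=True))       # e.g. ['O1', 'O3', 'O2']
```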

1.7 Choice of Options

Choice of options comes down to one of three typical tasks: (1) selection of one or more preferred options, (2) strict or non-strict ordering of options, and (3) distribution of options by classes. There are situations when a decision maker can directly indicate the options (objects, alternatives) that satisfy him/her, based on implicit internal sensations. In these cases, a DM makes his/her choice intuitively, not trying to explain the motives and reasons for such a choice. Usually it is not required to argue the choice made. In more complicated cases, choosing the desired option, a decision maker can be guided by a variety of strategies. Consider the peculiarities of formalizing DM preferences in the context of the typical choice problems.

Selection of preferable options is, in fact, a reduction of the initial collection of available or possible options, based on different ways of comparing them (as a rule, according to the characteristics of options). The decision rule looks like this: IF ⟨conditions⟩, THEN ⟨decision⟩. Here, the term ⟨conditions⟩ represents requirements that the selected options must satisfy, for example, the values of attributes or functions that describe the options, or the type of relationship between the options. The term ⟨decision⟩ contains names of the selected options.

If it is possible to specify a single indicator of the quality (effectiveness) of the solution, then the best choice for a decision maker is the option

O^* ⇔ x^* ∈ arg extr_{x∈X^a} f(x)

that has the attribute values x ∈ X^a at which the indicator y = f(x) reaches its extremum. The meaningful interpretation of the extremum y^* = f(x^*) depends on the context of the problem being solved. Such a choice is called optimal or extremizational, the chosen option is the optimal solution, and the quality indicator is called the optimality criterion. The rule for choosing the best options in terms of quality can be formally written as a decision rule, where the condition is the accessibility of the extremum.

In the presence of many different quality indicators f_1(x), …, f_d(x), the multicriteria optimization problem x^* ∈ arg extr_{x∈X^a} f_k(x), k = 1, …, d, arises, in which

different procedures are used to find the optimal solution. The main difficulties are connected with the coordination of the requirements of the functions’ extremality and the verification of the DM preferences for consistency. Ordering of options is the establishment of binary relations of the strict or nonstrict order, equivalence, or incomparability between options. Comparison of options by attributes is based on their characteristics. The final order is built either on the basis of the objective features of options, or on the basis of the subjective preferences of the decision maker, or on a combination of both. Ordering of options often comes down to their ranking, which is carried out according to the values r i of the option ranks. The ordering O1 > O2 > · · · > Om of the options corresponds to the ordering of their ranks r1 < r2 < · · · < rm . The resulting ranking of options can be strict and non-strict. In the latter case, the ranking contains equivalent options with equal, so called linked ranks, which are usually equal to their arithmetic mean value. The option rank can be defined in various ways, for example, by the formula


r_i = m + 1 − \sum_{j=1}^{m} r(O_i, O_j),

where r(O_i, O_j) = 1 if O_i > O_j or O_i ≈ O_j, and r(O_i, O_j) = 0 if O_i ≺ O_j. Great difficulties may arise in the construction of the final ordering of objects that have many features and should be considered and analyzed on the whole, for example, when several decision makers evaluate objects. With an increasing number of options, criteria, and experts evaluating options, the number of possible comparisons of options rises dramatically. Due to the limited human capabilities in processing information, ranking methods for multi-attribute objects are quite time-consuming.

Classification of options is the most difficult and complicated task of choice. The concept of “class” is defined as a collection of objects with common properties, which are described by attributes that have numerical, symbolic and/or verbal values. A class can be, for example, objects that have the desired combination of attributes; objects whose attributes lie within certain limits of values; objects closest in the attribute space. The objects included in the same class are considered indistinguishable (equivalent) in quality, and all classes together should form the original collection of objects. One applies direct classification, which consists in the enumeration of objects that form the class, and indirect classification, which is based on the enumeration of the properties that characterize the class.

Direct classification is carried out by directly assigning the object to one of the specified classes. The direct classification result is the distribution of all classified objects to the given classes, while the maximum possible number of classes is limited by the number of objects considered. If objects have many properties, the problem arises of finding such attribute values that are most characteristic of each class and allow us to distinguish between these classes.

Indirect classification consists of combining objects that have the required values of attributes or their combinations in the corresponding class. The theoretically possible number of classes is determined by the cardinality of the direct (Cartesian) product of the sets of attribute values. When the number of attributes and/or their values is large enough, the number of potential classes can significantly exceed the number of real objects. In this case, the main problem is to find which combinations of attributes and their values allow us to form the required number of classes, which should differ from each other in quality and contain a sufficient number of objects.

The procedure for classifying objects is defined by a set or sequence of decision rules, which are represented by the expression: IF ⟨conditions⟩, THEN ⟨class⟩. Here, the term ⟨conditions⟩ specifies requirements that the selected objects must satisfy; the term ⟨class⟩ denotes a name of the given or generated class to which the object should belong when the required conditions are fulfilled. For direct classification, the term ⟨conditions⟩ includes names of the selected objects. For indirect classification, one or more of the terms ⟨conditions⟩ are constructed as relationships between different attributes and/or their values that describe objects of the class.

When there is a single decision maker and a fairly small number of classified options and their characteristics, the family of decision rules is easily visible and available for analysis. The more options considered and the more various decision rules for classification, the more difficult the analysis of these rules. When classifying options, a DM can make inaccuracies and mistakes, and his/her assessments can be non-transitive. Therefore, special procedures should be provided for identifying and eliminating contradictions in the judgments of a single decision maker. When several decision makers classify options, an inconsistency of individual decision rules is possible due to the ambiguity of understanding of the problem being solved by different people, the subjective difference in the preferences and knowledge of the decision makers themselves, and many other reasons. As a result, individual decision rules may appear, among which there will be the same, similar, different and contradictory rules. All these peculiarities should not be eliminated, but taken into account when constructing a generalized decision rule for the option classification.

Classification methods can be divided into the following categories: classification without a teacher or clustering, classification with a teacher; nominal and ordinal classifications. In clustering methods, objects are combined into groups (clusters) based on the degree of their proximity, which is formally determined by the distance between objects in the attribute space. The number of clusters formed can be arbitrary or fixed. In the classification methods with a teacher, it is required to find a general rule for including an object in one of the given classes. This rule is based on previously obtained information about the class membership of some objects. The methods of ordinal and nominal classification are distinguished by the presence or absence of ordering of classes according to some property or quality. In these methods, it is required to find the values of object attributes or their combinations that are the most characteristic for each class.

One of the relatively simple ways of direct ordinal classification of objects is to sort them by named and/or ordered classes. Sorting of objects is, in fact, a non-strict ranking that consists of a small number of ordered groups (classes) combining equivalent objects. Collective sorting of objects is done by aggregating individual judgments of a group of persons. Here situations are also possible in which special requirements are imposed on the consistency of opinions of individual participants. If the consistency of estimates is considered to be acceptable, then the final distribution of objects by classes can be constructed, for example, by averaging individual estimates. If the opinions of individual participants vary significantly, other methods should be used that take into account these differences.

Group sorting of objects described by many qualitative attributes is one of the most difficult classification tasks. Here, it is required to find one or several fairly simple generalized rules for group classification that agree most closely with the individual rules of expert classification of objects, allow us to assign objects to the given classes without rejecting the possible inconsistency and even contradiction of individual estimates of objects, and identify contradictorily classified objects. Difficulties are associated mainly with the need to process a large amount of symbolic and/or verbal data, the convolution of which is either impossible or mathematically incorrect.

Note that when it is possible to arrange all the considered objects or to divide them into ordered classes in some way, the best objects will take first place in the final ordering of objects or belong to the most preferred class. Thus, having solved the second or third typical task of choice, we always obtain a solution to the first typical task of choice. Depending on the context of the task, this can be done in various ways.

The most widespread and practically important are the tasks of individual multicriteria choice, in which the search for the final solution is based on the preferences and knowledge of a single individual person (decision maker, expert). In modern decision theory, the main attention is paid to just such tasks. Nowadays, tasks of group choice are becoming more and more practically relevant. The final solution to such tasks is based on aggregation and, as a rule, coordination of the various judgments of several persons. At the same time, more and more often there is a need to solve problems in which it is impossible or extremely difficult to reconcile the opinions of different participants, for example, when several experts evaluate options upon many criteria independently from each other, not knowing the estimates of other participants. Therefore, we need new methods for obtaining and processing heterogeneous information that take into account all assessments, including conflicting ones, of all members of a decision making group, and do not require a compromise between the opinions of individual participants.
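
As a purely illustrative sketch of group sorting (not one of the methods described in this book), the following fragment collects independent class assignments of several hypothetical experts and flags contradictorily classified objects instead of forcing a compromise.

```python
from collections import Counter

assignments = {                     # expert -> {object: assigned class, 1 = best}
    "expert1": {"A": 1, "B": 2, "C": 3},
    "expert2": {"A": 1, "B": 3, "C": 3},
    "expert3": {"A": 2, "B": 2, "C": 3},
}

objects = sorted({o for by_object in assignments.values() for o in by_object})
for obj in objects:
    votes = Counter(expert[obj] for expert in assignments.values())
    majority_class, support = votes.most_common(1)[0]
    status = "unanimous" if len(votes) == 1 else f"contradictory: {dict(votes)}"
    print(f"{obj} -> class {majority_class} ({support} of {len(assignments)} votes, {status})")
```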

References

1. Larichev, O.I.: Nauka i iskusstvo prinyatiya resheniy (Science and Art of Decision Making). Nauka, Moscow (1979) (in Russian)
2. Petrovsky, A.B.: Teoriya prinyatiya resheniy (Theory of Decision Making). Publishing Center “Academy”, Moscow (2009) (in Russian)
3. Simon, H., Newell, A.: Heuristic problem solving: the next advance in operations research. Oper. Res. 6(1), 1–10 (1958)
4. Roubens, M., Vincke, Ph.: Preference Modelling. Springer, Berlin (1985)

Chapter 2

Individual and Collective Decisions

This chapter describes the features of individual and collective decisions. We consider the main groups of methods for optimal and rational individual choice, such as multicriteria optimization, approximation of the Pareto boundary, heuristic methods, methods of utility theory, choice functions, analytical hierarchy, outranking relation, computing with words, verbal decision analysis. We specify procedures and models for aggregating individual preferences. We also present methods for collective choice, among them voting procedures, methods of group multicriteria decision making.

2.1 Rationality and Optimality

The postulate of rationality for the individual choice of a person serves as one of the cornerstones of modern decision theory. DM preferences and/or expert knowledge are key factors in rational choice. It is believed that each person should have his/her own (real or imagined, explicit or implicit) idea of what is preferable for him/her in a specific choice situation, his/her own “tool for measuring value” of the compared options. And a decision maker, making his/her choice, intuitively or consciously seeks to obtain his/her own, most profitable final result.

In practice, there are quite a lot of choice problems where we need to find several options that are the most preferable for a person, and often the only best option. For some of such problems, it is possible to construct a mathematical model of choice, where the properties of options are measured by one or many numerical criteria of the solution quality or effectiveness indicators. Although these criteria are specified by a decision maker/expert, they are usually objective in nature, are determined by the content of the problem being solved and are represented by some functions depending on many variables. Then the so-called optimal option, which corresponds to the extreme values of the criteria under the existing conditions, is considered to be the most preferable solution to the choice problem. Thus, the concept of rational choice includes the concept of optimality.


The choice rationality also implies its subjectivity. A person compares options and chooses the most profitable option based on his/her individual preferences. Therefore, the decision option, the best or acceptable for one person, may not be the same for the other. Which option is considered as the best and how to find it, largely depends on a meaningful interpretation of the evaluation criteria. And this, in turn, is determined by the individual’s own interests and can be indicated by him/her only. So, even the optimal (extremal) choice is based, to some extent, on the subjective preferences of a person. At the same time, when solving a problem, any person can express his/her preferences inconsistently, make mistakes in assessments and conclusions, and admit contradictions. These are due to various reasons, in particular the difficulty of the situation being analyzed, lack and/or inaccuracy of available information, deficiency of time, insufficient experience of a person, limited knowledge, inadvertent delusion, fatigue and inattention, a person tendency to overestimate or underestimate his/her evaluations, be risk or careful in judgments, and the like. When making individual decisions, the consistency of subjective preferences or knowledge of a single decision maker/expert is postulated. In individual choice methods, special procedures are used to identify and eliminate possible inconsistencies in an actor’s judgments. On the contrary, when making a group choice, it is necessary to take into account the various, including inconsistent and contradictory, interests of several decision makers/experts and not require a compromise when aggregating individual preferences. In decision theory, an analysis of possible solutions to the problem (ways to achieve the goals) and a choice one or more preferred options are carried out on the basis of a formal choice model. When constructing such a model, one needs to commensurate (and this is already art!) the adequacy and minuteness of a model with the accuracy of the required solution to the real choice problem, with the amount of information needed to find a solution, both available and obtained additionally, as well as with the features of the methods, used to solve the problem. We distinguish models of individual and collective, optimal and rational choice [1]. Model of individual optimal choice assumes a possibility of a formalized description of the problem situation and subjective preferences of a decision maker into a quantitative form. This means that options (objects, alternatives) should be described by numerical attributes, and the quality of choice should be evaluated by values of numerical functions (optimality criteria, effectiveness indicators, objective functions). It is generally accepted that the most preferred or best options are optimal options that correspond to the extremal values of functions. Specifying a list of criteria, their number, types of functional dependencies can greatly facilitate or, conversely, make it difficult finding the best option. The preferred option is not often unique. There are several different, but essentially equivalent decisions, for example, Pareto-optimal options that are incomparable with each other. To select the only “best” option, additional information on DM preferences/expert knowledge is needed. Using the information received from a decision maker/expert on various aspects of the compared options allows us to reduce the domain of feasible options or the domain


of achievable goals and, in some cases, find the best solutions to the problem. To do this, first, a subset of acceptable options is allocated from the given set of options, and then this subset is reduced based on DM preferences. Model of individual rational choice, reflecting subjective preferences and knowledge of a particular decision maker/expert, can be either informal or formalized, presented into a mathematical, logical or verbal form. An actor expresses his/her preferences/knowledge, indicating the characteristics of the analyzed problem and properties of the objects under consideration, comparing solution options, evaluating the quality of the choice made. Preferences are defined by binary relationships, functions, decision rules, restrictions, requirements, procedures for comparison of options and selection of the best ones. One of the most widespread models of rational choice is a functional model, in which the preferences of a decision maker/expert knowledge are represented by a monotonic real function of value (in case of certainty), utility (in case of probabilistic uncertainty), membership (in case of fuzzy uncertainty) given on the set of feasible options. The type of function is introduced heuristically or is determined by some system of axioms. The preferred option is the one whose value/utility/membership is higher. J. von Neumann and O. Morgenstern in their theory of economic behavior [2] introduced the axioms of rational behavior of a decision maker, which defines a numerical function that is associated with the measure of consumer goods for a person. P. Fishburn proved that if a strict weak order (asymmetric and negatively transitive relation) is given on a finite set of options, then there exists an additive real function that expresses the multidimensional value or utility of options [3]. A similar statement for an infinite number of options was proved by G. Debreu [4]. Obviously, such a function is not unique. Later it became clear that it is not always possible to build a functional model of rational choice, which quantitatively measures the option quality. In practice and in numerous experiments, it was found that behavior of people and the decisions made do not always correspond to any axioms of rationality. A well-known example is the paradox of M. Allais, according to which a person, when comparing different lotteries, often prefers more reliable options that give greater gains and/or exclude any loss, although the subjective expected utility of these options is less than the maximum possible. Therefore, other models of individual rational choice were developed that are not based on the functions of value or utility. B. Roy proposed the outranking approach to select multi-attribute options, which are compared according to the “outranking relation” [5, 6]. It is assumed that preferences of a decision maker do not change with small differences in the numerical estimates of the compared options. For each criterion, special numerical indicators— the threshold values of indifference, superiority, veto—are given that define areas of uncertainty in DM preferences, within which the nature of relationship between the options is preserved. The consistency of DM preferences is checked, namely a fulfillment of the outranking relation by a “sufficient majority” of criteria (the concordance principle) and non-fulfillment of this relation by a “insignificant minority” of


criteria (the discordance principle). A sensitivity of the results obtained to changing the thresholds of preference consistency is also analyzed. In the verbal decision analysis developed by O. I. Larichev, the rationality of an individual actor is interpreted as the transitivity and consistency of his/her judgments [7–11]. DM preferences and/or expert knowledge are identified by evaluating and comparing real or hypothetically possible options that have multicriteria verbal descriptions. The information received from a decision maker/expert is checked many times for consistency. Identified errors and contradictions are presented to an actor for analysis and elimination. Thus, using only qualitative measurements, transitive relations of subjective superiority and equivalence of options are to be given on the set of tuples of verbal assessments, with the help of which one can select the best option, order and classify options, and explain the results.
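
The consistency requirement of verbal decision analysis can be illustrated with a small sketch (not a procedure from the book) that checks pairwise judgments of a decision maker for transitivity and reports the violating triples; the judgments below are hypothetical.

```python
from itertools import permutations

# '>' subjective superiority, '=' subjective equivalence, stored for ordered pairs.
judgments = {("a", "b"): ">", ("b", "c"): ">", ("c", "a"): ">"}   # an intransitive cycle


def rel(x, y):
    """Return the judgment between x and y, reversing the stored pair if necessary."""
    if (x, y) in judgments:
        return judgments[(x, y)]
    if (y, x) in judgments:
        return {">": "<", "<": ">", "=": "="}[judgments[(y, x)]]
    return None


items = {i for pair in judgments for i in pair}
violations = []
for x, y, z in permutations(items, 3):
    # If x is at least as good as y, and y at least as good as z, then x must not be worse than z.
    if rel(x, y) in (">", "=") and rel(y, z) in (">", "=") and rel(x, z) == "<":
        violations.append((x, y, z))

print(violations)   # non-empty: these triples are returned to the DM for analysis and correction
```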

2.2 Individual Optimal Choice

Task of individual optimal choice, where it is necessary to find one or more of the most preferred options, is characteristic of well-structured problems and is formulated as follows. There is a collection O_1, …, O_m of possible options, the number of which can be both finite and infinite. Features of each option O_i are described by a scalar attribute x_i or an n-dimensional vector x_i = (x_i1, …, x_in) of attributes, which have continuous or discrete numerical scales X_l, l = 1, …, n. One or several optimality criteria, effectiveness indicators, goal functions y_1, …, y_d are given. They are real functions y_k = f_k(x_i, δ, ξ, μ, ζ) ∈ R, k = 1, …, d of many variables, where δ are deterministic, ξ stochastic, μ vague, ζ uncertain factors. Constraints are specified as equalities or inequalities g_q(x_i, δ, ξ, μ, ζ) ≤ b_q, q = 1, …, p, where g_q(x_i, δ, ξ, μ, ζ) ∈ R are real functions of many variables. In the space X = X_1 × ··· × X_n, the constraints define the set X^a ⊆ X of feasible options, which, in the criteria space Y = Y_1 × ··· × Y_d = R^d, corresponds to the set Y^a = f(X^a) ⊆ Y of achievable goals.

It is required to find the attribute vector x^* = (x_1^*, …, x_n^*) that provides extremal (for instance, maximal) values of the goal functions y_k = f_k(x_i, δ, ξ, μ, ζ) → max_{x∈X^a}, k = 1, …, d, on the set X^a of feasible options and satisfies the specified constraints g_q(x_i, δ, ξ, μ, ζ) ≤ b_q, q = 1, …, p. The vector x^* = (x_1^*, …, x_n^*) that represents the option O^* is called the optimal solution to the choice problem [1, 12, 13].

We distinguish single-criterion and multicriteria tasks by the number of goal functions. We specify static and dynamic tasks by the presence or absence of the dependence of the solution on time. In such cases, we speak about a one-stage or multi-stage optimal choice. The factors δ, ξ, μ, ζ determine the conditions of choice, which reflect the awareness of a decision maker/expert: certainty (there is complete and exhaustive information about the environment state with known characteristics of deterministic factors);


probabilistic uncertainty and risk (there is incomplete information about the environment state with known characteristics of random factors); fuzzy uncertainty (there is incomplete information about the environment state with known characteristics of vague factors); full or partial uncertainty (there is insufficient and/or untruthful information about some characteristics of the environment state). Complexity of solving the optimal choice problem largely depends on specificity of sets, constraints and functions included in a mathematical model of the problem situation. The search for the optimal solution is facilitated with continuous functions on the bounded, finite, and convex sets; it is difficult on the non-convex sets. Methods of individual optimal choice are numerous and manifold. Under conditions of certainty, the optimal solution x* is sought by methods of the calculus of variations and mathematical analysis that provide the search for the extremum of objective functions. The optimality criteria yk = f k (x) and the constraints gq = gq (x) are assumed to be differentiable functions of many variables x 1 , …, x n . The Lagrange multiplier method is widely known for finding the extremum of the scalar function y = f (x 1 , …, x n ) under the constraints gq (x 1 , …, x n ) = bq . With a large number of variables and constraints, the search for the optimal solution is carried out by mathematical programming methods. The main methods include linear, quadratic, convex, discrete programming methods and others. For many criteria, the goal functions usually reach their extremal values at different points of the set of feasible options X a . This complicates the optimal choice. Therefore, quite often, the set of effective (the Edgeworth–Pareto optimal) options or the set of weakly effective (the Slater optimal) options are taken as a solution to the multicriteria optimization problem. To reduce an uncertainty associated with the multicriteria choice, and to find the ) ( optimal option x ∗ = x1∗ , . . . , xn∗ , additional information on DM preferences that includes the exclusion and compensation procedures is used. Exclusion procedures consist in reducing the set of feasible options and/or the set of achievable goals, taking into account some additional requirements. Compensation procedures are based on the principle of fair concession or compromise, in which a decrease in the decision quality (loss) by some particular criterion should be compensated by an increase in the decision quality (gain) by another particular criterion. For example, when choosing a truck, we can establish the comparative importance of particular quality criteria (price, carrying capacity, engine power, and so on), according to which a number of trucks considered is gradually reduced. Or we can compensate a high operation cost by a large total mileage without repairs. A variety of approaches are used to search for the Pareto boundary Y ∗ = f (X ∗ ) and the effective (the Pareto-optimal) options X ∗ that form the boundary. The optimization methods are quite popular, in which a multicriteria task is reduced to a single-criterion task as follows: • specification of a single general (global) optimality criterion f (x) = F(f 1 (x), f 2 (x), …, f d (x)) that is the union or so-called convolution of all particular (local) criteria f j (x) with or without taking into account an importance (weight) of each particular criterion, which is quite often defined from not always justified considerations;


• specification of a single optimality criterion that provides a guaranteed result, for example, the Laplace criterion of equivalence, criterion of optimism, criterion of caution, the Wald criterion of pessimism, the Savage criterion of risk, the Hurwitz criterion of weighted optimism-pessimism, or another; • specification of a single optimality criterion that allows us to find options, which are closest in some metric to the selected so-called reference point y0 = (y1 0 , …, yd 0 ) with values yk 0 of particular criteria desirable for a decision maker; • selection of one of the criteria as the principal criterion with additional restrictions for other criteria, tradeoff between criteria, for example, specifying the equality or weighted equality of particular criteria; • consequent solution of several single-criterion optimization tasks for particular criteria ordered by importance on the gradually reduced set of feasible options. To approximate construction of the Pareto boundary Y* under certainty, various iterative methods have been developed [1, 7, 14–17]. We mention methods of parametric programming with several objective functions (S. Gass, T. Saaty); method of interactive multiple criteria optimization using satisfactory goals (R. Benson); methods of sequential optimization with cut-off thresholds (V. Mikhalevich, V. Volkovich); STEP method (STEM) of linear programming with multiple objective functions (R. Benayoun, J. Montgolfier, J. Tergny, O. Larichev); methods of goal programming and multiple objective optimization (A. Charnes, V. Cooper; M. Salukvadze; V. Noghin); method of multiple goal optimization with efficiency maximization (V. Khomenyuk); method for successive achievement of the reference point (A. Wierzbicki); Pareto Step methods for interactive visualization of achievable goals (A. Lotov). Under conditions of probabilistic uncertainty for the given distribution functions of the random factor values, optimal solutions are sought using axiomatic and statistical methods [1, 14]. To obtain a guaranteed result, a single optimality criterion is specified, for example, the Bayes–Laplace criterion, the Germeier criterion of pessimism, criterion of minimum of standard deviation, criterion of minimum entropy. Various types of decision tree construction method are very popular, in which we find the optimal result, moving step by step from the final vertices of the tree to the initial root of the tree. Under conditions of fuzzy uncertainty in a presence of factors, which are caused by vagueness of representations, judgments, estimates in a natural language, tasks of fuzzy optimal choice arise, where variables, functions, constraints, relations can be linguistic quantifiers. Such tasks are solved by methods of fuzzy mathematical programming with a crisp and fuzzy goal functions and restrictions, methods of fuzzy multicriteria optimization and fuzzy optimal control [18–22]. Under conditions of complete or partial uncertainty in a presence of unknown factors, heuristic methods, methods and algorithms of game theory and adaptive control are used [2, 23]. In optimal choice tasks, a result the most preferable for a decision maker/expert is identified with the optimal option or options. At the same time, however, we must


remember that such options should always be considered only as some recommendations to a person, and not as final and unconditional “scientifically founded” results of solving the problem.
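
Two of the ways of reducing a multicriteria task to a single-criterion one mentioned above, the weighted convolution of criteria and the distance to a reference point, can be sketched as follows; the data and weights are hypothetical and already normalized, and both criteria are assumed to be maximized.

```python
options = {"O1": (0.8, 0.3), "O2": (0.5, 0.6), "O3": (0.2, 0.9)}  # (f1, f2) values
weights = (0.6, 0.4)
reference = (1.0, 1.0)          # desirable values of the particular criteria


def weighted_sum(y):
    """Scalarize a goal vector with a weighted-sum convolution."""
    return sum(w * v for w, v in zip(weights, y))


def distance_to_reference(y):
    """Scalarize a goal vector by its Euclidean distance to the reference point."""
    return sum((r - v) ** 2 for r, v in zip(reference, y)) ** 0.5


best_by_sum = max(options, key=lambda o: weighted_sum(options[o]))
best_by_ref = min(options, key=lambda o: distance_to_reference(options[o]))
print(best_by_sum, best_by_ref)   # here 'O1' and 'O2': the scalarizations disagree
```

That the two scalarizations may recommend different options is exactly why such results should be treated as recommendations to the decision maker rather than final answers.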

2.3 Individual Rational Choice

Task of individual rational choice usually arises in ill-structured situations and is formulated as follows. There are real or hypothetically possible options O_1, …, O_m, which are given initially or may appear while solving the problem. The properties of each option are evaluated by many criteria K_1, …, K_n that have numerical or verbal, continuous or discrete rating scales X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n. Ordinal scales are usually assumed to be ordered, for example, from the best gradations of ratings to the worst. The option O_i is represented either by an n-dimensional vector or a tuple x_i = (x_i1, …, x_in) of estimates, where x_il = K_l(O_i) is an estimate of the option O_i upon the criterion K_l, or by one f(x_i) or d goal functions f_k(x_il), k = 1, …, d that are functions of value (in case of certainty) or utility (in case of probabilistic uncertainty). The set of options forms the set X^a ⊆ X_1 × ··· × X_n of feasible options, or the set Y^a = f(X^a) ⊆ Y_1 × ··· × Y_d = R^d of achievable goals. Taking into account DM preferences/expert knowledge, it is required: (1) to select one or more of the best options; (2) to order all options, for instance, from the best to the worst; (3) to assign each option to one of the previously specified or generated decision classes [1, 24, 25].

In tasks of rational choice, we use numerical and verbal, objective and subjective, deterministic, stochastic, and vague information to describe the problem situation, solution options, criteria for their evaluation, and DM preferences/expert knowledge. According to the number of criteria, we distinguish one-criterion and multicriteria tasks. Practical methods of rational choice should take into account the characteristics of humans in processing information and satisfy the following requirements:

• maximum closeness of means to the natural language and professional subject area, which are used to describe the problem situation, express preferences of decision makers, knowledge of experts;
• mathematical correctness of a method, the use of only such mathematical and logical procedures for processing information that are valid for the relevant quantitative and qualitative variables;
• psychological validity of a method, its correspondence to the capabilities and characteristics of humans to process information, in particular, verification of subjective information received from a person for consistency, search for and elimination of contradictions in the case of individual choice, taking into account and using inconsistent individual judgments in the case of collective choice;


• transparency of method for a decision maker/expert, ability to control all stages of the problem solving process, to receive explanations of the final and intermediate results.

We distinguish the following groups of methods of individual rational choice depending on the type of information, methods for converting and processing data [1, 9, 26]:

• quantitative indicators are specified, options are compared by numerical estimates of significance (heuristic and axiomatic methods for assessing multidimensional value or utility of decisions);
• qualitative indicators are specified, which are transferred into numerical estimates of the option significance (methods of analytical hierarchy, fuzzy choice);
• quantitative indicators are specified, options are compared without calculating numerical significance (methods of outranking relation, choice functions);
• qualitative indicators are specified, options are compared without calculating numerical significance (methods of verbal decision analysis).

Ordering of options and selection of the best ones are carried out using methods based on the functional model of DM preferences, in which an effectiveness or quality of solution is characterized by the goal real function that depends on many particular functions. In heuristic methods, the total value v(O_i) or the total utility u(O_i) of the option O_i is defined as the additive convolution of partial functions into the form of a sum:

v(O_i) = \sum_{l=1}^{n} w_l v_l(O_i);    (2.1)

multiplicative convolution into the form of a product:

v(O_i) = \prod_{l=1}^{n} [a_l v_l(O_i)]^{w_l};

additive-multiplicative convolution into the form of a polylinear function:

v(O_i) = \sum_{l=1}^{n} b_l v_l(O_i) + \sum_{l=1}^{n} \sum_{f>l} b_{lf} v_l(O_i) v_f(O_i) + ··· + b_{1…n} v_1(O_i) v_2(O_i) … v_n(O_i).    (2.2)

Here wl > 0 is a weight of the l-th criterion characterizing its importance or significance for a decision maker, al > 0, bl , bl... f are the scaling coefficients. The numerical values wl , al , bl , bl... f can be assigned directly by a decision maker or calculated by some procedure based on the estimates of a decision maker [1]. The form of each particular function vl (Oi ) or u l (Oi ) is either directly determined by a DM himself


or defined by an expert, or is calculated in any way, using information received from a decision maker/expert. In additive convolution, it is∑ usually assumed that weights of particular criteria are normalized by the condition l wl = 1. A typical representative of heuristic methods is the popular simple multi-attribute rating technique (SMART, W. Edwards) and its analogues [14, 27]. In these methods, the total value of an option is calculated as a weighted sum (2.1) of particular values. In fact, heuristic methods are counterparts of multicriteria optimization methods. Axiomatic methods are based on some formalization of rational behavior of a person in the choice situation, that takes into account uncertainty of “nature”, “human”, “goals”. When ordering or selecting of options, a rationally acting person seeks to maximize the overall utility, with which he/she evaluates the obtained consumer goods. The utility function u(Oi ) is defined by a certain system of axioms, a validity of which is established on the basis of information about preference of the compared options received from a decision maker. The most famous are the expected utility theory (J. von Neumann, O. Morgenstern) [2], multi-attribute utility theory (MAUT, R. Keeney, H. Raiffa, P. Fishburn) [3, 28, 29], multi-attribute value theory (MAVT) [30, 31], prospect theory (D. Kahneman, A. Tversky) [32, 33]. Note that procedures that must be applied to verify an implementation of the axioms of rational choice, to evaluate values of utility functions and probabilities of options, are quite time-consuming for a person. The generalized criterion used to determine the best option is often given by convolution of many particular numerical criteria into the form of a weighted sum (2.1). However, various problems can arise when evaluating objects upon numerical criteria and appointing criteria weights. So, the use of a weighted sum of criterial assessments is correct only if criteria are pairwise independent by preference. Appointment by a decision maker/expert of the initial indicator weights is a subjective procedure that does not have strict justifications. Construction of a generalized value/utility function with a large number of particular criteria is associated with noticeable labor costs of a DM. It is shown that multidimensional utility methods are sensitive to measurement accuracy. The application of methods, that use a weighted convolution of criteria to solve multicriteria tasks, does not allow explaining the results, since it is impossible to restore the original data from aggregated indicators. In theory of choice functions, DM preference are represented by some function that reduces the set of initial options to the set of selectable ones. The set reduction is performed according to given rules, taking into account the available information about properties of options [34, 35]. Choice functions are a fairly universal tool that makes it possible to solve complex substantive problems of choice using models that combine different methods and mechanisms, including binary relations and multicriteria estimates. Possibility of constructing a variety of choice mechanisms allows to describe both classically rational and other choice models, which include, in particular, non-transitivity of preferences and failure of choice. The sequential reduction of the Pareto set of optimal decisions proposed by V. Noghin is provided when certain axioms of rationality and consistency of DM preferences hold. 
Using additional information obtained from a decision maker,


one compensates the original particular criteria in accordance with their relative significances and constructs new criteria of optimality [36, 37]. Methods based on pairwise comparisons of objects are popular for ordering of options in general or upon many criteria. Options are completely ordered if all pairs of options are comparable and preferences of a single decision maker are transitive. If some options are incomparable and/or there are several decision makers with different preferences, then ordering will be partial. If all the options are incomparable, then we cannot arrange them. The advantage of pairwise comparisons is the relative simplicity of obtaining information from a person about compared objects and its processing. But this simplicity also creates serious shortcomings of such methods that decrease their expressive capabilities. So, comparing options, sometimes we use the numerical estimates, which replace the verbal estimates. However, as psychological studies show, it is difficult for a person to compare numerically options qualitatively different; he/she often makes mistakes. Specification of any numerical scale for estimating elements of the pairwise comparisons’ matrix, which represent DM judgments, usually does not justify, but affects the final result. In addition, with a large number of compared options and criteria, a possibility of inconsistency and non-transitivity of judgments of even a single decision maker is high. Hierarchical methods allow us to order the final collection of options, evaluated upon many quantitative and qualitative criteria, and find the best option Oi , which has the greatest value of a total numerical function. Methods include multilevel “top–down” decomposition of the choice problem into goals, criteria, options; assessment of the comparative preference of hierarchical structure elements with respect to the overlying level on the basis of a unified scale; calculation of the option value (priority) by pairwise comparison of structure elements and “down–up” aggregation of partial assessments of options, criteria, participants, starting from the lowest level and ending with the highest level; if necessary, assessment of consistency of DM preferences/expert knowledge. Options are ranked by a generalized indicator of priority. Analytic hierarchy methods such as analytic hierarchy process (AHP, T. Saaty) [38], multiplicative analytic hierarchy process (MAHP, F. Lootsma) [39] and their numerous modifications have wide practical applications. In addition to shortcomings common to methods of pairwise comparisons, the fundamental drawback of analytic hierarchy methods is the uniform comparison of different elements of hierarchical structure with a unified scale, where numbers are assigned to verbal estimates. Methods are also very sensitive to the choice context. Using additive or multiplicative convolutions of individual assessments of elements upon many criteria, an addition or exclusion of options can significantly change the final ordering of options. The approach that is based on the paradigm of computing not with numbers, but with words, proposed by L. Zadeh [40–42], covers the relational and production models of rational choice. Ranked or classified options are described by qualitative indicators with linguistic scales in natural language. Within the paradigm of “calculus with words”, various methods of individual and group decision making have been

The approach based on the paradigm of computing not with numbers but with words, proposed by L. Zadeh [40–42], covers the relational and production models of rational choice. The ranked or classified options are described by qualitative indicators with linguistic scales in natural language. Within the paradigm of "computing with words", various methods of individual and group decision making have been developed (J. Kacprzyk, M. Fedrizzi, S. Zadrozny, and others) [43–45], in which preferences and choice rules are specified with linguistic quantifiers (most, much more, almost all, …) and are usually processed by tools of fuzzy logic. Linguistically quantified propositions provide a more flexible expression of DM judgments and expert opinions than conventional methods do. At the same time, the arbitrary digitization of linguistic scales and the subsequent defuzzification of the calculation results can affect the final solution of the choice problem.
The outranking approach proposed by B. Roy implements the relational model of rational choice and allows us to order or sort the really available options, which are evaluated upon many quantitative criteria that have point rating scales and different weights given by a decision maker [5, 6]. Value functions are not built; options are compared in pairs by the outranking relation, which is determined using special indexes of concordance (agreement) and discordance (disagreement). The values of these indexes are used to rank and classify options. A large family of outranking methods named élimination et choix traduisant la réalité (ELECTRE, B. Roy, D. Bouyssou, Ph. Vincke, and others) has been developed [5, 6, 25, 31]. DM preferences are not given a priori but are formed and refined at the stages of analyzing and solving the problem. A decision maker/expert has an opportunity to participate and intervene in the process of solving the problem and to actively influence the result. However, the use of non-transitive and/or incomplete binary relations complicates the ordering of options. In some cases, cycles may occur in the rankings, which requires additional procedures to eliminate them.
Direct sorting of objects into given classes is one of the most popular classification techniques due to its simplicity. Each object evaluated by a single numerical criterion immediately falls into one of the specified classes. Methods based on a weighted convolution of criteria (2.1) are very common. In the interactive classification procedure (M. Köksalan, C. Ulu) [46], the preferences of a single decision maker are described by a linear utility function, which is a weighted sum of many scalar criteria. In the tool for ordinal multi-attribute sorting and ordering (TOMASO, M. Roubens) [47], options are arranged by the calculated values of the Choquet integral, which aggregates families of discriminant functions. The method for classifying multicriteria options using rough sets (S. Greco, B. Matarazzo, R. Slowinski) [48] is based on the production model of decision maker preferences. Preferences are represented by collections of decision rules that allow options to be assigned to the specified classes with varying degrees of certainty. The method operates with a rather large number of classification decision rules, which are difficult for a DM to analyze directly, and requires preliminary tuning (training) on specially created data arrays.
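As a rough illustration of the outranking idea described above, the sketch below computes a concordance index in the spirit of ELECTRE I for a pair of options; the criterion weights and estimates are hypothetical, and real ELECTRE variants combine such indexes with discordance measures and veto thresholds before building the outranking relation.

```python
def concordance(a, b, weights):
    """Simplified ELECTRE-style concordance index: the share of the total
    criterion weight supporting 'a outranks b' (a[l] >= b[l] means a is
    at least as good as b on criterion l; higher values are better)."""
    total = sum(weights)
    agree = sum(w for x, y, w in zip(a, b, weights) if x >= y)
    return agree / total

# Hypothetical estimates of two options upon four weighted criteria.
w = [0.4, 0.3, 0.2, 0.1]
o1 = [7, 5, 9, 4]
o2 = [6, 6, 8, 4]
print(concordance(o1, o2, w))  # 0.7: criteria 1, 3 and 4 support o1 over o2
```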

2.4 Verbal Decision Analysis

Verbal decision analysis (VDA) implements the relational model of rational choice and allows us to order or classify a finite collection of options, evaluated upon many qualitative criteria, without determining the numerical value of options [7–11, 49].

To describe the problem situation and to measure DM preferences/expert knowledge, only verbal formulations of the quality grades on the criteria scales are used. Numerical evaluations of criteria importance and of option values are not calculated or applied, and verbal indicators are not transformed into numerical ones. This is why the approach got its name and differs from some other decision methods using natural language, in particular from the approach based on Zadeh's computing with words [40, 41].
The identification of DM preferences/expert knowledge in verbal analysis methods has the following peculiarities [1, 8]:
• usage, at all stages of analyzing and solving the problem, of only those operations of information conversion that preserve the qualitative nature of the data, without any conversion to numbers;
• presentation of options to a decision maker/expert for comparison, described in natural language using detailed verbal formulations of the rating gradations on the criteria scales;
• comparison and ordering of option estimate tuples, without the participation of a decision maker/expert, in accordance with the "objective" dominance relation, which is determined by the given order of rating gradations on the criteria scales;
• comparison and ordering of option estimate tuples that are formally incomparable in terms of the "objective" dominance relation by a decision maker/expert, in accordance with the "subjective" relations of strict superiority, non-strict superiority, equivalence or incomparability;
• checking the information received from a decision maker/expert when comparing options for consistency, that is, for the transitivity of the subjective relations of superiority and equivalence, and identifying and eliminating errors and contradictions;
• logical justification of decision rules presented with verbal attributes (estimates upon criteria), and explanation of the obtained intermediate results and of the final decision.
The verbal methods allow us to solve all typical problems of rational choice. To select the best multi-attribute option and to order options, one uses the following methods: closed procedures nearby reference situations (ZAPROS I, II, III), pair compensation (PARC), scale of normalized ordered differences (SHNUR), compensation for pair comparisons (COMPASS). To classify multi-attribute options, such methods are used as ordinal classification (ORCLASS), differential classification (DIFCLASS), step-by-step classification (STEPCLASS), ARIADNA, chain interactive classification (CYCLE), classification by non-ordered scales (CLANCH), classification of real alternatives (CLARA), and nominal-ordinal classification (NORCLASS). Let us describe in more detail the two most well-known methods of verbal decision analysis [9].
The ZAPROS I method (O. Larichev, L. Gnedenko, Yu. Zuev) is intended for ordering all possible combinations of estimates of the options O1, …, Om nearby two reference situations that may or may not correspond to the available options. To do this, we use the relations of dominance P_dom, subjective superiority R_sub and subjective equivalence I_sub in the space X = X_1 × ··· × X_n of the scales X_l = {x_l^1, …, x_l^(h_l)}, l = 1, …, n, of the criteria K_1, …, K_n, the gradations of which are ordered from the best to the worst as x_l^1 ≻ x_l^2 ≻ ··· ≻ x_l^(h_l).

Using this method, we find a rule that allows us to rank the available options according to their multicriteria estimates. The method includes the following steps.
1°. Form, in the space X, the subset X^l of hypothetical options—estimate tuples x^(f_s) = (x_1^1, …, x_s^(f_s), …, x_n^1) that differ from the best reference situation x^l = (x_1^1, …, x_s^1, …, x_n^1) only in one grade x_s^(f_s) on the rating scale of the criterion K_s, f_s = 2, …, h_s; s = 1, …, n. The transition from the better reference situation to options with worse grades diminishes the overall quality of the solution.
2°. Compare the tuple pairs x^(f_s) and x^(f_t) that have different estimates upon one criterion (either K_s or K_t) and the same (best) estimates x_q^1 (q ≠ s, t) upon all other criteria, according to the relations P_dom, R_sub, I_sub. The "objective" relation of dominance P_dom is determined in the space X by the orders of grades on the rating scales of all criteria X_l = {x_l^1, …, x_l^(h_l)}, l = 1, …, n, initially given by a decision maker. The "subjective" relations of superiority R_sub and equivalence I_sub are formed while comparing formally incomparable options, which is carried out by a special technique for interviewing the DM.
3°. Order the tuple pairs x^(f_s) and x^(f_t) compared by the relations of dominance P_dom, superiority R_sub, and equivalence I_sub. Construct a pair ordinal scale Q_st^l for each pair of criteria K_s and K_t nearby the best reference situation x^l, which is a ranking of all hypothetically possible tuples that differ only in grades on the rating scales of the criteria K_s and K_t.
4°. Combine the pair ordinal scales Q_st^l for all pairs of criteria, and build a joint ordinal scale Q^l for all criteria nearby the best reference situation x^l, which is a ranking of all possible combinations of criteria estimates that start from the best one x^l. Each combination of estimate grades gets its own rank on the scale Q^l. Check DM preferences for consistency.
5°. Form, in the space X, the subset X^h of hypothetical options—estimate tuples x^(g_s) = (x_1^(h_1), …, x_s^(g_s), …, x_n^(h_n)) that differ from the worst reference situation x^h = (x_1^(h_1), …, x_s^(h_s), …, x_n^(h_n)) only in one grade x_s^(g_s) on the rating scale of the criterion K_s, g_s = 1, …, h_s − 1; s = 1, …, n.
6°. Combine the pair ordinal scales Q_st^h for all pairs of criteria, and build a joint ordinal scale Q^h for all criteria nearby the worst reference situation x^h, which is a ranking of all possible combinations of criteria estimates that end with the worst one x^h. Each combination of estimate grades gets its own rank on the scale Q^h. Compare the joint ordinal scales Q^l and Q^h, and additionally check DM preferences for consistency.
7°. Associate each really available option O_i, represented by a tuple x_i = (x_i1, …, x_in) of multicriteria estimates, with a vector r_i = (r_i1, …, r_in) consisting of the ranks of the corresponding combinations of estimates on the scale Q^l. Rank all real options by the relation of the lexicographic order of the rank vectors, in which the ranks are arranged in ascending order of their values.
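The following sketch illustrates two formal ingredients of the procedure above: the "objective" dominance relation P_dom between estimate tuples (grade 1 is assumed to be the best) and the final lexicographic ordering of rank vectors from step 7°; the rank vectors are hypothetical, and the subjective relations elicited from the DM are not modeled here.

```python
def dominates(x, y):
    """'Objective' dominance P_dom between estimate tuples: x dominates y
    if x is at least as good on every criterion (grade 1 is the best,
    larger numbers are worse) and strictly better on at least one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

print(dominates((1, 2, 1), (1, 3, 2)))  # True
print(dominates((1, 3, 1), (2, 1, 1)))  # False: formally incomparable,
                                        # left to the DM's subjective relations

# Step 7: rank real options by the lexicographic order of their rank
# vectors (ranks taken from the joint ordinal scale, sorted ascending).
options = {"O1": (1, 4, 2), "O2": (1, 3, 5), "O3": (2, 2, 2)}
rank_vectors = {name: tuple(sorted(r)) for name, r in options.items()}
ordering = sorted(options, key=lambda name: rank_vectors[name])
print(ordering)  # ['O1', 'O2', 'O3']
```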

The ORCLASS method (O. Larichev, E. Moshkovich) is intended for a complete and consistent sorting of all possible combinations of estimates of the options O1, …, Om into a small number of predefined decision classes D1, …, Dg arranged in descending order of preference. Using the method, we find decision rules that specify, in the space X = X_1 × ··· × X_n of the scales of the criteria K_1, …, K_n, the boundaries of the classes, which allow classifying options according to their multicriteria estimates. The method consists of the following steps.
1°. It is initially assumed that, in the space X, the best reference situation x^l = (x_1^1, …, x_n^1) belongs to the most preferable decision class D1, and the worst reference situation x^h = (x_1^(h_1), …, x_n^(h_n)) belongs to the least preferable class Dg. Assign all other options to one of the classes, using a special technique for interviewing a decision maker. Depending on the DM answer about the membership of the presented tuple in a certain class, this membership is extended by dominance to the other estimate combinations that dominate the given tuple or are dominated by it.
2°. At each step of the classification procedure, search for the most informative tuple, which is presented to a decision maker for assignment to one of the classes. For this, calculate the so-called indexes of tuple informativeness, which show how many other tuples will be classified at the same time if a DM assigns the presented tuple to a certain class. The most informative tuple provides, firstly, the simultaneous classification of as many tuples as possible and, secondly, the greatest informativeness of any possible DM answer, which is achieved with the least difference in the numbers of tuples assigned to different classes. If there are several equally informative tuples, any of them can be presented to a DM. After each step, recalculate the informativeness indexes of the remaining unclassified tuples.
3°. At each step of the classification procedure, check the consistency of DM preferences, identify wrong answers of a decision maker, and eliminate the contradictions that arise with the previously established membership of tuples in the classes.
4°. Construct a complete and consistent classification of estimate tuples, in which each tuple is assigned to one particular class. Find the upper and lower boundaries of each class. These boundaries are, respectively, the collections of the most preferred tuples not dominated by other tuples of this class and of the least preferred tuples not dominating other tuples of this class. The component values of all tuples that belong to a certain class lie between the upper and lower boundaries of this class.
The class boundaries establish decision rules with the help of which we can quickly and easily assign the available multi-attribute options O1, …, Om to the corresponding classes, as well as explain the obtained classification to a decision maker. When specifying the boundaries of the classes, there is no need to have information about whether each particular tuple belongs to a certain decision class. Note that a complete and consistent classification of all possible combinations of estimates is, in fact, a personal base of expert knowledge of the production type, the construction of which is one of the key problems in artificial intelligence.
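The sketch below conveys the spirit of steps 1°–2°: class membership spreads by dominance, and the tuple presented to the DM is chosen by an informativeness score. The score used here is a simplified two-class heuristic, not the exact ORCLASS index, and the criteria scales are hypothetical.

```python
from itertools import product

def dominates(x, y):
    """x dominates y if it is at least as good everywhere (grade 1 best)
    and strictly better somewhere."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

# All estimate tuples for two criteria with 3 grades each (1 = best).
tuples = list(product(range(1, 4), repeat=2))

def informativeness(t, unclassified):
    """Rough two-class informativeness heuristic: if the DM puts t into the
    better class, every tuple dominating t follows it there; if into the
    worse class, every dominated tuple follows.  Prefer tuples that classify
    many others under either answer and whose two possible answers resolve
    similar numbers of tuples (the idea behind ORCLASS, but simplified)."""
    up = sum(1 for u in unclassified if dominates(u, t))
    down = sum(1 for u in unclassified if dominates(t, u))
    return up + down - abs(up - down)

best = max(tuples, key=lambda t: informativeness(t, tuples))
print(best)  # (2, 2): the central tuple is the most informative to ask about
```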

A characteristic feature of verbal decision analysis is a dialogue with a decision maker/expert in a language familiar to a person and as close as possible to his/her professional activity. A decision maker/expert is actively involved in the formulation, analysis and solution of the problem, can express and adjust his/her preferences in different ways and in detail while solving the problem, and can generate new options. The intermediate and final results of the problem solution and the decision rules are described by verbal values of the initial attributes (estimates upon criteria), which allows us to explain them in a language familiar to a DM/expert. Thus, verbal decision analysis meets, to the greatest extent, the requirements imposed on methods for solving ill-structured problems of rational choice. In general, verbal methods are more "transparent", less sensitive to measurement errors and less time consuming for a person. However, in comparison with other methods of rational choice, they have a lower "resolution capability", since a relatively large part of the options may remain incomparable. The success and effectiveness of applying the verbal approach is largely determined by a successful and "correct" structuring of the problem under consideration, which depends on the experience and professionalism of the analysts and consultants who usually participate in the formulation and solution of tasks.

2.5 Aggregation of Individual Preferences

In practice, decision situations in which the results of the choice are determined by the judgments of many people rather than of one single person are just as common as situations of individual choice. Decision makers in group choice are voters, experts, and members of elected bodies, committees and juries. Such persons acting together are called a decision making group (DMG). By collective or group choice, we understand a decision making procedure based on the joint consideration of the individual preferences of DMG members, which are independent of each other. Collective choice is fundamentally different from individual choice. The main feature of collective decision making is the need to aggregate the individual and, as a rule, mismatching preferences and knowledge of many independent participants. From a substantive point of view, the problem of group choice is how to make, in the most "just", "reasonable" and "correct" way, a transition from the options that are preferable for individuals to the options that are preferable for the group of actors as a whole. In group decision making, there may be identical, similar, differing and contradictory judgments of several participants. When it is required to coordinate the individual preferences of participants, and there is such an opportunity, we will speak of a consistent or compromise collective choice. When the individual preferences of participants are inconsistent and, moreover, contradictory, and when a compromise is impossible for any reason, we will call such a collective choice inconsistent or non-compromise.

Differences in individual opinions arise for many reasons. Each actor, making a decision, generally pursues his/her own interests and goals and has his/her own value system, rules and criteria for choosing the best option, which can either coincide with or differ from those of other DMG members. Inconsistency of individual judgments can be caused by an ambiguous understanding of the problem being solved, the specificity of knowledge, different assessments of the same aspects, and other circumstances. In such situations, paradoxes may arise associated with the non-transitivity of a collective judgment that combines transitive individual judgments. All these features should not be eliminated but should be taken into account when aggregating individual preferences. At the same time, aggregation procedures should allow us to consider the inconsistent, including conflicting, subjective preferences of all participants in the collective choice process without requiring a consensus of opinions.
Collective choice, like individual choice, combines subjective and objective aspects. The preference of each individual decision maker is subjective and determined by the value system inherent in that person. At the same time, the integration of several individual preferences into one collective preference should be carried out in the most objective way possible, using clearly defined formal procedures accepted by all members of the group and not influenced by any of them. Thus, group choice includes two categories of procedures: the aggregation of individual preferences into an overall assessment of the decision quality, on the basis of which the most preferred options are searched for, and the technology of DMG work for developing a collective decision.
The procedure of aggregation of individual preferences, which ensures mutual consideration of individual interests, consists of two parts. Firstly, when each DMG member evaluates options upon many criteria, it is necessary to synthesize an integral assessment that summarizes the overall preferences of a given actor. Secondly, it is necessary somehow to measure the individual judgments of several actors against each other and combine them into a collective judgment. At the same time, it is assumed that a common understanding of what is considered the "best" collective decision should be worked out by all DMG members. Depending on the context of the choice situation, individual preferences are aggregated in different ways. It is possible, when this is correct, to sum and average the individual assessments of individual decision makers, regardless of whether their opinions coincide or not. It is possible to coordinate different points of view taking into account the balance of interests of DMG members and to look for a compromise decision. It is possible to develop a collective point of view that takes into account the various, including inconsistent and contradictory, opinions of all group members without seeking a compromise between them.
The construction of a mechanism for aggregating individual preferences is an essential component of collective decision making. Since each DMG member makes decisions based on his/her own interests and goals, the maximum possible number of independent individual preferences is equal to the number of group members. If the interests of some participants coincide or are close, they can create coalitions. Then each coalition can be considered as a separate independent member of the group. Thus, the total number of independent individual preferences is decreased.

Let us indicate the most well-known principles of coordinating participants' interests in group choice, which allow identifying the options that are best according to the collective opinion [1, 50].
The Cournot principle—all DMG members have their own different interests and make their choice independently of each other. That is, the number of coalitions is equal to the number of group members. In this case, it is not profitable for any participant to change his/her preference, as this can only worsen the decision made.
The Pareto principle—all DMG members have common interests and make their choice coherently. That is, there is one single coalition. In this case, it is not profitable for all participants together to change their preferences, as this can only worsen the decision made.
The Edgeworth principle—all DMG members are members of coalitions, the number of which can be arbitrary (from one to the number of group members), and make their choice in the interests of their coalition. In this case, it is not profitable for any coalition to change its preference, as this can only worsen the decision made.
Models of aggregation of individual preferences formalize the concept of collective rationality [1, 35]. The fundamental difficulty in aggregating individual preferences lies in defining the concept of rationality of a collective decision, which, in individual choice, is interpreted as the non-contradiction or transitivity of the subjective judgments of a single decision maker/expert.
In the relational model, individual preferences, expressed by binary relations R^(1), …, R^(t), are transformed into a group binary relation. The relational aggregation rule R^agg = F(R^(1), …, R^(t)) is a rule according to which the ranking R^agg is selected from the family of possible rankings as the most preferable collective judgment, taking into account the individual judgments of all DMG members. An example of such a group ranking is the Kemeny median R*, which is determined by the condition of the minimum sum of distances to the individual rankings and reflects a possible compromise or consensus between the individual preferences of the participants.
In the functional model, individual preferences, expressed by choice functions C^(1)(X), …, C^(t)(X), are transformed into a group choice function. The functional aggregation rule C^agg(X) = F(C^(1)(X), …, C^(t)(X)) is a rule according to which the function Y^agg = C^agg(X) is selected from the family of possible functions as the most preferable collective judgment, taking into account the individual judgments of all DMG members.
In the relational-functional model, individual preferences, expressed by binary relations R^(1), …, R^(t), are transformed into a group choice function Y^agg = F(R^(1), …, R^(t)), which represents the most preferable collective judgment of all DMG members.
In the production model, individual preferences, expressed by one or several choice rules P^(1), …, P^(t), are transformed into a group choice rule P^agg = F(P^(1), …, P^(t)), which represents the most preferable collective judgment, taking into account the individual judgments of all DMG members.
The relational and functional models of aggregation of individual preferences are the most studied. The relational-functional and production models are less developed.
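As an illustration of the relational aggregation rule, the sketch below computes the Kemeny median mentioned above by brute force, measuring the distance between rankings as the number of pairwise disagreements (Kendall distance); the individual rankings are hypothetical, and exhaustive search over permutations is feasible only for a very small number of options.

```python
from itertools import permutations, combinations

def kendall_distance(r1, r2):
    """Number of option pairs on which two rankings disagree.
    A ranking is a dict option -> position (0 = best)."""
    items = list(r1)
    return sum(
        1
        for a, b in combinations(items, 2)
        if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
    )

def kemeny_median(individual_rankings):
    """Brute-force Kemeny median: the ranking minimizing the total Kendall
    distance to all individual rankings (practical only for small m)."""
    options = list(individual_rankings[0])
    best, best_cost = None, None
    for perm in permutations(options):
        cand = {o: pos for pos, o in enumerate(perm)}
        cost = sum(kendall_distance(cand, r) for r in individual_rankings)
        if best_cost is None or cost < best_cost:
            best, best_cost = perm, cost
    return best

# Hypothetical rankings of three options by three DMG members.
rankings = [
    {"O1": 0, "O2": 1, "O3": 2},
    {"O2": 0, "O1": 1, "O3": 2},
    {"O1": 0, "O3": 1, "O2": 2},
]
print(kemeny_median(rankings))  # ('O1', 'O2', 'O3')
```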

The technology of collective work of DMG members should take into account a diversity of substantive, organizational, and psychological factors. The behavior of group members, their preferences and, ultimately, the result of a collective choice are influenced by the nature of the problem being solved, the knowledge and experience of the participants, their emotional state, the rules for discussing the problem, the openness and sequence of expressing opinions, the possibility of creating coalitions, the peculiar properties of voting procedures, and so on. When making a collective decision, both individuals and coalitions can adhere to different styles of behavior, namely: the status quo, confrontation, and rationality. In the case of the status quo, DMG members interact weakly with each other, trying to maintain the current situation. Such relationships are characteristic of participants in an economic market. In a confrontation, DMG members act in such a way as to cause maximum damage to other participants, whom they consider opponents; however, they can thereby harm themselves. Such relationships are characteristic of participants in local hostilities, conflict situations, and sports games. DMG members behaving rationally and acting in their own interests seek to get the maximum benefit for themselves, without necessarily causing harm to other participants. In some cases, it is beneficial for DMG members to unite, becoming allies, but sometimes it is more profitable to remain adversaries. Such relationships are inherent to companies operating in the same sector of the economy or to countries involved in a global conflict. In addition, each DMG member may have a different degree of influence on decision making, the sources of which are the personal qualities, administrative position, and property status of a person. The following types of influence are distinguished: rewarding (the driving motives are awards, bonuses, encouragement, incentives); coercive (the driving motives are violence, threats, punishment); and conditioning (the driving motives are education, cultural traditions, religious canons, suggestion). Many of these aspects relate to difficult and poorly studied issues and are usually not taken into account in models of collective choice.

2.6 Collective Choice

The task of collective or group choice is understood as follows. There is a group of decision makers consisting of t persons who consider possible options O1, …, Om for solving the problem, the number of which can be either finite or infinite. Each of the DMG members, independently of the others, evaluates all options in accordance with his/her individual preferences. The formation of coalitions, which include participants with coinciding interests, is possible. It is assumed that rules for organizing and conducting the procedures for comparing and choosing options, common to all participants, have been developed. Based on the individual preferences of all members of the group and/or coalitions and, if necessary, taking into account the degree of their influence or competence, it is required: (1) to select one or more best options; (2) to order all options; (3) to assign all options to decision classes [1, 26, 50].

A task of collective choice largely coincides with the tasks of individual optimal and rational choice but is more difficult due to the participation of several actors. The additional complexity is conditioned by the need to aggregate the individual preferences or knowledge of several DMs/experts, the ambiguity of the transition from individual judgments to a single collective opinion of the DMG, and the difficulty of formalizing which choice can be meaningfully considered a rational collective decision. By the number of members of the decision making group, we distinguish the tasks of collective choice with two and with many (more than two) participants, who may have the same or different influence or competence. If the options and preferences of participants are described by deterministic, stochastic, vague or uncertain variables, then we speak of collective decision making under conditions of certainty, probabilistic uncertainty, fuzzy uncertainty or complete uncertainty. By the number of criteria used to evaluate options, we subdivide the tasks of collective choice into single-criterion and multicriteria ones. When there are many criteria, they can be of the same or different importance. Each criterion has its own rating scale: continuous or discrete, numerical or verbal. Ordinal scales are usually assumed to be ordered from the most preferred (best) gradations to the least preferred (worst) gradations of estimates.
In the late eighteenth century, the French scientists J.-C. de Borda and M. de Caritat (marquis de Condorcet) began to study the problem of group choice as a task of voting. However, despite its considerable age, the problem of collective choice has been investigated and developed much less than the problem of individual choice. This is mainly due to the difficulties of a meaningful statement of the group choice problem and the complexity of the formal aggregation of individual preferences into a unified resulting collective preference. The ideas of Condorcet and Borda significantly influenced the formation of the conception of a "reasonable" collective choice. According to Condorcet, a rational individual person establishes the value of an option (a voting candidate) selected from a given set at first hand, by directly comparing this option pairwise with all other options, regardless of the value of the other options. Such information is called direct. By Condorcet, the most preferable option for all members of the DMG (voters) is the one that is superior to any of its competitors by a relative majority of votes. Unlike Condorcet, Borda determined the preferability of each option based on information about the value of all options from a given set, which expresses an aggregated collective preference taking into account the individual preferences of all DMG members (voters). Such information is called indirect. Thus, Borda suggested using all available (direct and indirect) information for choice, while Condorcet would use only direct information. Therefore, although each individual person who is rational by Condorcet acts separately, a group of such persons on the whole may no longer have this quality. On the contrary, individual persons rational by Borda remain so in a collective choice. The loss of a part of the information leads to the well-known paradox of Condorcet inconsistency, when a non-transitive collective preference appears. Nevertheless (and this is amazing!), for a long time the Condorcet concept played the leading role in the formation of the concept of collective rationality and became the basis of many approaches to group choice.
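A minimal sketch of pairwise majority comparison in the spirit of Condorcet, run on the classical three-voter profile, shows how transitive individual rankings can produce a cyclic collective preference (the Condorcet paradox mentioned above); the profile is the standard textbook example.

```python
from itertools import combinations

def majority_prefers(profile, a, b):
    """True if a strict majority of voters rank option a above option b.
    Each ballot is a list from the most to the least preferred option."""
    votes_a = sum(1 for ballot in profile if ballot.index(a) < ballot.index(b))
    return votes_a > len(profile) - votes_a

# The classical cyclic profile: three voters, three options.
profile = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

for a, b in combinations("ABC", 2):
    print(a, ">", b, majority_prefers(profile, a, b))
# A beats B and B beats C, yet C beats A: the collective preference
# obtained from transitive individual rankings is cyclic.
```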

There is no established classification of collective choice methods. Conventionally, we distinguish voting methods, axiomatic, game and expert methods, and group analogues of multicriteria choice methods. As in the case of individual choice, the collective choice methods can also be divided into subgroups depending on the nature of the measured indicators (quantitative or qualitative) and the way options are subsequently compared, with or without calculating their numerical significance.
Voting is widely used in practice as a way of collective decision making that expresses the will of the majority. Voters cast their votes to choose the best candidate. There are many diverse voting systems, which differ in the forms of organization and conduct, the election mechanisms, the methods for identifying the individual preferences of voters, the procedures for collecting votes and processing results, the rules for counting votes taking into account the equality or inequality of voters, the rules for determining the winners, and the types of the final result [1, 50]. In addition to J.-C. de Borda and N. de Condorcet, the most famous voting procedures were proposed by C. Dodgson (Lewis Carroll), E. Nanson, C. Coombs, A. Copeland, P. Simpson, and P. Fishburn.
The Borda procedure (1784) was historically the first voting system that provides all participants in a collective choice with an opportunity to express their individual preferences and allows taking into account not only the interests of the majority but also the interests of the minority. The Borda voting uses a ranking procedure for taking the voters' opinions into account. Each participant (voter) ranks the options (candidates) O1, …, Om by preference. In each individual ranking, the first option gets m − 1 points, the second option gets m − 2 points, and the last option gets 0 points. The Borda score f_B(O_i) = Σ_{s=1}^{t} b^(s)(O_i) is calculated for the option O_i, where b^(s)(O_i) is the Borda score of the option O_i in the individual ranking of the participant s. All options are ordered in decreasing order of the Borda score f_B(O_i). The best option O* is determined by the maximum value max_{1≤i≤m} f_B(O_i) of the Borda score.
Axiomatic theories of collective choice are aimed at founding the conception of collective rationality. These theories are based on different models of aggregation and coordination of individual preferences. Theories of collective choice have played an important role in understanding the possibilities of creating an "honest" and "fair" voting system as an "objective" way of expressing the common opinion of many independent individuals. Concepts of rational collective choice based on the relational model of aggregation of individual preferences were independently proposed by A. Bergson and P. Samuelson. Later, these ideas were developed by K. Arrow in the theory of social choice [51] and generalized by P. Fishburn [3] and A. Sen [52]. Arrow found that, with a finite number of participants whose preferences correspond to the axiomatics of rational choice, it is impossible to build a consistent relational rule of collective choice that combines the individual rankings. Therefore, it is impossible to create a "fair" election system that excludes the voting paradoxes. However, as Fishburn proved later, a group preference becomes valid with an infinite number of participants and an infinite number of options. The voting paradoxes also do not arise if the axiomatics of rational collective choice is based on the Borda concept.
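A minimal sketch of the Borda procedure described above, on a hypothetical profile of three voters and four options:

```python
def borda_scores(profile):
    """Borda count: in each individual ranking (best first) an option in
    position p among m options gets m - 1 - p points; the points are
    summed over all voters."""
    m = len(profile[0])
    scores = {option: 0 for option in profile[0]}
    for ballot in profile:
        for position, option in enumerate(ballot):
            scores[option] += m - 1 - position
    return scores

# Hypothetical rankings of four options by three DMG members.
profile = [
    ["O1", "O2", "O3", "O4"],
    ["O2", "O1", "O4", "O3"],
    ["O1", "O3", "O2", "O4"],
]
scores = borda_scores(profile)
print(scores)                       # {'O1': 8, 'O2': 6, 'O3': 3, 'O4': 1}
print(max(scores, key=scores.get))  # 'O1' is the Borda winner
```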
According to this concept, the common significance of each option is determined by the results of individual comparisons of all pairs of options. Such axioms of rational choice were formulated, for example, by V. Levchenkov [53, 54].
In the Goodman–Markowitz procedure [50], the aggregation rule is the rule of places' sum. According to this rule, the rank r^agg(O_i) of the option O_i in the collective ranking R^agg is defined as the sum

r^agg(O_i) = Σ_{s=1}^{t} r^(s)(O_i)                  (2.3)
of the ranks r^(s)(O_i) of the option O_i in the individual rankings R^(1), …, R^(t) of all DMG members.
One of the popular approaches to group choice is based on computing with words using fuzzy logic. Thus, in the methods of Kacprzyk et al. [43–45], an aggregated collective preference given by a fuzzy relation is formed, taking into account a 'soft' measure of consensus, either by constructing a social fuzzy preference or directly from the individual preferences, which are also given by fuzzy relations.
Collective choice can also be made with numerical value functions v^(s)(O_i) or utility functions u^(s)(O_i) of the option O_i, s = 1, …, t, which characterize the preference of the option for an individual member of the DMG. A rule for the aggregation of individual preferences is determined by additional conditions that define the contribution of each individual actor to the collective preference. So, for example, in the Sen axiomatic theory [52], the aggregated value of an option is represented by a vector v^agg(O_i) = (v^(1)(O_i), …, v^(t)(O_i)), the components of which are the individual value functions v^(s)(O_i). The best options are considered to be the effective or weakly effective options, which are not dominated by other options in the Edgeworth–Pareto or Slater sense, respectively. According to the Keeney–Raiffa theory [28], the nature of the multidimensionality of the aggregated utility function u^agg(O_i), given on a set of options, is, generally speaking, insignificant. The multidimensionality of the function u^agg(O_i) can be caused by the presence of many independent criteria, upon which a single decision maker evaluates the particular utilities u_l(O_i) of the option O_i, or by the presence of several independent decision makers, each of whom gives his/her own individual holistic estimate of the utility u^(s)(O_i) of the option O_i. The polylinear function of multidimensional utility u^agg(O_i), expressed by formula (2.2), where u_l(O_i) should be replaced by u^(s)(O_i), satisfies all conditions of the Arrow theory as a rule for aggregating individual preferences.
An axiomatic theory of rational collective choice based on the functional model of aggregation of individual preferences was developed by M. Aizerman and F. Aleskerov [34, 35]. In this theory, similarly to the Arrow theory, there is no local functional rule for aggregating individual preferences that guarantees the classical rationality of the collective choice function when the individual choice functions of all participants are classically rational.
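To illustrate the vector representation of aggregated value in the spirit of the Sen theory mentioned above, the sketch below selects the effective (Pareto-nondominated) options from hypothetical individual value functions of three DMG members; for weakly effective (Slater) options the dominance test would require a strict improvement on every component.

```python
def dominates(v, w):
    """v Pareto-dominates w if every individual values v at least as much
    and at least one values it strictly more (higher is better)."""
    return all(a >= b for a, b in zip(v, w)) and any(a > b for a, b in zip(v, w))

def pareto_efficient(values):
    """Return the options whose value vectors are not dominated by any
    other option's vector."""
    return [
        name
        for name, v in values.items()
        if not any(dominates(w, v) for other, w in values.items() if other != name)
    ]

# Hypothetical individual value functions of three DMG members.
values = {
    "O1": (0.9, 0.4, 0.7),
    "O2": (0.8, 0.4, 0.6),   # dominated by O1
    "O3": (0.5, 0.9, 0.3),
}
print(pareto_efficient(values))  # ['O1', 'O3']
```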

Game theory, in fact, is also an axiomatic theory of group decision making with two or more participants. A game is a mathematical model of a choice situation in which each of the participants (players) pursues his/her own interests, which, as a rule, differ from the interests of the others, and acts independently of the other participants or together with some of them, uniting in a coalition when their goals coincide. The first systematic presentation of game theory was given by J. von Neumann and O. Morgenstern as applied to choice in conflict situations [2]. Later, bilateral and multilateral games with opposite and coinciding interests of the players appeared, namely antagonistic, non-coalition, coalition, and cooperative games [23, 55]. Various principles of game optimality were proposed, which are based on searching for the extremum of one or several goal functions with additional requirements of non-dominance, stability, fairness, and symmetry of the solution results. There are classes of games for which several principles of optimality are satisfied simultaneously. However, the introduced principles of game optimality are not always practicable, and finding rational strategies for the players is a rather difficult computational task, exhaustive solutions of which have been obtained only for certain classes of games.
Expert methods of collective choice, in which experts and a leader of a higher rank (super-DM) take part, are, in general, heuristic [1, 56]. The experts structure the problem and form a list of possible solutions to it. Using different methods, the experts search for the most preferred option and analyze the consequences of the decision made. The super-DM determines the criteria for evaluating options, considers the individual conclusions received from the experts and the results of a preliminary analysis of the problem, and makes the final choice. Usually, the super-DM is responsible for the selection of the experts and for the final result of the problem solution. We note, however, that many expert methods are characterized by a desire to eliminate the inconsistency and contradiction of individual judgments, and to replace a collection of many opinions with a single point of view that is most consistent with all judgments or expresses some "averaged" opinion. Generally speaking, all these features of the methods influence the results obtained.
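To illustrate the closing remark, the sketch below aggregates hypothetical expert scores in two common "averaging" ways—a competence-weighted mean and a plain median—and shows that the chosen aggregation rule can by itself reverse the ordering of options; the competence coefficients are assumed to be supplied by the super-DM.

```python
def weighted_mean(scores, competence):
    """Competence-weighted average of individual expert scores."""
    return sum(s * c for s, c in zip(scores, competence)) / sum(competence)

def median(scores):
    """Plain median of expert scores (ignores competence)."""
    ordered = sorted(scores)
    mid = len(ordered) // 2
    return ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# Hypothetical scores of two options given by three experts whose
# competence coefficients are judged by the super-DM.
competence = [0.6, 0.2, 0.2]
scores = {"O1": [9, 4, 4], "O2": [6, 7, 7]}
for name, s in scores.items():
    print(name, round(weighted_mean(s, competence), 2), median(s))
# O1: weighted mean 7.0, median 4; O2: weighted mean 6.4, median 7 --
# the aggregation rule itself can reverse the ordering of the options.
```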

2.7 Group Multicriteria Choice

The task of group multicriteria choice, which belongs to the most difficult decision making problems, is formulated as follows. There are m options O1, …, Om for solving the problem, each of which is evaluated independently by t experts upon n criteria K1, …, Kn. Each criterion K_l has its own rating scale X_l = {x_l^1, …, x_l^(h_l)}, continuous or discrete, numerical or verbal, where h_l is the number of gradations on the scale of the criterion K_l, l = 1, …, n. Ordinal scales are usually assumed to be ordered from the most preferable (best) to the least preferable (worst) grades. Based on the preferences of a super-decision maker and taking into account the assessments of the experts—DMG members, it is required: (1) to select one or more best options; (2) to order all options from the best to the worst; (3) to distribute all options into the decision classes D1, …, Dg, which differ in properties and can be either ordered or not ordered by preference.

The membership of the option O_i, i = 1, …, m, in the class D_f, f = 1, …, g, is described by the sorting attribute S with a rating scale R = {r_1, …, r_g}, which can be considered as one more additional qualitative attribute of the option [1, 26].
Collective multicriteria choice has a number of specific features. The experts, as DMG members, can generally be unequal and have different competencies and/or influence, which must be taken into account when aggregating individual preferences. The criteria can also have different importance (weight) for the super-DM and for each participant. The plurality of preferences of the DMG members participating in solving the problem of group multicriteria choice is another additional source of information inconsistency. Different versions of the option O_i arise, for example, when the option is evaluated by t experts upon many criteria K1, …, Kn, or when the characteristics of the option are calculated t times by several methods or measured t times by several tools. Moreover, all versions of an option/object should be considered and analyzed as a whole. If each expert arranges all options, then there are t individual rankings of options that usually do not coincide with each other, and it is required to build a single group ranking of options. If each expert acts as a teacher and assigns each option O_i to one of the decision classes, then there are t individual expert rules for sorting options that are usually not consistent with each other, and it is required to formulate one or several fairly simple rules for the group classification of options, which, taking into account the individual rules of the experts, allow us to identify consistently and contradictorily classified options.
With an increasing number of options, criteria and DMs/experts, the number of possible comparisons of estimates grows sharply. In such cases, the construction of the final ordering or classification of options is significantly complicated due to possible inaccuracies and contradictions in the individual judgments. In addition, when processing large volumes of numerical and verbal data that characterize options, only such operations of information transformation should be applied that do not cause unreasonable and irreversible distortions of the initial data. And, finally, a method used to solve the choice problem must satisfy the new principle of invariance of aggregation of individual preferences, which can be considered as a necessary condition for the rationality of collective multicriteria choice. According to this principle, the final result should not depend on the sequence of procedures for processing individual multicriteria estimates (first over DMG members and then over criteria, or vice versa).
Methods of group multicriteria choice are in many ways a generalization of the methods of individual multicriteria choice, in which options are represented by value functions, vectors or tuples of estimates and, additionally, methods for aggregating the individual preferences of DMG members are specified. At present there are very few such methods. Almost all of them, with rare exceptions, only allow ordering options by group multicriteria estimates, which are usually calculated as a sum of individual estimates, weighted or averaged over all experts and all criteria. Group analogues of the analytic hierarchy methods were proposed by T. Saaty and J. Alexander [57], and by J. Barzilai and F. Lootsma [39].
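The sketch below illustrates the invariance principle stated above on hypothetical data: for a linear convolution, averaging over experts first and then convolving over criteria gives the same result as convolving over criteria first and then averaging over experts, whereas non-linear aggregation rules generally violate this property.

```python
# Hypothetical estimates: options x experts x criteria, with criterion weights.
estimates = {
    "O1": [(7, 5), (6, 8), (9, 4)],
    "O2": [(6, 6), (7, 7), (5, 9)],
}
weights = (0.6, 0.4)

def weighted_sum(tup):
    """Linear convolution of criteria estimates with the given weights."""
    return sum(w * x for w, x in zip(weights, tup))

def mean(values):
    return sum(values) / len(values)

for name, versions in estimates.items():
    # Order 1: aggregate over experts first (average each criterion), then over criteria.
    by_criterion = [mean(col) for col in zip(*versions)]
    experts_first = weighted_sum(by_criterion)
    # Order 2: aggregate over criteria first, then average over experts.
    criteria_first = mean([weighted_sum(v) for v in versions])
    print(name, round(experts_first, 3), round(criteria_first, 3))
# Both orders give identical values (e.g. 6.667 for O1), so the linear
# rule satisfies the invariance principle; replacing mean() by, say,
# max() over experts would generally break this coincidence.
```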

In the technique for order preference by similarity to ideal solution (TOPSIS, C. Hwang and K. Yoon) [50], multi-attribute options, which are represented by vector estimates averaged over all options and experts, are ranked by their degree of farness from the worst reference option, which has the lowest estimates upon all criteria.
The vast majority of applied methods for the group multicriteria choice of objects of different nature implement the so-called quantitative approach based on the numerical measurement of the indicators characterizing the objects [12, 14]. At the same time, despite its apparent simplicity and obviousness, the quantitative approach is not very suitable for working with qualitative characteristics, as it contains a number of methodological defects. We indicate the most important of them. A priori, it is practically impossible to assign quantitative gradations on rating scales and to attach numbers or scores to qualitative factors so that they "correctly" and "objectively" express poorly formalized properties of objects and are understood equally by different people. For example, it is rather difficult to numerically evaluate such concepts as the scientific novelty of research, the qualification of a project team, or the scientific significance of results. In addition, there is no substantive argument in favor of choosing particular numbers and values of scale gradations for quantitative estimates (for example, rating scales with gradations such as 1, 2, 3; 1, 3, 5, 7, 9; 1, 2, …, 9, 10 or others). Moreover, examples of choice problems have been constructed showing that the use of different numerical scales for the same criteria can lead to completely different final results, that is, to different rankings of the initial collection of objects or different divisions of objects into classes. Thus, by introducing estimate gradations on rating scales in different ways, it is possible to obtain unequal orderings or classifications of multi-attribute objects. Finally, when using numerical rating scales, there is a temptation to formulate a single "simple" and "understandable" integral indicator aggregating particular estimates, by the value of which options are compared and selected. Such an integral indicator is usually represented as a sum, a weighted sum or some averaged estimate. In these cases, heterogeneous attributes, important and unimportant factors, and estimates of different experts are mixed together, which is not always mathematically correct. It becomes impossible to identify the most significant indicators, and the possibility of assigning rather high final marks to insignificant options increases.
The methods of group verbal decision analysis (GVAR), described in the following chapters, implement the so-called qualitative approach, in which the heterogeneity, possible inconsistency and contradiction of non-numeric information are taken into account, and incorrect additional transformations of data are not applied [1, 26, 58–62]. These methods make it possible to solve all types of tasks of collective multicriteria choice.
In conclusion, we note that many modern methods of optimal and rational choice are realized as interactive human–machine procedures and decision support systems. This is caused by the need for human participation in the processes of solving problems and by the complexity of the algorithms used to process information. Many of the created computer programs and systems implement various methods of multicriteria optimization, analytic hierarchy, outranking relations, verbal analysis and others that provide interactive procedures for the interaction between a person and a computer.
As an example, we mention the popular decision support systems using the analytical hierarchy methods: Expert Choice, Criterium, REMBRANDT (Ratio Estimation in Magnitudes or deciBels to Rate Alternatives, which are Non-DominaTed).

References 1. Petrovsky, A.B.: Teoriya prinyatiya resheniy (Theory of Decision Making). Publishing Center “Academy”, Moscow (2009). (in Russian) 2. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, Princeton (1944) 3. Fishburn, P.C.: Utility Theory for Decision Making. Wiley, New York (1970) 4. Debreu, G.: The Theory of Value: An Axiomatic Analysis of Economic Equilibrium. Yale University Press, New Haven (1983) 5. Roy, B.: Multicriteria Methodology for Decision Aiding. Kluwer Academic Publishers, Dordrecht (1996) 6. Roy, B., Bouyssou, D.: Aide multicritère à la décision: méthodes et cas. Economica, Paris (1993) 7. Larichev, O.I.: Nauka i iskusstvo prinyatiya resheniy (Science and Art of Decision Making). Nauka, Moscow (1979). (in Russian) 8. Larichev, O.I.: Verbal’niy analiz resheniy (Verbal Decision Analysis). Nauka, Moscow (2006). (in Russian) 9. Larichev, O.I., Moshkovich, H.M.: Verbal Decision Analysis for Unstructured Problems. Kluwer Academic Publishers, Boston (1997) 10. Larichev, O.I., Moshkovich, H.M., Furems, E.M., Mechitov, A.I., Morgoev, V.K.: Knowledge Acquisition for the Construction of Full and Contradiction Free Knowledge Bases. IEC ProGAMMA, Groningen (1991) 11. Larichev, O.I., Olson, D.L.: Multiple Criteria Analysis in Strategic Siting Problems. Kluwer Academic Publishers, Boston (2001) 12. Anderson, D.R., Sweeney, D.J., Williams, T.A.: An Introduction to Management Science: A Quantitative Approach to Decision Making. West Publishing Company, Minneapolis (2001) 13. Steuer, R.: Multiple Criteria Optimization: Theory, Computation and Application. Wiley, New York (1985) 14. Hwang, C.-L., Yoon, K.: Multiple Attribute Decision Making: Methods and Applications. Springer, New York (1981) 15. Lotov, A.V., Bushenkov, V.A., Kamenev, G.K., Chernykh, O.L.: Kompyuter i poisk kompromissa. Metod dostizhimykh tseley (Computer and Search for a Compromise. Method of Achievable Goals). Nauka, Moscow (1997). (in Russian) 16. Lotov, A.V., Bushenkov, V.A., Kamenev, G.K.: Interactive Decision Map: Approximation and Visualization of Pareto Frontier. Kluwer Academic Publishers, Boston (2004) 17. Yu, P.L.: Multiple Criteria Decision Making: Concepts, Techniques, and Extensions. Plenum Press, New York (1985) 18. Kaufmann, A.: Introduction to the Theory of Fuzzy Subsets. Academic Press, New York (1975) 19. Yager, R.R.: On the theory of bags. Int. J. Gen. Syst. 13(1), 23–37 (1986) 20. Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965) 21. Borisov, A.I., Krumberg, O.A., Fedorov, I.P.: Prinyatie resheniy na osnove nechyotkikh modeley (Decision Making Based on Fuzzy Models). Nauka, Moscow (1990). (in Russian) 22. Orlovsky, S.A.: Problemy prinyatiya resheniy pri nechetkoy iskhodnoy informatsii (Decision Making Problems with Fuzzy Initial Information). Nauka, Moscow (1981). (in Russian) 23. Moulin, H.: Axioms of Cooperative Decision Making. Cambridge University Press, Cambridge (1988) 24. Doumpos, M., Zopounidis, C.: Multicriteria Decision aid Classification Methods. Kluwer Academic Publishers, Dordrecht (2002)

25. Vincke, Ph.: Multicriteria Decision Aid. Wiley, Chichester (1992) 26. Petrovsky, A.B.: Gruppovoy verbal’niy analiz resheniy (Group Verbal Decision Analysis). Nauka, Moscow (2019). (in Russian) 27. Edwards, W.: Utility Theories: Measurements and Applications. Kluwer Academic Publishers, Dordrecht (1992) 28. Keeney, R.L., Raiffa, H.: Decisions with Multiple Objectives: Preferences and value Tradeoffs. Wiley, New York (1976) 29. Raiffa, H.: Decision Analysis: Introductory Lectures on Choices Under Uncertainty. AddisonWesley Publishing Company, Reading (1968) 30. Figueira, J., Greco, S., Ehrgott, M.: Multiple Criteria Decision Analysis, State of the Art Surveys, new revised version. Springer, Berlin (2016) 31. Roubens, M., Vincke, Ph.: Preference Modelling. Springer, Berlin (1985) 32. Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263–291 (1979) 33. Kahnemann, D., Tversky, A.: Choice, Values and Frames. Cambridge University Press, Cambridge (2000) 34. Aleskerov, F.: Arrovian Aggregation Models. Kluwer Academic Publishers, Dordrecht (1999) 35. Aizerman, M.A., Aleskerov, F.T.: Theory of Choice. Elsevier, North-Holland (1995) 36. Noghin, V.D.: Prinyatie resheniy v mnogokriterial’noy srede: kolichestvenniy podkhod (Decision Making in Multicriteria Environment: Quantitative Approach). Fizmatlit, Moscow (2005). (in Russian) 37. Noghin, V.D.: Reduction of the Pareto Set: An Axiomatic Approach. Springer International Publishing AG, Cham (2018) 38. Saaty, T.: Multicriteria Decision Making. The Analytic Hierarchy Process. RWS Publications, Pittsburgh (1990) 39. Barzilai, J., Lootsma, F.A.: Power relations and group aggregation in the multiplicative AHP and SMART. J. Multi-Criteria Decis. Anal. 6(3), 155–165 (1997) 40. Zadeh, L.A.: From computing with numbers to computing with words—from manipulation of measurements to manipulation of perceptions. IEEE Trans. Circ. Syst. 45(1), 105–119 (1999) 41. Zadeh, L.A., Kacprzyk, J.: Computing with Words in Information/Intelligent Systems. Vol. 1. Foundations, Vol. 2. Applications. Springer, Berlin; Physica-Verlag, Heidelberg and New York (1999) 42. Zimmermann, H.J., Zadeh, L.A., Gaines, B.R.: Fuzzy Sets and Decision Analysis. NorthHolland, Amsterdam and New York (1984) 43. Kacprzyk, J.: Group decision making with a fuzzy majority. Fuzzy Sets Syst. 18(2), 105–118 (1986) 44. Kacprzyk, J., Fedrizzi, M.: A ‘human-consistent’ degree of consensus based on fuzzy logic with linguistic quantifiers. Math. Soc. Sci. 18(3), 275–290 (1989) 45. Kacprzyk, J., Zadro˙zny, S.: Computing with words in decision making through individual and collective linguistic choice rules. Int. J. Uncertainity Fuzziness Knowl. Based Syst. 9(1), 89–102 (2001) 46. Köksalan, M., Ulu, C.: An interactive approach for placing alternatives in preference classes. Eur. J. Oper. Res. 144(2), 429–439 (2003) 47. Roubens, M.: Ordinal multiattribute sorting and ordering in the presence of interacting points of view. In: Aiding Decisions with Multiple Criteria: Essays in Honor of Bernard Roy, pp. 229– 246. Kluwer Academic Publishers, Dordrecht (2001) 48. Greco, S., Matarazzo, B., Slowinski, R.: Rough sets methodology for sorting problems in presence of multiple attributes and criteria. Eur. J. Oper. Res. 138(2), 247–259 (2002) 49. Furems, E.M.: A general approach to multiattribute classification problems based on verbal decision analysis. Sci. Tech. Inf. Process. 47(5), 304–313 (2020) 50. 
Hwang, Ch.-L., Lin, M.-J.: Group Decision Making Under Multiple Criteria. Methods and Applications. Springer, Berlin (1987) 51. Arrow, K.J.: Social Choice and Individual Values. Wiley, New York (1951) 52. Sen, A.: Choice, Welfare and Measurement. Harvard University Press, Cambridge (1997)

53. Levchenkov, V.S.: Algebraicheskiy podkhod v teorii vybora (Algebraic Approach to Choice Theory). Nauka, Moscow (1990). (in Russian) 54. Levchenkov, V.S.: Dva printsipa ratsional’nosti v teorii vybora: Borda protiv Kondorse (Two Principles of Rationality in Choice Theory: Borda vs. Condorcet). Publishing Department, Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow (2002). (in Russian) 55. Harsanyi, J.C.: Rational Behavior and Bargaining Equilibrium in Games and Social Situations. Cambridge University Press, Cambridge (1976) 56. Litvak, B.G.: Ekspertnye otsenki i prinyatie resheniy (Expert Estimates and Decision-Making). Patent, Moscow (1996). (in Russian) 57. Saaty, T.L., Alexander, J.M.: Conflict Resolution: the Analytic Hierarchy Approach. Praeger Publishers, New York (1989) 58. Petrovsky, A.B.: Mnogokriterial’noe prinyatie resheniy po protivorechivym dannym: podkhod teorii mul’timnozhestv (Multicriteria decision making on contradictory data: an approach of multiset theory). Informatsionnye tekhnologii i vychislitel’nye sistemy (Inf. Technol. Comput. Syst.) 2, 56–66 (2004). (in Russian) 59. Petrovsky, A.B.: Inconsistent preferences in verbal decision analysis. In: Papers from IFIP WG8.3 International Conference on Creativity and Innovation in Decision Making and Decision Support, vol. 2, pp. 773–789. Ludic Publishing Ltd., London (2006) 60. Petrovsky, A.B.: Multiple criteria decision making: discordant preferences and problem description. J. Syst. Sci. Syst. Eng. 16(1), 22–33 (2007) 61. Petrovsky, A.B.: Group verbal decision analysis. In: Encyclopedia of Decision Making and Decision Support Technologies, vol. 1, pp. 418–425. IGI Global, Hershey (2008) 62. Petrovsky, A.B.: Group multiple criteria decision making: multiset approach. In: Recent Developments and New Directions in Soft Computing. Studies in Fuzziness and Soft-Computing, vol. 317, pp. 19–33. Switzerland Springer International Publishing (2014)

Chapter 3

Group Ordering of Multi-attribute Objects

This chapter introduces new ways for the representation and comparison of objects with many numerical and/or verbal attributes, which exist in several distinct versions. Specifying multi-attribute objects with multisets allows us to solve all types of tasks of individual and collective choice. We describe original methods of group verbal analysis for ranking multi-attribute objects. These are techniques for the group multicriteria ordering of objects by pairwise comparisons and by proximity to reference points. We demonstrate, with examples, how to solve model tasks of collective multicriteria choice with the developed methods.

3.1 Representation and Comparison of Multi-attribute Objects

Let us discuss possible ways for representing, comparing and grouping objects (options, alternatives) that are defined by many quantitative and qualitative attributes and are presented in several copies (versions, exemplars) differing in the values of their characteristics [1–9].
Firstly, we consider the situation when the objects O1, …, Om exist in single copies and are characterized by attributes K1, …, Kn with numerical and/or verbal rating scales. Traditionally, one associates each object O_i, i = 1, …, m, with a vector or tuple x_i = (x_i1, …, x_in), whose component x_il = K_l(O_i) is a value of the attribute K_l equal to one of the grades x^e, e = 1, …, h, if all attributes K1, …, Kn have one and the same rating scale X = {x^1, …, x^h}, or to one of the grades x_l^(e_l), e_l = 1, …, h_l, if each attribute K_l has its own rating scale X_l = {x_l^1, …, x_l^(h_l)}, l = 1, …, n.

rating scale X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n. A vector/tuple x_i is a point of the n-dimensional space X_1 × … × X_n formed by the scales of the attributes K_1, …, K_n. The situation becomes more complicated when one and the same multi-attribute object O_i is present in several versions O_i^{<s>}, s = 1, …, t, which differ in values of the attributes K_1, …, K_n. Different versions of the object O_i appear, for example, when


the object is independently evaluated by several experts upon many criteria, or the characteristics of the object are measured under different conditions or in different ways. In such cases, the object O_i corresponds not to a single vector/tuple, but to a collection {x_i^{<1>}, …, x_i^{<t>}} of t vectors/tuples, where x_i^{<s>} = (x_{i1}^{<s>}, …, x_{in}^{<s>}) describes one of the versions O_i^{<s>} of the object O_i. In the n-dimensional attribute space X_1 × … × X_n, the object O_i is now represented not as a single point x_i, but as a whole group ("cloud") consisting of t points x_i^{<1>}, …, x_i^{<t>}.

Objects O_1, …, O_m, each of which exists as a single copy O_i or in several copies O_i^{<s>}, and their attributes defined by vectors/tuples can be represented by the matrices 'Object–Attributes' F or F^{<>}, where F^{<>} has a greater dimension than F. Rows of the matrix F = ||x_{il}||_{m×n} correspond to objects, columns correspond to attributes, and elements x_{il} are the values x_{il}^{e_l} of the components of the vectors/tuples that specify the objects. Rows of the matrix F^{<>} = ||x_{il}^{<s>}||_{tm×n} correspond to object versions, columns correspond to attributes, and elements x_{il}^{<s>} are the values of the components of the vectors/tuples that specify the object versions.

It is important to note that the collection {x_i^{<1>}, …, x_i^{<t>}} of vectors/tuples representing the object O_i should be considered as a whole. Moreover, generally speaking, values of the same attributes describing different versions O_i^{<s>} of the object O_i (estimates of different experts, characteristics measured by different methods or tools) can be similar, different, or even contradictory. This, in turn, can lead to incomparability of several vectors/tuples x_i^{<s>} that represent one and the same object O_i. Thus, a collection of multi-attribute objects O_1, …, O_m, each of which corresponds to its own "cloud" of t different points in the attribute space X_1 × … × X_n, is quite difficult to analyze. Therefore, it is highly desirable to simplify the description and aggregate the presentation of such objects.

In the case of numerical attributes K_1, …, K_n, it is easiest to represent each object O_i with a single vector x_i^{con} = (x_{i1}^{con}, …, x_{in}^{con}), the components of which are determined by additional formal conditions or substantive considerations. For example, it could be a vector that is the center of the group; a vector closest to all vectors of the group; or a vector with total, averaged, or weighted values of the components of the vectors x_i^{<1>}, …, x_i^{<t>} describing the versions O_i^{<s>} of the object O_i. However, when rating scales of attributes have discrete numerical estimates, the "averaged" vector may not physically exist in the original n-dimensional attribute space X_1 × … × X_n, since there are no corresponding numerical gradations on the scales. In order to be able to operate with such vectors, one must either expand the initial rating scales by introducing intermediate numerical gradations, or consider continuous rating scales. Both options, strictly speaking, change the initial formulation of the choice problem. In the case of symbolic, verbal, or mixed attributes K_1, …, K_n, a group of tuples representing versions of an object cannot, even in principle, be replaced by a single tuple with total, averaged, weighted, or mixed values of components, since such mathematical operations on such variables are not feasible.

These and other analogous difficulties can be overcome by using the formalism of multiset theory. A multiset, or a set with repetitions, is a convenient mathematical


model to represent objects that are described by many numerical and verbal attributes. This model allows taking into account simultaneously heterogeneous attributes, possible combinations of attribute values, and the presence of different versions of objects. Let us show how multi-attribute objects can be defined with multisets [1, 7–10].

Let objects O_1, …, O_m exist in single copies and be defined by attributes K_1, …, K_n with numerical and/or verbal rating scales. If each of the attributes K_1, …, K_n has the same scale X = {x^1, …, x^h}, then we associate the object O_i, i = 1, …, m with a multiset of estimates

    A_i = { k_{A_i}(x^1) ◦ x^1, …, k_{A_i}(x^h) ◦ x^h }          (3.1)

over the generating set X = {x^1, …, x^h} of scale gradations. Here, a value k_{A_i}(x^e) of the multiplicity function shows how many times an estimate x^e, e = 1, …, h is present in the description of the object O_i.

If each attribute K_l has its own rating scale X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n, we introduce a single general scale (hyperscale) of attributes—the set X = X_1 ∪ … ∪ X_n = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}}, which consists of n groups of attributes and combines all estimate gradations on the scales of all attributes. Then the object O_i corresponds to a multiset of estimates

    A_i = { k_{A_i}(x_1^1) ◦ x_1^1, …, k_{A_i}(x_1^{h_1}) ◦ x_1^{h_1}; …; k_{A_i}(x_n^1) ◦ x_n^1, …, k_{A_i}(x_n^{h_n}) ◦ x_n^{h_n} }          (3.2)

over the generating set X = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}} of scale gradations. Here, a value k_{A_i}(x_l^{e_l}) of the multiplicity function shows how many times an estimate x_l^{e_l} ∈ X_l, e_l = 1, …, h_l upon the attribute K_l is present in the description of the object O_i.

The expression (3.2) is easy to rewrite into the "usual" form (3.1) if, in the set X = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}}, we change variables as follows: x_1^1 = x^1, …, x_1^{h_1} = x^{h_1}, x_2^1 = x^{h_1+1}, …, x_2^{h_2} = x^{h_1+h_2}, …, x_n^{h_n} = x^h, h = h_1 + … + h_n. Despite the seemingly cumbersome presentation of multi-attribute objects by multisets, such recording forms are very convenient for operations on objects, since calculations are performed in parallel and simultaneously for all elements of all multisets.

Now, let each of the objects O_1, …, O_m be present in several copies O_i^{<s>}, i = 1, …, m, s = 1, …, t, which differ in values of the attributes K_1, …, K_n with numerical and/or verbal rating scales. When there are several versions of an object O_i, this object is represented by the group of all its copies O_i^{<s>}. In many methods, groups of objects are formed using the operations of addition of vectors or union of sets that describe objects. The variety of operations on multisets provides an ability to combine multi-attribute objects in different ways. A group D_f of objects can be formed by defining a multiset


C_f that represents this group by the sum C_f = Σ_i A_i, k_{C_f}(x^e) = Σ_i k_{A_i}(x^e); the union C_f = ∪_i A_i, k_{C_f}(x^e) = max_i k_{A_i}(x^e); or the intersection C_f = ∩_i A_i, k_{C_f}(x^e) = min_i k_{A_i}(x^e) of the multisets A_i that describe the grouped objects, or by one of the linear combinations of operations on the multisets A_i: C_f = Σ_i b_i • A_i, C_f = ∪_i b_i • A_i, C_f = ∩_i b_i • A_i, where b_i > 0 is an integer. When we add multisets, all properties (all values of all attributes) of the individual objects in the group are aggregated. When we unite or intersect multisets, the best properties (maximum values of all attributes) or, correspondingly, the worst properties (minimum values of all attributes) of the grouped objects are strengthened.

We associate a multi-attribute object O_i with a multiset of the form (3.1, 3.2)

    A_i = { k_{A_i}(x^1) ◦ x^1, …, k_{A_i}(x^h) ◦ x^h },

and a version O_i^{<s>} with a multiset

    A_i^{<s>} = { k_{A_i^{<s>}}(x^1) ◦ x^1, …, k_{A_i^{<s>}}(x^h) ◦ x^h }

over the set X = {x^1, …, x^h} or X = X_1 ∪ … ∪ X_n = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}} of estimates. We form the multiset A_i as a weighted sum of the multisets describing the versions of the object:

    A_i = c^{<1>} A_i^{<1>} + … + c^{<t>} A_i^{<t>},

where the multiplicity function of the multiset A_i is calculated by the rule k_{A_i}(x^e) = Σ_s c^{<s>} k_{A_i^{<s>}}(x^e), and a coefficient c^{<s>} characterizes a significance (expert competence, measurement accuracy) of the copy O_i^{<s>}.

Objects O_1, …, O_m, each of which exists in a single version O_i or several versions O_i^{<s>}, and their attributes defined by multisets A_1, …, A_m of the form (3.1, 3.2), can be represented by the matrices 'Object–Attributes' G, H or G^{<>}, H^{<>}. Rows of the matrices G = ||k_{ie}||_{m×h}, H = ||k_{il}||_{m×h}, G^{<>} = ||k_{ie}^{<s>}||_{tm×h}, H^{<>} = ||k_{il}^{<s>}||_{tm×h}, h = h_1 + … + h_n, correspond to objects or object versions. Columns correspond to gradations of rating scales. Elements are the multiplicities k_{A_i}(x^e), k_{A_i}(x_l^{e_l}) or k_{A_i^{<s>}}(x^e), k_{A_i^{<s>}}(x_l^{e_l}) of elements of the multisets A_i, A_i^{<s>}, which respectively describe the objects O_i themselves or the versions O_i^{<s>} of these objects.

Difference and similarity of objects are characterized by their closeness in an attribute space [11, 12]. When comparing objects O_i, O_j, i, j = 1, …, m, presented with multisets, it is convenient to consider them as points of the metric space of multisets (A, d), A = {A_1, …, A_m}, where d is a metric given on the set X = {x^1, …, x^h} or the set X = X_1 ∪ … ∪ X_n = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}} of estimates. We define a difference d(O_i, O_j) between the objects O_i, O_j by one of the Petrovsky metrics (9.47–9.49) or the symmetric (9.50), which we rewrite into the following form:


    d^{Z}_{1p}(O_i, O_j) = [S_ij]^{1/p};          (3.3)

    d^{Z}_{2p}(O_i, O_j) = [S_ij / W]^{1/p};          (3.4)

    d^{Z}_{3p}(O_i, O_j) = [S_ij / M_ij]^{1/p};          (3.5)

    d^{Z}_{4p}(O_i, O_j) = [S_ij / N_ij]^{1/p}.          (3.6)

Here

    S_ij = Σ_{e=1}^{h} w_e |k_{A_i}(x^e) − k_{A_j}(x^e)| = Σ_{l=1}^{n} Σ_{e_l=1}^{h_l} w_l |k_{A_i}(x_l^{e_l}) − k_{A_j}(x_l^{e_l})|,

    L_ij = Σ_{e=1}^{h} w_e min[k_{A_i}(x^e), k_{A_j}(x^e)] = Σ_{l=1}^{n} Σ_{e_l=1}^{h_l} w_l min[k_{A_i}(x_l^{e_l}), k_{A_j}(x_l^{e_l})],

    M_ij = Σ_{e=1}^{h} w_e max[k_{A_i}(x^e), k_{A_j}(x^e)] = Σ_{l=1}^{n} Σ_{e_l=1}^{h_l} w_l max[k_{A_i}(x_l^{e_l}), k_{A_j}(x_l^{e_l})],

    N_ij = Σ_{e=1}^{h} w_e [k_{A_i}(x^e) + k_{A_j}(x^e)] = Σ_{l=1}^{n} Σ_{e_l=1}^{h_l} w_l [k_{A_i}(x_l^{e_l}) + k_{A_j}(x_l^{e_l})],

    W = Σ_{e=1}^{h} w_e max_{A∈A} k_A(x^e) = Σ_{l=1}^{n} Σ_{e_l=1}^{h_l} w_l max_{A∈A} k_A(x_l^{e_l}),

p ≥ 1 is an integer, and w_e, w_l > 0 is the significance (weight) of the element x^e, x_l^{e_l} or, which is the same, of the attribute K_l, l = 1, …, n. The relations S_ij = M_ij − L_ij, N_ij = M_ij + L_ij hold for L_ij, M_ij, N_ij, S_ij.

In practical choice tasks, a difference d(O_i, O_j) between the objects O_i, O_j is often evaluated by the metrics (3.3–3.5) or the symmetric (3.6) for p = 1:

    d_{11}(O_i, O_j) = S_ij;   d_{21}(O_i, O_j) = S_ij / W;          (3.7)
    d_{31}(O_i, O_j) = S_ij / M_ij;   d_{41}(O_i, O_j) = S_ij / N_ij.

The expressions d_{11} and d_{31} generalize for multisets (when w_e = 1) the well-known types of distances between sets. Namely, d_{11} is an analogue of the Fréchet–Nikodym–Aronszajn distance, and d_{31} is an analogue of the Steinhaus distance [13–16]. In the multiset metric space, a similarity s(O_i, O_j) between the objects O_i, O_j is specified by one of the following indicators:


    s_0(O_i, O_j) = L_ij;          (3.8)

    s_1(O_i, O_j) = L_ij / W;          (3.9)

    s_2(O_i, O_j) = 1 − d_{21}(O_i, O_j) = 1 − S_ij / W;          (3.10)

    s_3(O_i, O_j) = 1 − d_{31}(O_i, O_j) = L_ij / M_ij;          (3.11)

    s_4(O_i, O_j) = 1 − d_{41}(O_i, O_j) = 2 L_ij / N_ij.          (3.12)

The expressions s_0, s_1, s_2, s_3, s_4 generalize for multisets the well-known non-metric indicators (indexes) of object similarity introduced for sets. Respectively, s_0 (3.8) is an analogue of the measure of absolute similarity, s_1 (3.9) is an analogue of the Russel–Rao measure of similarity, s_2 (3.10) is an analogue of the simple matching coefficient, s_3 (3.11) is an analogue of the Jaccard coefficient or Tanimoto measure, and s_4 (3.12) is an analogue of the Sørensen–Oosting index [9, 10, 12, 17–20]. Note that the similarity indexes s_1 and s_2 are tied by the relation

    s_2(A_i, A_j) = s_1(A_i, A_j) + s_1(Ā_i, Ā_j),

which follows from the expression for the cardinality of the decomposition of the maximum multiset Z into two blocks, which are coverings and overlappings formed by the multisets A_i, A_j and their complements [21–23].

Representation of multi-attribute objects using the tools of multiset theory makes it possible to solve all types of tasks of individual and collective choice and greatly facilitates obtaining the results.
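To make the formulas above concrete, here is a minimal Python sketch (ours, not from the book) that computes the distance d_11 (3.7) and the Jaccard-type similarity s_3 (3.11) for objects stored as multiplicity vectors over the grade set X; the data rows are three of the objects from the demonstrative example in Sect. 3.2 below, and all helper names are our own.

```python
from itertools import combinations

# Objects as multiplicity vectors k_Ai over the grade set X = {x1, ..., x5}.
def S(a, b, w):  return sum(wi * abs(ai - bi) for ai, bi, wi in zip(a, b, w))
def L(a, b, w):  return sum(wi * min(ai, bi) for ai, bi, wi in zip(a, b, w))
def M(a, b, w):  return sum(wi * max(ai, bi) for ai, bi, wi in zip(a, b, w))

def d11(a, b, w): return S(a, b, w)                 # analogue of the Frechet-type distance
def s3(a, b, w):  return L(a, b, w) / M(a, b, w)    # Jaccard / Tanimoto analogue

objects = {"A1": [0, 0, 0, 4, 4], "A6": [0, 0, 0, 4, 4], "A2": [2, 4, 1, 1, 0]}
w = [1.0] * 5                                       # equal weights of the grades

for (p, a), (q, b) in combinations(objects.items(), 2):
    print(p, q, "d11 =", d11(a, b, w), " s3 =", round(s3(a, b, w), 3))
# A1 and A6 coincide (d11 = 0, s3 = 1.0); A1 and A2 are far apart (d11 = 14).
```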

3.2 Demonstrative Example: Multi-attribute Objects

We give a demonstrative example of the representation of multi-attribute objects. There are ten objects O_1, …, O_10, which are described by eight attributes K_1, …, K_8 with five-point rating scales X = {x^1, x^2, x^3, x^4, x^5} of estimates. For instance, let the objects be pupils, and the attributes be annual marks in the studied subjects: K_1 Mathematics, K_2 Physics, K_3 Chemistry, K_4 Biology, K_5 Geography, K_6 History, K_7 Literature, K_8 Foreign language. Gradations of the rating scales can be numerical or verbal and mean: x^1 – 1/very bad, x^2 – 2/bad, x^3 – 3/satisfactory, x^4 – 4/good, x^5 – 5/excellent. Or the objects are questions from a questionnaire for studying public opinion on a certain problem. Then the object attributes are the answers of respondents K_1, …, K_8, which are encoded as follows: x^1 – 1/completely disagree, x^2 – 2/disagree, x^3 – 3/neutral, x^4 – 4/agree, x^5 – 5/completely agree.
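As a small illustration (our own sketch, not part of the book), the common five-grade scale and the conversion of a pupil's vector of annual marks into the multiset of the form (3.1) can be coded as follows; the data row corresponds to pupil O1 from Table 3.1.

```python
from collections import Counter

# Verbal meanings of the common five-grade scale X = {x1, ..., x5}
SCALE = {1: "very bad", 2: "bad", 3: "satisfactory", 4: "good", 5: "excellent"}

# Annual marks of pupil O1 upon the subjects K1..K8 (row x1 of Table 3.1)
marks_O1 = [4, 5, 4, 5, 4, 5, 4, 5]

# Multiset of estimates of the form (3.1): multiplicity of each grade x^e
counts = Counter(marks_O1)
A1 = [counts.get(e, 0) for e in sorted(SCALE)]
print(A1)   # [0, 0, 0, 4, 4] -> {0◦x1, 0◦x2, 0◦x3, 4◦x4, 4◦x5}
```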


Answers of university students, who evaluated a course of lectures by indicating numerical estimates, are characterized by the matrix F (Table 3.1), which is taken from the book [24]. The same answers of students, or annual marks of pupils, recorded as multisets of numerical or verbal estimates, are presented by the matrix G (Table 3.2). For example, in Table 3.1, the annual marks of the pupil O_1 are defined by the vector x_1 = (4, 5, 4, 5, 4, 5, 4, 5) of estimates, and, in Table 3.2, by the multiset of estimates of the form (3.1):

    A_1 = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 4 ◦ x^4, 4 ◦ x^5}.

This recording form shows that, for a year, the pupil O_1 had four marks x^4 meaning '4/good' and four marks x^5 meaning '5/excellent'. The pupil O_1 did not receive other marks.

Table 3.1 Matrix F 'Objects–Attributes'

O\K     K1  K2  K3  K4  K5  K6  K7  K8
x1       4   5   4   5   4   5   4   5
x2       4   1   2   1   3   2   2   2
x3       1   1   3   1   4   1   1   4
x4       5   3   2   4   4   5   4   5
x5       4   4   4   4   4   5   4   4
x6       5   5   4   4   4   5   5   4
x7       4   1   2   3   3   3   1   2
x8       4   5   4   2   3   4   5   3
x9       3   2   3   1   3   3   2   2
x10      5   5   4   5   3   5   5   4

Table 3.2 Matrix G 'Objects–Attributes'

O\X     x1  x2  x3  x4  x5
A1       0   0   0   4   4
A2       2   4   1   1   0
A3       5   0   1   2   0
A4       0   1   1   3   3
A5       0   0   0   7   1
A6       0   0   0   4   4
A7       2   2   3   1   0
A8       0   1   2   3   2
A9       1   3   4   0   0
A10      0   0   1   2   5


Sometimes a multiset is written as A_1 = {4 ◦ x^5, 4 ◦ x^4, 0 ◦ x^3, 0 ◦ x^2, 0 ◦ x^1}, arranging its elements in the reverse order (from the best to the worst). But, as a rule, the elements of a multiset are considered to be disordered.

If the object O_1 is characterized by the attributes K_1, …, K_n, which have their own rating scales X_l = {x_l^1, x_l^2, x_l^3, x_l^4, x_l^5}, l = 1, …, 8, then this object is represented by a multiset of estimates of the form (3.2):

    A_1 = {0 ◦ x_1^1, 0 ◦ x_1^2, 0 ◦ x_1^3, 1 ◦ x_1^4, 0 ◦ x_1^5;  0 ◦ x_2^1, 0 ◦ x_2^2, 0 ◦ x_2^3, 0 ◦ x_2^4, 1 ◦ x_2^5;
           0 ◦ x_3^1, 0 ◦ x_3^2, 0 ◦ x_3^3, 1 ◦ x_3^4, 0 ◦ x_3^5;  0 ◦ x_4^1, 0 ◦ x_4^2, 0 ◦ x_4^3, 0 ◦ x_4^4, 1 ◦ x_4^5;
           0 ◦ x_5^1, 0 ◦ x_5^2, 0 ◦ x_5^3, 1 ◦ x_5^4, 0 ◦ x_5^5;  0 ◦ x_6^1, 0 ◦ x_6^2, 0 ◦ x_6^3, 0 ◦ x_6^4, 1 ◦ x_6^5;
           0 ◦ x_7^1, 0 ◦ x_7^2, 0 ◦ x_7^3, 1 ◦ x_7^4, 0 ◦ x_7^5;  0 ◦ x_8^1, 0 ◦ x_8^2, 0 ◦ x_8^3, 0 ◦ x_8^4, 1 ◦ x_8^5}.

It follows from this recording form that, for a year, the pupil O_1 received the mark x_l^4 '4/good' in mathematics, chemistry, geography and literature, and the mark x_l^5 '5/excellent' in physics, biology, history and a foreign language. There are no other marks.

Let us now consider the situation when each object is present in several versions (copies) that differ from each other. For example, the pupils O_1, …, O_10 received marks in the same eight studied subjects K_1, …, K_8 per each half-year (semester), that is, twice per year. Or eight respondents K_1, …, K_8, answering the same questions O_1, …, O_10, were interviewed twice. Thus, each object will be defined not by one, but by two vectors/tuples of attributes, or by two multisets. Such a description of an object version expresses the individual opinion of a certain expert, and a description of the object on the whole is an aggregated collective judgment of the two experts. Different versions of objects are given by the matrices F^{<>} and G^{<>}, respectively (Tables 3.3 and 3.4).

The object O_1 is represented in Table 3.3 by two vectors/tuples x_1^{<1>} = (4, 5, 4, 5, 4, 5, 4, 5) and x_1^{<2>} = (5, 5, 5, 5, 4, 4, 4, 5). Aggregated estimates of the object O_1, when they are averaged, should be described by the vector x_1^{av} = (4.5, 5.0, 4.5, 5.0, 4.0, 4.5, 4.0, 5.0) of "average" estimates, although non-integer numbers are absent in the accepted five-point scale X = {1, 2, 3, 4, 5}. Two versions of the object O_1 are represented in the matrix G^{<>} (Table 3.4) by multisets of estimates of the form (3.1): A_1^{<1>} = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 4 ◦ x^4, 4 ◦ x^5} and A_1^{<2>} = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 3 ◦ x^4, 5 ◦ x^5}. We consider the semi-annual estimates to be equally significant, and the respondents to be equally competent: c^{<1>} = c^{<2>} = 1. Then the object O_1 on the whole is described by the multiset

    A_1 = A_1^{<1>} + A_1^{<2>} = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 4 ◦ x^4, 4 ◦ x^5} + {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 3 ◦ x^4, 5 ◦ x^5}
        = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 7 ◦ x^4, 9 ◦ x^5}.

We can see from this recording form that, for a year, the pupil O_1 received seven marks x^4 '4/good', nine marks x^5 '5/excellent', and no other marks.
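The aggregation just described is easy to trace step by step. The following sketch (ours, with assumed helper names) turns the two semester vectors into version multisets of the form (3.1) and then takes their weighted sum.

```python
from collections import Counter

# Vectors of marks -> version multisets (3.1) -> weighted sum A_i
GRADES = [1, 2, 3, 4, 5]

def to_multiset(marks):
    """Multiplicity vector k_A(x^e) of the form (3.1) for one version."""
    c = Counter(marks)
    return [c.get(e, 0) for e in GRADES]

def aggregate(versions, weights):
    """Weighted sum of version multisets: k_A(x^e) = sum_s c^<s> * k_A^<s>(x^e)."""
    return [sum(w * v[e] for w, v in zip(weights, versions))
            for e in range(len(GRADES))]

# Two semester vectors of pupil O1 (rows x1^<1>, x1^<2> of Table 3.3)
x1_v1 = [4, 5, 4, 5, 4, 5, 4, 5]
x1_v2 = [5, 5, 5, 5, 4, 4, 4, 5]

A1_v1, A1_v2 = to_multiset(x1_v1), to_multiset(x1_v2)
print(A1_v1, A1_v2)                       # [0,0,0,4,4] [0,0,0,3,5]
print(aggregate([A1_v1, A1_v2], [1, 1]))  # [0,0,0,7,9] = A1
```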


Table 3.3 Matrix F^{<>} 'Objects–Attributes' (each cell lists the estimates of the versions ⟨1⟩ and ⟨2⟩)

O\K      K1    K2    K3    K4    K5    K6    K7    K8
x1      4 5   5 5   4 5   5 5   4 4   5 4   4 4   5 5
x2      4 3   1 2   2 1   1 1   3 4   2 3   2 3   2 2
x3      1 1   1 2   3 3   1 1   4 5   1 2   1 1   4 3
x4      5 4   3 4   2 3   4 5   4 4   5 5   4 3   5 4
x5      4 5   4 5   4 3   4 4   4 4   5 4   4 5   4 4
x6      5 4   5 5   4 4   4 4   4 4   5 4   5 5   4 5
x7      4 3   1 2   2 1   3 4   3 2   3 4   1 2   2 3
x8      4 5   5 4   4 5   2 3   3 4   4 5   5 4   3 4
x9      3 4   2 3   3 2   1 2   3 2   3 3   2 3   2 2
x10     5 3   5 4   4 3   5 4   3 2   5 4   5 2   4 4

Table 3.4 Matrix G^{<>} 'Objects–Attributes' (each cell lists the multiplicities for the versions ⟨1⟩ and ⟨2⟩)

O\X      x1    x2    x3    x4    x5
A1      0 0   0 0   0 0   4 3   4 5
A2      2 2   4 2   1 3   1 1   0 0
A3      5 3   0 2   1 2   2 0   0 1
A4      0 0   1 0   1 2   3 4   3 2
A5      0 0   0 0   0 1   7 4   1 3
A6      0 0   0 0   0 0   4 5   4 3
A7      2 1   2 3   3 2   1 2   0 0
A8      0 0   1 0   2 1   3 4   2 3
A9      1 0   3 4   4 3   0 1   0 0
A10     0 0   0 2   1 2   2 4   5 0


Table 3.5 Matrix H 'Objects–Attributes' (multiplicities of the grades x_l^1 … x_l^5 upon each criterion K_l)

O\X    K1: x1..x5      K2: x1..x5      K3: x1..x5      K4: x1..x5
A1     0 0 0 1 1       0 0 0 0 2       0 0 0 1 1       0 0 0 0 2
A2     0 0 1 1 0       1 1 0 0 0       1 1 0 0 0       2 0 0 0 0
A3     2 0 0 0 0       1 1 0 0 0       0 0 2 0 0       2 0 0 0 0
A4     0 0 0 1 1       0 0 1 1 0       0 1 1 0 0       0 0 0 1 1
A5     0 0 0 1 1       0 0 0 1 1       0 0 1 1 0       0 0 0 2 0
A6     0 0 0 1 1       0 0 0 0 2       0 0 0 2 0       0 0 0 2 0
A7     0 0 1 1 0       1 1 0 0 0       1 1 0 0 0       0 0 1 1 0
A8     0 0 0 1 1       0 0 0 1 1       0 0 0 1 1       0 1 1 0 0
A9     0 0 1 1 0       0 1 1 0 0       0 1 1 0 0       1 1 0 0 0
A10    0 0 1 0 1       0 0 0 1 1       0 0 1 1 0       0 0 0 1 1

O\X    K5: x1..x5      K6: x1..x5      K7: x1..x5      K8: x1..x5
A1     0 0 0 2 0       0 0 0 1 1       0 0 0 2 0       0 0 0 0 2
A2     0 0 1 1 0       0 1 1 0 0       0 1 1 0 0       0 2 0 0 0
A3     0 0 0 1 1       1 1 0 0 0       2 0 0 0 0       0 0 1 1 0
A4     0 0 0 2 0       0 0 0 0 2       0 0 1 1 0       0 0 0 1 1
A5     0 0 0 2 0       0 0 0 1 1       0 0 0 1 1       0 0 0 2 0
A6     0 0 0 2 0       0 0 0 1 1       0 0 0 0 2       0 0 0 1 1
A7     0 1 1 0 0       0 0 1 1 0       1 1 0 0 0       0 1 1 0 0
A8     0 0 1 1 0       0 0 0 1 1       0 0 0 1 1       0 0 1 1 0
A9     0 1 1 0 0       0 0 2 0 0       0 1 1 0 0       0 0 2 0 0
A10    0 1 1 0 0       0 0 0 1 1       0 1 0 0 1       0 0 0 2 0

The attributes of the objects O_1, …, O_10 on the whole, defined by multisets of the form (3.2), are given in the matrix H (Table 3.5). For example, the object O_1 corresponds to the multiset

    A_1 = {0 ◦ x_1^1, 0 ◦ x_1^2, 0 ◦ x_1^3, 1 ◦ x_1^4, 1 ◦ x_1^5;  0 ◦ x_2^1, 0 ◦ x_2^2, 0 ◦ x_2^3, 0 ◦ x_2^4, 2 ◦ x_2^5;
           0 ◦ x_3^1, 0 ◦ x_3^2, 0 ◦ x_3^3, 1 ◦ x_3^4, 1 ◦ x_3^5;  0 ◦ x_4^1, 0 ◦ x_4^2, 0 ◦ x_4^3, 0 ◦ x_4^4, 2 ◦ x_4^5;
           0 ◦ x_5^1, 0 ◦ x_5^2, 0 ◦ x_5^3, 2 ◦ x_5^4, 0 ◦ x_5^5;  0 ◦ x_6^1, 0 ◦ x_6^2, 0 ◦ x_6^3, 1 ◦ x_6^4, 1 ◦ x_6^5;
           0 ◦ x_7^1, 0 ◦ x_7^2, 0 ◦ x_7^3, 2 ◦ x_7^4, 0 ◦ x_7^5;  0 ◦ x_8^1, 0 ◦ x_8^2, 0 ◦ x_8^3, 0 ◦ x_8^4, 2 ◦ x_8^5}.

This recording form shows that, for two semesters, the pupil O_1 received one mark x_l^4 '4/good' and one mark x_l^5 '5/excellent' in mathematics, chemistry and history, two marks x_l^5 '5/excellent' in physics, biology and a foreign language, and two marks x_l^4 '4/good' in geography and literature. The pupil O_1 did not receive other marks.

The matrix H^{<>}, analogous to the matrix H in Table 3.5, includes the multiplicities of elements of the multisets A_i^{<s>}, which describe the versions O_i^{<s>} of objects. This matrix is not given here due to its large size.

Further, we shall describe new methods of group verbal decision analysis that allow us to order and classify objects (options, alternatives), which are present in several different versions (copies, exemplars) and whose features are characterized by many quantitative (numerical) and/or qualitative (symbolic, verbal) attributes.
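To illustrate the change of variables that maps the per-criterion scales onto the common hyperscale of h = h_1 + … + h_n gradations, the following sketch (ours, with assumed helper names) builds the H-matrix row of pupil O_1 from the two semester vectors of Table 3.3.

```python
# Build a row of matrix H (form (3.2), hyperscale of 8 x 5 = 40 grades)
# from the per-semester mark vectors of one pupil (Table 3.3, rows for O1).
N_CRITERIA, N_GRADES = 8, 5

def h_row(versions, weights):
    row = [0] * (N_CRITERIA * N_GRADES)
    for w, marks in zip(weights, versions):
        for l, mark in enumerate(marks):            # l-th criterion K_{l+1}
            row[l * N_GRADES + (mark - 1)] += w     # grade x_l^mark
    return row

semester1 = [4, 5, 4, 5, 4, 5, 4, 5]
semester2 = [5, 5, 5, 5, 4, 4, 4, 5]
A1 = h_row([semester1, semester2], [1, 1])
print(A1[:10])   # first two criteria K1, K2: [0,0,0,1,1, 0,0,0,0,2], as in Table 3.5
```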

3.3 Group Ordering of Objects by Multicriteria Pairwise Comparisons: Method RAMPA

In the task of collective ordering, it is required to rank m multi-attribute objects O_1, …, O_m, which are evaluated by t decision makers/experts upon n criteria K_1, …, K_n with numerical and/or verbal rating scales X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n. A group ranking of objects expresses the collective preference/knowledge of the members of a decision making group (DMG), which represents the individual preferences/knowledge of the actors in an aggregated form.

The method for Ranking by Aggregated Multicriteria Pairwise comparisons of Alternatives (RAMPA) is intended for collective ordering of objects [7, 9, 25]. The method is based on aggregation of individual preferences of several experts, which are expressed using matrices of pairwise comparisons of objects upon many criteria without an explicit description of objects by their attributes. Consistency of expert judgments is verified and evaluated at all stages of forming a collective decision. The method allows us to get a group ranking of multi-attribute objects without preconstruction of individual rankings and includes the following steps.

1°. Each expert s, s = 1, …, t compares all objects O_1, …, O_m in pairs with each other upon each criterion K_l, l = 1, …, n separately. Form nt matrices B_{[l]}^{<s>} = ||b_{[l]ij}^{<s>}||_{m×m} of individual pairwise comparisons of objects by the expert s upon the criteria K_1, …, K_n, the elements of which are b_{[l]ij}^{<s>} = 2 if O_i ≻ O_j; b_{[l]ij}^{<s>} = 1 if O_i ≈ O_j or the comparison of the objects O_i and O_j is difficult for the expert; b_{[l]ij}^{<s>} = 0 if O_i ≺ O_j.

2°. Verify the matrices B_{[l]}^{<s>} of individual pairwise comparisons of objects for transitivity. The presence of nontransitive triads of estimates indicates an inconsistency of expert judgments. Such an expert is re-interviewed, or his/her assessments are excluded from further consideration.

3°. Construct an individual ranking R_l^{<s>} of objects by preference for each expert s upon each criterion K_l. The objects O_1, …, O_m are ordered in descending order of the row sums b_{[l]i}^{<s>} = Σ_{j=1}^m b_{[l]ij}^{<s>} of elements of each matrix B_{[l]}^{<s>} of individual pairwise comparisons.

4°. Verify the null hypothesis on consistency of individual expert preferences. For each criterion K_l, the proximity of the individual rankings R_l^{<s>} of objects is evaluated using the Kendall coefficient ω of concordance. This coefficient is calculated by the formula ω = 12ζ / t²(m³ − m), where ζ = Σ_{i=1}^m (Σ_{s=1}^t r_i^{<s>} − r^{av})², r_i^{<s>} is the rank assigned by the expert s to the object O_i, r^{av} = (1/m) Σ_{i=1}^m (Σ_{s=1}^t r_i^{<s>}) = t(m + 1)/2 is the rank averaged over experts, m is the total number of objects, and t is the total number of experts. If there are tied (connected) ranks of objects in the ranking R_l^{<s>}, then the coefficient ω = 12ζ / [t²(m³ − m) − t Σ_{s=1}^t τ^{<s>}], where τ^{<s>} = Σ_{k=1}^{N_s} ((n_k)³ − n_k), n_k is the number of matching ranks with multiplicity n, k is the serial number of a group of connected ranks, and N_s is the number of groups of connected ranks. If there are no connected ranks of objects in the ranking R_l^{<s>}, then k = 0, n_k = 0, and both formulas for ω coincide.

The Kendall concordance coefficient ω is the ratio of the estimated variance of a random variable to its maximum value: ω = 1 for completely agreed opinions of experts, ω = 0 for completely non-agreed opinions. A value of ω is compared with the χ²-distribution that has m − 1 degrees of freedom. If the inequality ωt(m − 1) ≥ χ²_{1−α}(m − 1) holds for the selected significance level α, for example, α = 0.05, then the concordance coefficient ω is significantly different from zero, and therefore the individual judgments of the experts upon the criterion K_l differ little.

5°. When experts have different competencies and/or influences, introduce a competency indicator c^{<s>} of the expert s, for example, based on the results of a mutual evaluation of experts or the opinion of a super-decision maker [7]. Indicators of expert competencies can be normalized as follows: Σ_{s=1}^t c^{<s>} = 1.

6°. Construct a collective ranking R_l^{gr} of objects by preference for each particular criterion K_l. The objects O_1, …, O_m are ordered in descending order of the row sums b_{[l]i} = Σ_{s=1}^t c^{<s>} b_{[l]i}^{<s>} of elements of the matrix B_{[l]} of group pairwise comparisons of objects. The matrix B_{[l]} is a weighted sum B_{[l]} = c^{<1>} B_{[l]}^{<1>} + … + c^{<t>} B_{[l]}^{<t>} of the matrices B_{[l]}^{<s>} of individual pairwise comparisons.

7°. Verify the null hypothesis on consistency of group expert preferences. The proximity of the collective rankings R_l^{gr} of objects for each criterion K_l is evaluated similarly to step 4°.

8°. When the criteria K_1, …, K_n have different importances (weights) for a super-decision maker and/or experts, determine an importance w_l of the criterion K_l, for example, using one of the procedures [7, 26]. Weights of criteria can be normalized as follows: Σ_{l=1}^n w_l = 1.

9°. Construct a general collective ranking R_P^{gr} of objects by preference. The objects O_1, …, O_m are ordered in descending order of the row sums b_i = Σ_{l=1}^n w_l b_{[l]i} of elements of the resulting matrix B of group pairwise comparisons of objects. The matrix B is a weighted sum B = w_1 B_{[1]} + … + w_n B_{[n]} of the matrices B_{[l]} of group pairwise comparisons for all criteria. The most preferable object O* takes the first place in the general collective ranking R_P^{gr}.

10°. Verify the null hypothesis on the equivalence of all objects. For each criterion K_l, the statistic H = (4/tm) Σ_{i=1}^m (b_{[l]i})² − (m − 1)² is calculated, a value of which is compared with the χ²-distribution that has m − 1 degrees of freedom. If the inequality H ≥ χ²_{1−α}(m − 1) holds for the selected significance level α, for example, α = 0.05, then the statistic H differs significantly from zero, and therefore the objects are considered to be different.

11°. If steps 7° and 10° do not give an acceptable result, then verify the matrices B_{[l]} of pairwise comparisons of objects upon criteria for transitivity. When nontransitive triads of estimates are found, correct the matrices B_{[l]}, setting their nontransitive elements equal.

12°. Construct corrected collective rankings R_l^{gr} of objects for each criterion K_l, and verify the consistency of the general group preference similarly to steps 6°–9°.

13°. Verify the null hypothesis on the presence of correlation between the collective rankings of objects for different criteria. For each pair of criteria K_q and K_p, the Spearman coefficient of rank correlation between the rankings R_q^{gr} and R_p^{gr} is calculated by the formula ρ_{qp} = 1 − 6ζ_{qp} / (m³ − m), where ζ_{qp} = Σ_{i=1}^m (r_{iq} − r_{ip})², and r_{iq}, r_{ip} are the ranks of the object O_i in the rankings R_q^{gr} and R_p^{gr}, respectively. If there are connected ranks of objects, the Spearman coefficient is

    ρ_{qp} = [(m³ − m) − 6ζ_{qp} + 3(τ_q + τ_p)] / {[(m³ − m) − τ_q] [(m³ − m) − τ_p]}^{1/2},

where τ_q and τ_p are calculated by the same formula as τ^{<s>}. The rank correlation coefficient ρ_{qp} characterizes the similarity of two rankings R_q^{gr} and R_p^{gr} of m objects (it does not matter whether for the experts q and p or for the criteria K_q and K_p). The coefficient ρ_{qp} = 1 for identical rankings R_q^{gr} and R_p^{gr}, when r_{iq} = r_{ip}; ρ_{qp} = 0 for linearly independent rankings; ρ_{qp} = −1 for opposite rankings.

14°. Verify the hypothesis on the presence or absence of correlation between the collective rankings of objects and evaluate the statistical significance of the Spearman coefficient ρ_{qp} using the χ²-distribution that has m − 1 degrees of freedom. If the inequality ρ_{qp} ≥ χ²_{1−α}(m − 1) holds for the selected significance level α, for example, α = 0.05, then the correlation coefficient ρ_{qp} differs significantly from zero and, therefore, the rankings R_q^{gr} and R_p^{gr} of objects are significantly dependent.

3.4 Demonstrative Example: Method RAMPA

Using the RAMPA method, let us construct a collective ranking of ten objects (pupils) O_1, …, O_10, which are compared in pairs by two experts (one per semester) upon eight qualitative criteria (the studied subjects) K_1, …, K_8. Hereinafter, assume that all experts are equally competent, c^{<s>} = 1, and the importance of the criteria is the same, w_l = 1, for all experts.

Table 3.6 presents the matrices B_{[1]}^{<1>} and B_{[1]}^{<2>} of individual pairwise comparisons of objects by the experts 1 and 2 upon the criterion K_1. For brevity, the matrices B_{[l]}^{<s>} of individual pairwise comparisons of objects by the experts 1 and 2 upon the criteria K_2–K_8 are not given. Tables 3.7 and 3.8 present, respectively, the matrices B_{[1]}–B_{[8]} of group pairwise comparisons of objects for the criteria K_1–K_8, and the resulting matrix B of pairwise comparisons of objects. Note that the row sums of elements of the matrix B_{[l]} for any criterion K_l and of the resulting matrix B of pairwise comparisons can be calculated without forming these matrices themselves. The general group ranking R_P^{gr} of objects, which is constructed according to the row sums b_i of elements of the resulting matrix B of pairwise comparisons, looks like this (see (3.13) below).

Table 3.6 Matrices B_{[1]}^{<1>} and B_{[1]}^{<2>} of individual pairwise comparisons of objects by the experts 1 and 2 upon the criterion K_1

Expert 1

B_[1]^<1>   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[1]i^<1>
O1           1   1   2   0   1   0   1   1   2   0  |   9
O2           1   1   2   0   1   0   1   1   2   0  |   9
O3           0   0   1   0   0   0   0   0   0   0  |   1
O4           2   2   2   1   2   1   2   2   2   1  |  17
O5           1   1   2   0   1   0   1   1   2   0  |   9
O6           2   2   2   1   2   1   2   2   2   1  |  17
O7           1   1   2   0   1   0   1   1   2   0  |   9
O8           1   1   2   0   1   0   1   1   2   0  |   9
O9           0   0   2   0   0   0   0   0   1   0  |   3
O10          2   2   2   1   2   1   2   2   2   1  |  17

Expert 2

B_[1]^<2>   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[1]i^<2>
O1           1   2   2   2   1   2   2   1   2   2  |  17
O2           0   1   2   0   0   0   1   0   0   1  |   5
O3           0   0   1   0   0   0   0   0   0   0  |   1
O4           0   2   2   1   0   1   2   0   1   2  |  11
O5           1   2   2   2   1   2   2   1   2   2  |  17
O6           0   2   2   1   0   1   2   0   1   2  |  11
O7           0   1   2   0   0   0   1   0   0   1  |   5
O8           1   2   2   2   1   2   2   1   2   2  |  17
O9           0   2   2   1   0   1   2   0   1   2  |  11
O10          0   1   2   0   0   0   1   0   0   1  |   5
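As a check on step 3° of the method, the small sketch below (ours) recomputes the row sums b_{[1]i}^{<s>} of the two matrices in Table 3.6 and the corresponding per-expert orderings of the objects used later in this example.

```python
import numpy as np

# Row sums and individual orderings for the two expert matrices of Table 3.6.
B1_e1 = np.array([
    [1,1,2,0,1,0,1,1,2,0], [1,1,2,0,1,0,1,1,2,0], [0,0,1,0,0,0,0,0,0,0],
    [2,2,2,1,2,1,2,2,2,1], [1,1,2,0,1,0,1,1,2,0], [2,2,2,1,2,1,2,2,2,1],
    [1,1,2,0,1,0,1,1,2,0], [1,1,2,0,1,0,1,1,2,0], [0,0,2,0,0,0,0,0,1,0],
    [2,2,2,1,2,1,2,2,2,1]])
B1_e2 = np.array([
    [1,2,2,2,1,2,2,1,2,2], [0,1,2,0,0,0,1,0,0,1], [0,0,1,0,0,0,0,0,0,0],
    [0,2,2,1,0,1,2,0,1,2], [1,2,2,2,1,2,2,1,2,2], [0,2,2,1,0,1,2,0,1,2],
    [0,1,2,0,0,0,1,0,0,1], [1,2,2,2,1,2,2,1,2,2], [0,2,2,1,0,1,2,0,1,2],
    [0,1,2,0,0,0,1,0,0,1]])

for name, B in (("expert 1", B1_e1), ("expert 2", B1_e2)):
    b = B.sum(axis=1)
    order = [f"O{i + 1}" for i in np.argsort(-b, kind="stable")]
    print(name, b.tolist(), order)
# expert 1: sums [9, 9, 1, 17, 9, 17, 9, 9, 3, 17] -> O4, O6, O10 on top
# expert 2: sums [17, 5, 1, 11, 17, 11, 5, 17, 11, 5] -> O1, O5, O8 on top
```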


Table 3.7 Matrices B_{[1]}–B_{[8]} of group pairwise comparisons of objects for the criteria K_1–K_8

Criterion K_1

B_[1]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[1]i
O1       2   3   3   2   2   2   3   2   4   2  |  26
O2       1   2   4   0   1   0   2   1   2   1  |  14
O3       0   0   2   0   0   0   0   0   0   0  |   2
O4       2   4   4   2   2   2   4   2   3   3  |  28
O5       2   3   4   2   2   2   3   2   4   2  |  26
O6       2   4   4   2   2   2   4   2   3   3  |  28
O7       1   2   4   0   1   0   2   1   2   1  |  14
O8       2   3   4   2   2   2   3   2   4   2  |  26
O9       0   2   4   1   0   1   2   0   2   2  |  14
O10      2   3   4   1   2   1   3   2   2   2  |  22

Criterion K_2

B_[2]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[2]i
O1       2   4   4   4   3   2   4   3   4   3  |  33
O2       0   2   2   0   0   0   2   0   0   0  |   6
O3       0   2   2   0   0   0   2   0   0   0  |   6
O4       0   4   4   2   0   0   4   1   4   1  |  20
O5       1   4   4   4   2   1   4   2   4   2  |  28
O6       2   4   4   4   3   2   4   3   4   3  |  33
O7       0   2   2   0   0   0   2   0   0   0  |   6
O8       1   4   4   3   2   1   4   2   4   2  |  27
O9       0   4   4   0   0   0   4   0   2   0  |  14
O10      1   4   4   3   2   1   4   2   4   2  |  27

Criterion K_3

B_[3]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[3]i
O1       2   4   4   4   3   3   4   2   4   3  |  33
O2       0   2   0   1   0   0   2   0   0   0  |   5
O3       0   4   2   3   1   0   4   0   3   1  |  18
O4       0   3   1   2   1   0   3   0   2   1  |  13
O5       1   4   3   3   2   1   4   1   4   2  |  25
O6       1   4   4   4   3   2   4   1   4   3  |  30
O7       0   2   0   1   0   0   2   0   0   0  |   5
O8       2   4   4   4   3   3   4   2   4   3  |  33
O9       0   4   1   2   0   0   4   0   2   0  |  13
O10      1   4   3   3   2   1   4   1   4   2  |  25

Criterion K_4

B_[4]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[4]i
O1       2   4   4   3   4   4   4   4   4   3  |  36
O2       0   2   2   0   0   0   0   0   1   0  |   5
O3       0   2   2   0   0   0   0   0   1   0  |   5
O4       1   4   4   2   3   3   4   4   4   2  |  31
O5       0   4   4   1   2   2   3   4   4   1  |  25
O6       0   4   4   1   2   2   3   4   4   1  |  25
O7       0   4   4   0   1   1   2   4   4   1  |  21
O8       0   4   4   0   0   0   0   2   4   0  |  14
O9       0   3   3   0   0   0   0   0   2   0  |   8
O10      1   4   4   2   3   3   3   4   4   2  |  30

Criterion K_5

B_[5]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[5]i
O1       2   3   1   2   2   2   4   3   4   4  |  27
O2       1   2   0   1   1   1   3   2   3   3  |  17
O3       3   4   2   3   3   3   4   4   4   4  |  34
O4       2   3   1   2   2   2   4   3   4   4  |  27
O5       2   3   1   2   2   2   4   3   4   4  |  27
O6       2   3   1   2   2   2   4   3   4   4  |  27
O7       0   1   0   0   0   0   2   1   2   2  |   8
O8       1   2   0   1   1   1   3   2   3   3  |  17
O9       0   1   0   0   0   0   2   1   2   2  |   8
O10      0   1   0   0   0   0   2   1   2   2  |   8

Criterion K_6

B_[6]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[6]i
O1       2   4   4   1   2   2   3   2   4   2  |  26
O2       0   2   4   0   0   0   0   0   1   0  |   7
O3       0   0   2   0   0   0   0   0   0   0  |   2
O4       3   4   4   2   3   3   4   3   4   3  |  33
O5       2   4   4   1   2   2   3   2   4   2  |  26
O6       2   4   4   1   2   2   3   2   4   2  |  26
O7       1   4   4   0   1   1   2   0   3   1  |  17
O8       2   4   4   1   2   2   4   2   4   2  |  27
O9       0   3   4   0   0   0   1   0   2   0  |  10
O10      2   4   4   1   2   2   3   2   4   2  |  26

Criterion K_7

B_[7]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[7]i
O1       2   4   4   3   1   0   4   1   4   2  |  25
O2       0   2   4   1   0   0   4   0   2   2  |  15
O3       0   0   2   0   0   0   1   0   0   0  |   3
O4       1   3   4   2   1   0   4   0   3   2  |  20
O5       3   4   4   3   2   1   4   2   4   2  |  29
O6       4   4   4   4   3   2   4   3   4   3  |  35
O7       0   0   3   0   0   0   2   0   0   1  |   6
O8       3   4   4   4   2   1   4   2   4   3  |  31
O9       0   2   4   1   0   0   4   0   2   2  |  15

Criterion K_8

B_[8]   O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_[8]i
O1       2   4   4   3   4   3   4   4   4   4  |  36
O2       0   2   0   0   0   0   1   0   2   0  |   5
O3       0   4   2   0   1   1   3   2   4   1  |  18
O4       1   4   4   2   3   2   4   3   4   3  |  30
O5       0   4   3   1   2   1   4   3   4   2  |  24
O6       1   4   3   2   3   2   4   4   4   3  |  30
O7       0   3   1   0   0   0   2   0   3   0  |   9
O8       0   4   2   1   1   0   4   2   4   1  |  19
O9       0   2   0   0   0   0   1   0   2   0  |   5
O10      0   4   3   1   2   1   4   3   4   2  |  24

Table 3.8 Resulting matrix B of pairwise comparisons of objects

B       O1  O2  O3  O4  O5  O6  O7  O8  O9  O10 | b_i
O1      16  30  29  22  21  18  30  21  32  23  | 242
O2       2  16  16   3   2   1  14   3  11   6  |  74
O3       3  16  16   6   5   4  12   6  12   6  |  86
O4      10  29  26  16  15  12  31  16  28  19  | 202
O5      11  30  27  17  16  12  29  19  32  17  | 210
O6      14  31  28  20  20  16  30  22  31  22  | 234
O7       2  18  18   1   3   2  16   6  14   6  |  86
O8      11  29  26  16  13  10  26  16  31  16  | 194
O9       0  21  20   4   0   1  18   1  16   6  |  87
O10      9  26  26  13  15  10  26  16  26  16  | 183


According to the row sums b_i of the resulting matrix B (Table 3.8), the general group ranking of objects is

    R_P^{gr} ⇔ (O1 ≻ O6) ≻ (O5 ≻ O4) ≻ (O8 ≻ O10) ≻ (O9 ≽ O3, O7 ≻ O2)          (3.13)
    b_i:        242  234    210  202    194  183     87   86  86    74

Below the ranking R_P^{gr}, the row sum b_i for each object O_i is given. Distinct groups of objects that have close values of the row sums are enclosed in round brackets. The sign ≽ of non-strict superiority indicates a small difference between the objects O_i and O_j for b_i − b_j = 1.

The general collective ranking R_P^{agg} of objects can be obtained in another way, without calculating the resulting matrix B, by aggregating individual expert rankings. For example, the individual rankings of the objects O_1, …, O_10 by the experts 1 and 2 upon the criterion K_1, which are constructed according to the row sums b_{[1]i}^{<s>} of elements of the matrices B_{[1]}^{<1>} and B_{[1]}^{<2>} of individual pairwise comparisons (Table 3.6), have the following form:

    R_1^{<1>} ⇔ O4, O6, O10 ≻ O1, O2, O5, O7, O8 ≻ O9 ≻ O3,
    R_1^{<2>} ⇔ O1, O5, O8 ≻ O4, O6, O9 ≻ O2, O7, O10 ≻ O3.

The collective ranking R_1^{agg} of the objects O_1, …, O_10 for the criterion K_1 is obtained by combining the individual rankings R_1^{<1>} and R_1^{<2>} of the experts 1 and 2 or, what is the same, according to the row sums b_{[1]i} of elements of the matrix B_{[1]} (Table 3.7):

    R_1^{agg} ⇔ O4, O6 ≻ O1, O5, O8 ≻ O10 ≻ O2, O7, O9 ≻ O3.

Similarly, taking into account the row sums b_{[l]i} of elements of the matrices B_{[l]}, l = 2, …, 8 of group pairwise comparisons (Table 3.7), construct the collective rankings R_l^{agg} of the objects O_1, …, O_10 for the other criteria K_2–K_8. These rankings combine the individual rankings R_l^{<1>} and R_l^{<2>} of the experts 1 and 2 as follows:

    R_2^{agg} ⇔ O1, O6 ≻ O5 ≽ O8, O10 ≻ O4 ≻ O9 ≻ O2, O3, O7;
    R_3^{agg} ⇔ O1, O8 ≻ O6 ≻ O5, O10 ≻ O3 ≻ O4, O9 ≻ O2, O7;
    R_4^{agg} ⇔ O1 ≻ O4 ≽ O10 ≻ O5, O6 ≻ O7 ≻ O8 ≻ O9 ≻ O2, O3;
    R_5^{agg} ⇔ O3 ≻ O1, O4, O5, O6 ≻ O2, O8 ≻ O7, O9, O10;
    R_6^{agg} ⇔ O4 ≻ O8 ≽ O1, O5, O6, O10 ≻ O7 ≻ O9 ≻ O2 ≻ O3;
    R_7^{agg} ⇔ O6 ≻ O8 ≻ O5 ≻ O1 ≻ O10 ≽ O4 ≻ O2, O9 ≻ O7 ≻ O3;


    R_8^{agg} ⇔ O1 ≻ O4, O6 ≻ O5, O10 ≻ O8 ≽ O3 ≻ O7 ≻ O2, O9.

The sign ≽ of non-strict superiority indicates a small difference between the objects O_i and O_j for b_{[l]i} − b_{[l]j} = 1.

Using the Goodman–Markowitz procedure, find the general collective ranking R_P^{agg} of objects that combines the collective rankings R_l^{agg} for all criteria K_1, …, K_n:

    R_P^{agg} ⇔ (O1 ≽ O6) ≻ (O4 ≻ O5 ≻ O8) ≻ O10 ≻ (O3 ≻ O9 ≽ O7 ≻ O2)          (3.14)
    r_i:           21   22     30   31.5  33.5     41      62.5  64.5  65.5  68.5

Below the ranking R_P^{agg}, the rank r_i for each object O_i is given. This rank is determined by the places' sum rule (2.3), r_i = Σ_{l=1}^n r_l^{agg}(O_i), where r_l^{agg}(O_i) is the rank (place) of the object O_i in the ranking R_l^{agg}. Distinct groups of objects that have close ranks are enclosed in round brackets. The sign ≽ of non-strict superiority indicates a small difference between the objects O_i and O_j for r_i − r_j = 1. Note that the ordering of objects in accordance with voting procedures is less accurate due to the low "resolution ability" of the places' sum rule.

For clarity, we show the collective rankings R_P^{gr} and R_P^{agg} of the objects O_1, …, O_10, with the scales of the row sum b_i and the place sum r_i indicated alongside. It can be seen that the rankings R_P^{gr} and R_P^{agg}, obtained in different ways, almost coincide. Three distinct groups of close objects are distinguished: 'good', 'middle' and 'bad' objects. According to the aggregated assessments of all experts upon all criteria, the best object is O1, and the second object by preference is O6. The other objects are significantly worse than the objects O1 and O6. The worst object is O2.

    R_P^{gr}  (scale of row sums b_i, from 240 down to 60):  O1, O6 — O5, O4, O8, O10 — O9, O3, O7, O2
    R_P^{agg} (scale of place sums r_i, from 20 up to 70):   O1, O6 — O4, O5, O8, O10 — O3, O9, O7, O2
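The places' sum rule used above is easy to state in code. The following sketch (ours, with toy data) assigns tied objects the average of the places they occupy and sums the places over the per-criterion collective rankings.

```python
# Places' sum rule (2.3): r_i = sum over criteria of the place of O_i.
# A ranking is a list of indifference groups, best first.

def places(ranking):
    pos, place = {}, 1
    for group in ranking:
        avg = place + (len(group) - 1) / 2          # average place of a tied group
        for obj in group:
            pos[obj] = avg
        place += len(group)
    return pos

def places_sum(rankings):
    total = {}
    for r in rankings:
        for obj, p in places(r).items():
            total[obj] = total.get(obj, 0) + p
    return dict(sorted(total.items(), key=lambda kv: kv[1]))

# Toy example with three criteria and four objects
R1 = [["O1"], ["O2", "O3"], ["O4"]]
R2 = [["O2"], ["O1"], ["O3"], ["O4"]]
R3 = [["O1", "O2"], ["O4"], ["O3"]]
print(places_sum([R1, R2, R3]))   # {'O1': 4.5, 'O2': 5.0, 'O3': 9.5, 'O4': 11.0}
```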

In practical problems solved by the RAMPA method, when adding the matrices B_{[l]} = c^{<1>} B_{[l]}^{<1>} + … + c^{<t>} B_{[l]}^{<t>} of individual pairwise comparisons, non-transitive triads of object estimates are often "absorbed" by transitive triads present in different matrices B_{[l]}^{<s>}. Therefore, in fact, the matrices B_{[l]} of group pairwise comparisons of objects for criteria contained significantly fewer non-transitive triads of estimates than the original matrices B_{[l]}^{<s>}. The exclusion of non-transitivity in some experts' assessments improved the statistical characteristics of the result, but did not significantly affect the group ranking R_l^{gr} of objects generated by the matrices B_{[l]} of pairwise comparisons.


In the RAMPA method, the element values of the resulting matrix B of pairwise comparisons of objects do not depend on the summation order (by experts or upon criteria) of the elements of the matrices of individual and group pairwise comparisons. Therefore, the resulting group rankings R_P^{gr} or R_P^{agg} of objects will coincide. Thus, the RAMPA method for group ordering of objects satisfies the principle of invariance of individual preference aggregation.

3.5 Group Ordering of Objects by Proximity to Reference Points: Method ARAMIS

Group ranking of m multi-attribute objects O_1, …, O_m, which are evaluated by t decision makers/experts upon n criteria K_1, …, K_n with numerical and/or verbal scales X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n, expresses the collective preference/knowledge of the members of a decision making group, which represents the individual preferences/knowledge of the actors in an aggregated form.

The method for Aggregation and Ranking Alternatives close to Multi-attribute Ideal Situations (ARAMIS) is intended for collective ordering of objects presented in several versions by their proximity to reference points in a multi-attribute space [4–6, 8, 9, 27]. The method is based on aggregation of individual preferences of several experts, which are expressed by object assessments upon many criteria with numerical and/or verbal scales. Multi-attribute objects O_1, …, O_m are represented by multisets and considered to be points of a multiset metric space (A, d), A = {A_1, …, A_m}, where d is one of the Petrovsky metrics (3.3–3.6) that characterizes the objects' closeness in an attribute space. The method allows us to get a group ranking of multi-attribute objects without preconstruction of individual rankings and consists of the following steps.

1°. Each expert s, s = 1, …, t evaluates all objects O_1, …, O_m upon each criterion K_l that has a rating scale X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n of quantitative or qualitative estimates. The numerical scale of any criterion may be a point scale or otherwise. The verbal scale may be disordered or ordered. On each ordinal scale, the preference of gradations is given, for example, x_l^1 ≻ x_l^2 ≻ … ≻ x_l^{h_l}.

2°. When the criteria K_1, …, K_n have different importances (weights) for a super-decision maker and/or experts, determine an importance w_l of the criterion K_l, for example, using one of the procedures [7, 26]. Weights of criteria can be normalized as follows: Σ_{l=1}^n w_l = 1.

3°. When experts have different competencies and/or influences, introduce a competency indicator c^{<s>} of the expert s, for example, based on the results of a mutual evaluation of experts or the opinion of a super-decision maker [7]. Indicators of expert competencies can be normalized as follows: Σ_{s=1}^t c^{<s>} = 1.

4°. Present a version of the object O_i evaluated by the expert s as a multiset


    A_i^{<s>} = { k_{A_i^{<s>}}(x_1^1) ◦ x_1^1, …, k_{A_i^{<s>}}(x_1^{h_1}) ◦ x_1^{h_1}; …; k_{A_i^{<s>}}(x_n^1) ◦ x_n^1, …, k_{A_i^{<s>}}(x_n^{h_n}) ◦ x_n^{h_n} }

over the set X = {x^1, …, x^h} of attributes x^e in the form (3.1), or over the set X = X_1 ∪ … ∪ X_n = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}} of estimate grades x_l^{e_l} on the scales of the criteria K_1, …, K_n in the form (3.2). The multiset A_i^{<s>} characterizes the individual assessment of the object O_i given by the expert s upon the criteria K_1, …, K_n. The multiplicity k_{A_i^{<s>}}(x_l^{e_l}) = 1 if the expert s gave the estimate x_l^{e_l} ∈ X_l, e_l = 1, …, h_l upon the criterion K_l to the object O_i, and k_{A_i^{<s>}}(x_l^{e_l}) = 0 if the estimate x_l^{e_l} was not marked.

5°. Present the object O_i evaluated by all experts as a multiset

    A_i = Σ_{s=1}^t c^{<s>} A_i^{<s>} = { k_{A_i}(x_1^1) ◦ x_1^1, …, k_{A_i}(x_1^{h_1}) ◦ x_1^{h_1}; …; k_{A_i}(x_n^1) ◦ x_n^1, …, k_{A_i}(x_n^{h_n}) ◦ x_n^{h_n} }.

The multiset A_i characterizes an aggregated assessment of the object O_i and is a weighted sum of the multisets A_i^{<s>} of individual expert estimates of the object O_i. The multiplicity k_{A_i}(x_l^{e_l}) = Σ_{s=1}^t c^{<s>} k_{A_i^{<s>}}(x_l^{e_l}) is equal to the weighted sum of the numbers of experts who have the competence c^{<s>} and gave the estimate x_l^{e_l} ∈ X_l upon the criterion K_l to the object O_i.

6°. Specify two reference points (ideal situations): the most preferable object O^+, to which all t experts gave only the best estimates x_l^1 upon all criteria, and the least preferable object O^−, to which all t experts gave only the worst estimates x_l^{h_l} upon all criteria. The best object O^+ and the worst object O^− correspond to the multisets of attributes

    A^+ = { t ◦ x_1^1, …, 0 ◦ x_1^{h_1}; …; t ◦ x_l^1, …, 0 ◦ x_l^{h_l}; …; t ◦ x_n^1, …, 0 ◦ x_n^{h_n} },
    A^− = { 0 ◦ x_1^1, …, t ◦ x_1^{h_1}; …; 0 ◦ x_l^1, …, t ◦ x_l^{h_l}; …; 0 ◦ x_n^1, …, t ◦ x_n^{h_n} }.

7°. For each object O_i, calculate, using, for example, the metric d_{11}(O_i, O_j) (3.7), the distance to the best object O^+:

    d_+(O_i) = d_{11}(O_i, O^+) = Σ_{l=1}^n w_l Σ_{e_l=1}^{h_l} | k_{A_i}(x_l^{e_l}) − k_{A^+}(x_l^{e_l}) |,          (3.15)

and the distance to the worst object O^−:

    d_−(O_i) = d_{11}(O_i, O^−) = Σ_{l=1}^n w_l Σ_{e_l=1}^{h_l} | k_{A_i}(x_l^{e_l}) − k_{A^−}(x_l^{e_l}) |.          (3.16)

8°. For each object O_i, calculate the indicator l(O_i) = d_+(O_i) / [d_+(O_i) + d_−(O_i)] of proximity to the best object O^+. Obviously, the indicator l(O^+) = 0 for the object O^+, and the indicator l(O^−) = 1 for the object O^−.

9°. Construct a collective ranking R_{A+}^{gr} of objects by preference. The objects O_1, …, O_m are ordered from best to worst by increasing the indicator l(O_i) of proximity of the object O_i to the best object O^+. The most preferable object O* is determined by the minimum value of the proximity indicator l(O_i) and takes the first place in the collective ranking R_{A+}^{gr}.

In practice, the rating scales X_l = {x_l^1, …, x_l^{h_l}} of the criteria K_1, …, K_n may have unequal numbers of gradations h_l. In such situations, for comparability of object assessments upon different criteria and convenience of result interpretation, we recommend coordinating the criteria scales and adducing them to the same "length" h_l^~ = h_0^~ for all l = 1, …, n. To do this, we specify a uniform scale X_l^~ = {x_l^{1~}, …, x_l^{h_0~}}, common for all criteria, with the same estimate gradations ranked by preference, for example, as x_l^{1~} ≻ … ≻ x_l^{e_l~} ≻ … ≻ x_l^{h_0~}. Depending on the specifics of a task, this can be done in many different ways. For example, one can "stretch" a "shorter" scale by introducing fictitious intermediate gradations x_l^{e_l(0)} between the existing gradations and taking k_{A_i}(x_l^{e_l(0)}) = 0, or "compress" a "longer" scale by combining several neighboring gradations x_l^{e_l(1)}, x_l^{e_l(2)}, … into a single gradation x_l^{e_l~} and taking k_{A_i}(x_l^{e_l~}) = k_{A_i}(x_l^{e_l(1)}) + k_{A_i}(x_l^{e_l(2)}) + …. One can also apply both of these ways jointly for the coordination of rating scales.

The ARAMIS method provides a variety of options for collective ordering of multi-attribute objects. Together with the descending ranking R_{A+}^{gr} of objects, we can construct an ascending ranking R_{A−}^{gr} of objects, where objects are ordered from worst to best by increasing the indicator h(O_i) = d_−(O_i) / [d_+(O_i) + d_−(O_i)] of remoteness of the object O_i from the worst object O^−. Obviously, the indicator h(O^+) = 1 for the object O^+, and the indicator h(O^−) = 0 for the object O^−. The most preferable object O* is determined by the maximum value of the remoteness indicator h(O_i) and takes the last place in the collective ranking R_{A−}^{gr}. Since the equality l(O_i) + h(O_i) = 1 = const holds for all i = 1, …, m, the rankings R_{A+}^{gr} and R_{A−}^{gr} of objects coincide.

The objects O_1, …, O_m can be ordered not only by the values of the indicators l(O_i) or h(O_i), but also by increasing the distance d_+(O_i) to the best object O^+ or by decreasing the distance d_−(O_i) to the worst object O^−. However, in these cases, the descending R_{A+}^{agg} and ascending R_{A−}^{agg} collective rankings of objects do not necessarily coincide with each other. The collective ranking R_A^{agg} of multi-attribute objects can also be constructed by combining the individual rankings R_A^{<s>} of object versions O_i^{<s>}, s = 1, …, t obtained by the ARAMIS method, using any voting procedure. The choice of a metric to calculate the object proximity can have a significant impact on the final ordering of objects.
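A minimal sketch of steps 5°–9° is given below (our code, not the author's; it follows the convention of the example in Sect. 3.6, where the last grade of the common scale is the best one, and assumes equal weights unless given otherwise).

```python
# ARAMIS sketch: aggregated multiplicity vectors -> proximity l(O_i) -> ranking.
def aramis_ranking(objects, t, n_criteria, weights=None):
    """objects: dict name -> multiplicities over grades x^1..x^h (form (3.1));
    t: number of experts; n_criteria: criteria evaluated by each expert."""
    h = len(next(iter(objects.values())))
    w = weights or [1.0] * h
    best = [0] * (h - 1) + [t * n_criteria]      # all estimates are the best grade x^h
    worst = [t * n_criteria] + [0] * (h - 1)     # all estimates are the worst grade x^1

    def dist(a, b):
        return sum(wi * abs(ai - bi) for ai, bi, wi in zip(a, b, w))

    result = {}
    for name, a in objects.items():
        d_plus, d_minus = dist(a, best), dist(a, worst)
        result[name] = d_plus / (d_plus + d_minus)          # proximity l(O_i)
    return sorted(result.items(), key=lambda kv: kv[1])     # best (smallest l) first

# Three aggregated objects over a five-grade scale, two experts, eight criteria
objs = {"A1": [0, 0, 0, 7, 9], "A2": [4, 6, 4, 2, 0], "A6": [0, 0, 0, 9, 7]}
print(aramis_ranking(objs, t=2, n_criteria=8))
# [('A1', 0.304...), ('A6', 0.36), ('A2', 0.571...)], as in Table 3.9
```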


3.6 Demonstrative Example: Method ARAMIS

Using the ARAMIS method, let us rank ten objects (pupils) O_1, …, O_10, which are evaluated by two experts (per two semesters) upon eight qualitative criteria (the studied subjects) K_1, …, K_8. All criteria have the same five-point rating scale X = {x^1, x^2, x^3, x^4, x^5}, where x^1 is 1/very bad, x^2 is 2/bad, x^3 is 3/satisfactory, x^4 is 4/good, x^5 is 5/excellent. The estimate grades are ordered by preference as x^1 ≺ x^2 ≺ x^3 ≺ x^4 ≺ x^5. All experts are equally competent, c^{<s>} = 1, and the importance of the criteria is the same, w_l = 1, for all experts.

Versions of the multi-attribute objects O_1, …, O_10, represented by multisets of expert estimates of the form (3.1), are given in the data matrix G^{<>} (Table 3.4). The objects are defined by sums of the multisets A_i^{<1>}, A_i^{<2>}, i = 1, …, 10 of the attributes x^1, x^2, x^3, x^4, x^5 as follows:

    A_1 = A_1^{<1>} + A_1^{<2>} = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 4 ◦ x^4, 4 ◦ x^5} + {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 3 ◦ x^4, 5 ◦ x^5}
                                = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 7 ◦ x^4, 9 ◦ x^5},          (3.17)
    A_2 = A_2^{<1>} + A_2^{<2>} = {4 ◦ x^1, 6 ◦ x^2, 4 ◦ x^3, 2 ◦ x^4, 0 ◦ x^5},
    A_3 = A_3^{<1>} + A_3^{<2>} = {8 ◦ x^1, 2 ◦ x^2, 3 ◦ x^3, 2 ◦ x^4, 1 ◦ x^5},
    A_4 = A_4^{<1>} + A_4^{<2>} = {0 ◦ x^1, 1 ◦ x^2, 3 ◦ x^3, 7 ◦ x^4, 5 ◦ x^5},
    A_5 = A_5^{<1>} + A_5^{<2>} = {0 ◦ x^1, 0 ◦ x^2, 1 ◦ x^3, 11 ◦ x^4, 4 ◦ x^5},
    A_6 = A_6^{<1>} + A_6^{<2>} = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 9 ◦ x^4, 7 ◦ x^5},
    A_7 = A_7^{<1>} + A_7^{<2>} = {3 ◦ x^1, 5 ◦ x^2, 5 ◦ x^3, 3 ◦ x^4, 0 ◦ x^5},
    A_8 = A_8^{<1>} + A_8^{<2>} = {0 ◦ x^1, 1 ◦ x^2, 3 ◦ x^3, 7 ◦ x^4, 5 ◦ x^5},
    A_9 = A_9^{<1>} + A_9^{<2>} = {1 ◦ x^1, 7 ◦ x^2, 7 ◦ x^3, 1 ◦ x^4, 0 ◦ x^5},
    A_10 = A_10^{<1>} + A_10^{<2>} = {0 ◦ x^1, 2 ◦ x^2, 3 ◦ x^3, 6 ◦ x^4, 5 ◦ x^5}.

The best O^+ and the worst O^− objects are defined by the multisets

    A^+ = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 0 ◦ x^4, 16 ◦ x^5},
    A^− = {16 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 0 ◦ x^4, 0 ◦ x^5}.
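The distances and indicators reported in Table 3.9 can be reproduced with a few lines; the sketch below (ours) uses the aggregated multiplicity vectors listed above.

```python
# Recompute d+(Oi), d-(Oi) and l(Oi) = d+/(d+ + d-) for the aggregated multisets A1..A10.
A = {1: [0,0,0,7,9],  2: [4,6,4,2,0], 3: [8,2,3,2,1], 4: [0,1,3,7,5],  5: [0,0,1,11,4],
     6: [0,0,0,9,7],  7: [3,5,5,3,0], 8: [0,1,3,7,5], 9: [1,7,7,1,0], 10: [0,2,3,6,5]}
A_plus, A_minus = [0, 0, 0, 0, 16], [16, 0, 0, 0, 0]

dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
for i, a in A.items():
    dp, dm = dist(a, A_plus), dist(a, A_minus)
    print(f"O{i}: d+={dp}, d-={dm}, l={dp / (dp + dm):.3f}")
# O1: d+=14, d-=32, l=0.304 ... as in Table 3.9
```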


Table 3.9 Distances from the objects to the reference points, indicators of proximity and remoteness of the objects

            O1     O2     O3     O4     O5     O6     O7     O8     O9     O10
d+(Oi)      14     32     30     22     24     18     32     22     32     22
d−(Oi)      32     24     16     32     32     32     26     32     30     32
l(Oi)       0.304  0.571  0.652  0.407  0.429  0.360  0.552  0.407  0.516  0.407
h(Oi)       0.696  0.429  0.348  0.593  0.571  0.640  0.448  0.593  0.484  0.593

Table 3.9 shows the distance d_+(O_i) from the object O_i to the best object O^+, the distance d_−(O_i) from the object O_i to the worst object O^−, and the indicators of proximity l(O_i) and remoteness h(O_i) for the object O_i, calculated by the formulas (3.15, 3.16). The group ranking R_{A+}^{gr} of objects, obtained by the indicator l_i = l(O_i) of proximity of the object O_i to the best object O^+, is as follows:

    R_{A+}^{gr} ⇔ O1 ≻ O6 ≻ (O4, O8, O10 ≻ O5) ≻ O9 ≻ (O7 ≻ O2) ≻ O3          (3.18)
    l_i · 10^{-3}: 304   360    407  407  407   429    516     552   571    652

Below the ranking R_{A+}^{gr}, the proximity indicator l_i for each object O_i is given. Distinct groups of objects that have close proximity indicators are enclosed in round brackets.

Let us now construct the collective ranking R_A^{agg} of multi-attribute objects in another way, by aggregating the individual rankings R_A^{<1>} of the expert 1 and R_A^{<2>} of the expert 2 obtained for the objects O_1, …, O_10 by the ARAMIS method. The best O^+ and the worst O^− objects for each expert are defined by the following multisets of attributes:

    A^{+<s>} = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 0 ◦ x^4, 8 ◦ x^5},   A^{−<s>} = {8 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 0 ◦ x^4, 0 ◦ x^5}.

The individual rankings of objects by the proximity indicator l_i are:

    R_A^{<1>} ⇔ O10 ≻ O1, O6 ≻ O4 ≻ O8 ≻ O5 ≻ O9 ≻ O2, O7 ≻ O3
    l_i · 10^{-3}:  273    333      385    429   467    533     571     727

    R_A^{<2>} ⇔ O1 ≻ O5, O6, O8 ≻ O4 ≻ O10, O9 ≻ O7 ≻ O2 ≻ O3
    l_i · 10^{-3}: 273      385       429      500     533   571   583

Using the Goodman–Markowitz procedure, find the collective ranking R_A^{agg} of objects that combines the individual rankings R_A^{<1>} and R_A^{<2>}:

    R_A^{agg} ⇔ O1 ≻ O6 ≻ (O10 ≽ O8 ≻ O4, O5) ≻ O9 ≻ (O7 ≽ O2) ≻ O3          (3.19)
    r_i:          3.5   5.5    7.5    8     9      13.5    16.5  17.5    20


Below the ranking R_A^{agg}, the rank r_i for each object O_i is given. This rank is calculated by the places' sum rule (2.3), r_i = r^{<1>}(O_i) + r^{<2>}(O_i), where r^{<s>}(O_i) is the rank (place) of the object O_i in the corresponding individual ranking R_A^{<s>}. Distinct groups of objects that have close ranks are enclosed in round brackets. The ranks of the objects O10 and O8, and of O7 and O2, differ by no more than 1, which is indicated by the sign ≽ of non-strict superiority.

Comparison of the group rankings R_{A+}^{gr} (3.18) and R_A^{agg} (3.19) of objects, which characterize the collective preferences of the experts, shows that both of them almost completely coincide, with the exception of small differences in the middle parts, although these rankings were obtained in different ways. There are clearly marked gaps between the 'good' objects O1 and O6, the 'middle' objects O10, O8, O4, O5, and the 'bad' objects O9, O7, O2, O3. Therefore, group orderings of objects can also be considered as group ordinal classifications, where the classes of objects and the object placement in the classes are specified by the rankings.

For clarity, we show the collective rankings R_{A+}^{gr} and R_A^{agg} of the objects O_1, …, O_10, with the scales of the proximity indicator l_i and of the place sum r_i indicated alongside. Groups of close objects are clearly visible. According to the aggregated assessments of all experts upon all criteria, the best object is O1, which also took the first place in both group rankings. The second object by preference is O6. The other objects are worse than the objects O1 and O6. The worst object is O3. Note that the opinion of the expert 1 on the best object is different from the collective preference, while the opinion of the expert 2 coincides with the collective judgment. The opinions of both experts on the worst object are the same.

    R_{A+}^{gr} (scale of the proximity indicator l_i, about 0.300–0.650):  O1 — O6 — O8, O10, O4, O5 — O9, O7, O2, O3
    R_A^{agg}   (scale of the place sum r_i, 5–20):                         O1 — O6 — O10, O8, O4, O5 — O3, O9, O7, O2

Note also that the group rankings R_{A+}^{gr} (3.18) and R_A^{agg} (3.19) of multi-attribute objects according to the ARAMIS method practically do not differ from the rankings R_P^{gr} (3.13) and R_P^{agg} (3.14) of objects obtained by the RAMPA method, with the exception of the determination of the worst object. In the ARAMIS method, all multisets representing objects and reference points, as well as the distances (3.15, 3.16) between objects, are formed using only summation of individual estimates by experts and upon criteria. Therefore, the values of the indicators of proximity and remoteness of objects, and hence the calculation results, do not depend on the order of the estimates' summation. Thus, the ARAMIS method for group ordering of objects satisfies the principle of invariance of individual preference aggregation.


References

1. Petrovsky, A.B.: Mul'timnozhestva kak model' predstavleniya mnogopriznakovykh ob”yektov v prinyatii resheniy i raspoznavanii obrazov (Multisets as a model for representation of multi-attribute objects in decision making and pattern recognition). Iskusstvenniy Intellect (Artificial Intelligence) 2, 236–243 (2002). (in Russian)
2. Petrovsky, A.B.: Inconsistent preferences in verbal decision analysis. In: Papers from IFIP WG8.3 International Conference on Creativity and Innovation in Decision Making and Decision Support, vol. 2, pp. 773–789. Ludic Publishing Ltd, London (2006)
3. Petrovsky, A.B.: Multiple criteria decision making: discordant preferences and problem description. J. Syst. Sci. Syst. Eng. 16(1), 22–33 (2007)
4. Petrovsky, A.B.: Group verbal decision analysis. In: Encyclopedia of Decision Making and Decision Support Technologies, vol. 1, pp. 418–425. IGI Global, Hershey, New York (2008)
5. Petrovsky, A.B.: Clustering and sorting multi-attribute objects in multiset metric space. In: The Fourth International IEEE Conference on Intelligent Systems. Proceedings, vol. 2, pp. 11-44–11-48. IEEE, Sofia (2008)
6. Petrovsky, A.B.: Group sorting and ordering multiple criteria alternatives. In: Computational Intelligence in Decision and Control. Proceedings of the 8th International FLINS Conference, pp. 605–610. World Scientific Publisher, Singapore (2008)
7. Petrovsky, A.B.: Teoriya prinyatiya resheniy (Theory of decision making). Publishing Center “Academy”, Moscow (2009). (in Russian)
8. Petrovsky, A.B.: Group multiple criteria decision making: multiset approach. In: Recent Developments and New Directions in Soft Computing. Studies in Fuzziness and Soft Computing, vol. 317, pp. 19–33. Springer International Publishing, Switzerland (2014)
9. Petrovsky, A.B.: Gruppovoy verbal'niy analiz resheniy (Group verbal decision analysis). Nauka, Moscow (2019). (in Russian)
10. Petrovsky, A.B.: Prostranstva mnozhestv i mul'timnozhestv (Spaces of sets and multisets). Editorial URSS, Moscow (2003). (in Russian)
11. Petrovsky, A.B.: Indicators of similarity and dissimilarity of multi-attribute objects in the metric spaces of sets and multisets. Sci. Tech. Inf. Process. 45(5), 331–345 (2018)
12. Petrovsky, A.B.: Proximity of multi-attribute objects in multiset metric spaces. In: Intelligent Information Technologies for Industry. Proceedings of the 3rd International Conference. Advances in Intelligent Systems and Computing, vol. 874, pp. 59–69. Springer Nature Switzerland AG (2019)
13. Deza, M.M., Deza, E.: Encyclopedia of Distances. Springer-Verlag, Berlin (2009)
14. Deza, M.M., Laurent, M.: Geometry of Cuts and Metrics. Springer-Verlag, Berlin (1997)
15. Marczewski, E., Steinhaus, H.: On a certain distance of sets and the corresponding distance of functions. Colloq. Math. 6, 319–327 (1958)
16. O'Searcoid, M.: Metric Spaces. Springer-Verlag, London (2009)
17. Petrovsky, A.B.: Cluster analysis in multiset spaces. In: Information Systems Technology and its Applications. Lecture Notes in Informatics, vol. 30, pp. 109–119. Gesellschaft für Informatik, Bonn (2003)
18. Petrovsky, A.B.: Novye klassy metricheskikh prostranstv izmerimykh mnozhestv i mul'timnozhestv v klasternom analize (New classes of metric spaces of measurable sets and multisets in cluster analysis). In: Metody podderzhki prinyatiya resheniy. Trudy Instituta sistemnogo analiza RAN (Methods of Decision Support. Proceedings of the Institute for System Analysis of the Russian Academy of Sciences), vol. 12, pp. 54–67. Editorial URSS, Moscow (2005). (in Russian)
19. Semkin, B.I., Dvoichenkov, V.I.: Ob ekvivalentnosti mer skhodstva i razlichiya (On the equivalence of similarity and difference measures). In: Issledovanie sistem. Tom 1. Analiz slozhnykh sistem (Investigation of Systems. Volume 1. Analysis of Complex Systems), pp. 95–104. Far East Scientific Center of the USSR Academy of Sciences, Vladivostok (1973). (in Russian)
20. Sneath, P.H.A., Sokal, R.R.: Numerical Taxonomy: The Principles and Practice of Numerical Classification. Freeman, San Francisco (1973)
21. Petrovsky, A.B.: Combinatorics of multisets. Dokl. Math. 61(1), 151–154 (2000)
22. Petrovsky, A.B.: Prostranstva izmerimykh mnozhestv i mul'timnozhestv (Spaces of measurable sets and multisets). Poly Print Service, Moscow (2016). (in Russian)
23. Petrovsky, A.B.: Teoriya izmerimykh mnozhestv i mul'timnozhestv (Theory of measurable sets and multisets). Nauka, Moscow (2018). (in Russian)
24. Hartigan, J.A.: Clustering Algorithms. Wiley, New York (1975)
25. Petrovsky, A.B., Raushenbach, G.V., Pogodaev, G.V.: Mnogokriterial'nie ekspertnie otsenki pri formirovanii nauchnoy politiki (Multicriteria expert assessments in the formation of scientific policy). In: Problemy i metody prinyatiya resheniy v organizatsionnykh sistemakh upravleniya (Problems and Methods of Decision Making in Organizational Management Systems). Abstracts of the All-Union Conference, pp. 84–85. VNIISI (All-Union Scientific Institute for System Research), Moscow, Zvenigorod (1981). (in Russian)
26. Podinovskiy, V.V.: Idei i metody vazhnosti kriteriev v mnogokriterial'nykh zadachakh prinyatiya resheniy (Ideas and methods of the criteria importance in multicriteria decision making problems). Nauka, Moscow (2019). (in Russian)
27. Petrovsky, A.B.: Multiple criteria ranking enterprises based on inconsistent estimations. In: Information Systems Technology and its Applications. Lecture Notes in Informatics, vol. 84, pp. 143–151. Gesellschaft für Informatik, Bonn (2006)

Chapter 4

Group Classification of Multi-attribute Objects

This chapter describes original methods of group verbal analysis for classifying multi-attribute objects. These are techniques for group multicriteria classification of objects without teachers, by feature proximity, and with teachers, by aggregated decision rules. We demonstrate with examples how to solve model tasks of collective multicriteria choice using the developed methods.

4.1 Group, without Teachers, Classification of Objects by Feature Proximity: Methods CLAVA-HI and CLAVA-NI

In the task of collective classification without teachers, it is required to allocate m multi-attribute objects O1, …, Om into several classes (clusters, categories) D1, …, Dg. The number g of classes can be fixed beforehand, or it can be left open and determined while solving the problem. Objects are evaluated by t decision makers/experts upon n criteria K1, …, Kn with numerical and/or verbal scales X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n. Thus, there are t different versions of each object Oi, i = 1, …, m. A group classification of objects expresses the collective preference/knowledge of the members of a decision making group (DMG), represents the individual preferences/knowledge of the actors in an aggregated form, and is ordinal or nominal depending on whether the estimation grades on the criteria scales are ordered or not.
Tasks of classification without a teacher are traditionally studied in cluster analysis [1–5]. Objects are considered to be points of an attribute space, where classes are formed by sequential grouping of the closest objects. The objects' closeness in an attribute space is characterized by either a difference or a similarity of their features. When forming classes, the following approaches are used: minimize the difference (maximize the similarity) between objects within a class; maximize the difference (minimize the similarity) between classes of objects. The principal aspects of clustering are:


• formalization of the concept of differences/similarities between objects in an attribute space;
• choice of the way for grouping objects;
• reasonable interpretation of the obtained classes of objects.

Typically, multi-attribute objects are represented by vectors or by crisp or fuzzy sets of numerical attributes, and object classes (clusters) are formed by adding the corresponding vectors or combining the sets. Several vectors constituting an object group are replaced with a single vector, which is, for example, the closest to all vectors within the group, the center of the group, or a vector with averaged or weighted values of the components of all vectors in the group. Note, however, that features of the objects forming a group may be lost after such a replacement. At the same time, for objects represented by tuples with verbal attributes, the operations of averaging, weighting, mixing data, and similar transformations are mathematically incorrect and unacceptable. Thus, several tuples constituting an object group cannot be replaced with one single tuple.
Let us discuss the main ideas of collective cluster analysis for objects, which are present in several different versions, described by many quantitative and/or qualitative characteristics, and represented by multisets of numerical and/or verbal attributes in the form (3.1) or (3.2) [6–13]. The variety of operations on multisets allows us to apply different ways of aggregating such objects into clusters (groups): the addition, union, or intersection of the multisets describing objects and their versions, or a linear combination of these operations.
Consider the multi-attribute objects O1, …, Om to be points of the multiset metric space (A, d), A = {A1, …, Am}, where d is the Petrovsky metric (3.3)–(3.6). For simplicity, assume below, although it is not necessary, that all distances in an attribute space are given by one and the same metric d. Namely, a distance d(Oi, Oj) between the objects Oi and Oj, i, j = 1, …, m within any cluster Df, f = 1, …, g; a distance d(Oi, Df) between the object Oi and the cluster Df; a distance d(Du, Dv) between the clusters Du and Dv.
The method for hierarchical cluster analysis of verbal alternatives (CLAVA-HI) is intended for a collective, without teachers, classification of objects, present in several versions, by proximity of their features, when the number of formed clusters is not fixed beforehand [11, 13–17]. The method is based on aggregation of individual preferences of several experts, which are expressed by object assessments upon many criteria with numerical and/or verbal scales, and objects are represented by multisets. The method allows building a group ordinal or nominal classification of multi-attribute objects without preconstruction of individual classifications and includes the following steps.
1°. Each expert s, s = 1, …, t evaluates all objects O1, …, Om upon each criterion Kl that has a rating scale X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n of quantitative or qualitative estimates. The numerical scale of a criterion may be a point scale or another kind of scale. The verbal scale may be disordered or ordered. On each ordinal scale, the preference of gradations is given, for example, x_l^1 ≻ x_l^2 ≻ … ≻ x_l^{h_l}.


2°. When the criteria K1, …, Kn have different importances (weights) for a super-decision maker and/or experts, determine an importance w_l of the criterion Kl, for example, using one of the procedures [18, 19]. Weights of criteria can be normalized as follows: Σ_{l=1}^{n} w_l = 1.
3°. When experts have different competencies and/or influences, introduce a competency indicator c^s of the expert s, for example, based on results of a mutual evaluation of experts or an opinion of a super-decision maker [18]. Indicators of expert competencies can be normalized as follows: Σ_{s=1}^{t} c^s = 1.
4°. Present a version of the object Oi evaluated by the expert s as a multiset

A_i^s = {k_{A_i^s}(x_1^1)◦x_1^1, …, k_{A_i^s}(x_1^{h_1})◦x_1^{h_1}; …; k_{A_i^s}(x_n^1)◦x_n^1, …, k_{A_i^s}(x_n^{h_n})◦x_n^{h_n}}

over the set X = {x^1, …, x^h} of attributes x^e in the form (3.1), or over the set X = X_1 ∪ … ∪ X_n = {x_1^1, …, x_1^{h_1}; …; x_n^1, …, x_n^{h_n}} of estimate grades x_l^{e_l} on the scales of the criteria K1, …, Kn in the form (3.2). The multiset A_i^s characterizes an individual assessment of the object Oi given by the expert s upon the criteria K1, …, Kn. The multiplicity k_{A_i^s}(x_l^{e_l}) = 1 if the expert s gave the estimate x_l^{e_l} ∈ X_l, e_l = 1, …, h_l upon the criterion Kl to the object Oi, and k_{A_i^s}(x_l^{e_l}) = 0 if this estimate was not marked.
5°. Present the object Oi evaluated by all experts as a multiset

A_i = Σ_{s=1}^{t} c^s · A_i^s = {k_{A_i}(x_1^1)◦x_1^1, …, k_{A_i}(x_1^{h_1})◦x_1^{h_1}; …; k_{A_i}(x_n^1)◦x_n^1, …, k_{A_i}(x_n^{h_n})◦x_n^{h_n}}.

The multiset A_i characterizes an aggregated assessment of the object Oi and is a weighted sum of the multisets A_i^s of individual expert estimates of the object Oi. The multiplicity k_{A_i}(x_l^{e_l}) = Σ_{s=1}^{t} c^s · k_{A_i^s}(x_l^{e_l}) is equal to the weighted number of experts who have the competence c^s and gave the estimate x_l^{e_l} ∈ X_l upon the criterion Kl to the object Oi.
6°. Set g = m, where g is the number of clusters and m is the number of objects. Then each cluster Di consists of the single object Oi for all i = 1, …, g, and the multiset

C_i = {k_{C_i}(x_1^1)◦x_1^1, …, k_{C_i}(x_1^{h_1})◦x_1^{h_1}; …; k_{C_i}(x_n^1)◦x_n^1, …, k_{C_i}(x_n^{h_n})◦x_n^{h_n}},    (4.1)

presenting the cluster Di, coincides with the multiset A_i.
7°. Calculate the distances d(Di, Dj) between the clusters Di and Dj for all 1 ≤ i, j ≤ g, i ≠ j, using one of the metrics d in a multiset space.
8°. Find the closest clusters Du and Dv that satisfy the condition


d(Du, Dv) = min_{i,j} d(Di, Dj),    (4.2)

and generate a new cluster Dr, using one of the ways for grouping multisets that defines the cluster as the sum C_r = C_u + C_v, the union C_r = C_u ∪ C_v, the intersection C_r = C_u ∩ C_v, or a linear combination C_r = Σ_f b_f·C_f, C_r = ∪_f b_f·C_f, C_r = ∩_f b_f·C_f, where b_f is an integer. The multiplicity k_{C_r}(x_l^{e_l}) of the multiset C_r is calculated depending on the way the cluster Dr is generated.
9°. Diminish the number of clusters by one: g = g − 1. If g = 1, then output the result and stop. If g > 1, then, for the clusters Di and Dr, 1 ≤ i, r ≤ g, i ≠ r, go back to step 7°.
We can depict the sequential aggregation of objects into classes as a hierarchical tree or dendrogram. New clusters Du and Dv are formed while moving from the tree root along the tree branches, at each step passing to one of the nearest clusters Dr. The process of hierarchical clustering of objects ends when all objects are combined into several classes or a single class, depending on the problem considered. The procedure can also be interrupted at some step when a certain rule is fulfilled, for instance, when the difference between objects exceeds some predetermined threshold level.
Let us make some comments. At step 8°, there may be many pairs of the closest clusters Du, Dv that are equivalent according to condition (4.2) of the minimum distance d(Di, Dj) in an attribute space of multisets. Thus, several different branch points of the procedure appear, in which classes of objects are formed, and, therefore, it is possible to construct various resulting dendrograms. The use of the distance d31 (3.7) in condition (4.2) leads to a smaller number of branch points compared to the distances d11 and d21, whose use gives similar results. Besides, if new clusters are formed by addition of multisets, then the number of clusters is the smallest; if by intersection of multisets, then the number of clusters is the largest. To reduce the number of possible variants for merging clusters (branch points of the algorithm), one can introduce several criteria of objects' closeness. For example, one can take into account not only condition (4.2) of the minimum distance, but also the maximum compactness of a cluster. In this case, the modified method of hierarchical clustering of objects looks as follows.

i, j

where Ir fr is a number of objects within the cluster Dr fr . Go to step 90 .

(4.3)

4.1 Group, without Teachers, Classification of Objects by Feature Proximity …

83

Numerical experiments have shown that multicriteria hierarchical clustering objects, obtained by using additional conditions conjointly with condition (4.2), can significantly improve the results for all ways of cluster formation. In clustering methods, it is customary to form a new cluster at each step from only one pair of close objects/clusters. However, there are no fundamental prohibitions on simultaneous formation of new clusters Drt , t = 1, …, gr for all equivalent pairs of objects/clusters satisfying condition (4.2). Therefore, at step 80 , we can construct not one, but all new equivalued clusters at once and go to step 90 , where we can diminish a number of clusters not by one unit, but by several units. Such a technique significantly speeds up a construction of a tree. The method for non-hierarchical cluster analysis of verbal alternatives (CLAVANI) is intended for a collective, without teachers, classification of objects, present in several versions, by proximity of their features, when a number of formed clusters is fixed and determined beforehand [11, 13–17]. The method is based on aggregation of individual preferences of several experts, which are expressed by object assessments upon many criteria with numerical and/or verbal scales and objects are represented by points of the Petrovsky multiset metric space. The method allows building a group ordinal or nominal classification of multi-attribute objects without preconstruction of individual classifications and consists of the following steps. 10 . Each expert s, s = 1,…, t evaluates all objects O1 ,…, Om upon each criterion K l that has a rating scale X l = {x l 1 ,…, xlhl }, l = 1,…, n of quantitative or qualitative estimates. The numerical scale of any criterion may be point or otherwise. The verbal scale may be disordered or ordered. On each ordinal scale, preference of gradations is given, for example, xl1 . xl2 . . . . . xlhl . 20 . When criteria K 1 ,…, K n have different importances (weights) for a superdecision maker and/or experts, determine an importance wl of the criterion K l , for example, using ∑ n one of the procedures [18, 19]. Weights of criteria can be normalized wl = 1. as follows: l=1 30 . When experts have different competencies and/or influences, introduce a competency indicator c of the expert s, for example, based on results of a mutual evaluation of experts or an opinion of a super-decision maker [18]. Indicators of ∑ expert competencies can be normalized as follows: ts=1 c = 1. 40 . Present a version of the object Oi evaluated by the expert s as a multiset ( ) { ( 1) x1h 1 ◦ x1h 1 ; . . . ; x1 ◦ x11 , . . . , k Ai = k Ai Ai ( ) ( ) } k xn1 ◦ xn1 , . . . , k xnh n ◦ xnh n Ai Ai over the set X = {x 1 ,…,{ x h } of attributes x e into the}form (3.1) or over the set X = X 1 ∪ . . . ∪ X n = x11 , . . . , x1h 1 , . . . ; xn1 , . . . , xnh n of estimate grades xlel on criteria scales K 1 ,…, K n into the form (3.2). The multiset Ai characterizes an individual assessment of the object Oi given by the expert s upon criteria K 1 ,…, K n . The multiplicity k Ai (xlel ) = 1 if the expert s gave the estimate xlel ∈ X l , el = 1,…,

84

4 Group Classification of Multi-attribute Objects

hl upon the criterion K l to the object Oi , and k Ai (xlel ) = 0 if the estimate xlel was not marked. 50 . Present the object Oi evaluated by all experts as a multiset Ai =

t ∑

( ) { ( ) c Ai = k Ai x11 ◦ x11 , . . . , k Ai x1h 1 ◦ x1h 1 ; . . . ;

s=1

( ) ( ) } k Ai xn1 ◦ xn1 , . . . , k Ai xnh n ◦ xnh n .

The multiset Ai characterizes an aggregated assessment of the object Oi and is a

of individual weighted sum of( multisets ( el ) expert estimates of the object Oi . The ∑ tAi el ) xl is equal to a weighted sum of numbers multiplicity k Ai xl = s=1 c k Ai of experts, who has the competence c and gave the estimate xlel ∈ X l upon the criterion K l to the object Oi . 60 . Allocate objects O1 ,..., Om into g clusters D1 ,..., Dg . 70 . Reallocate objects O1 ,…, Om into the clusters D1 ,…, Dg in accordance with a certain rule. For example, place the object Oi into the nearest cluster Dh specified by the condition d(Oi , Dh ) = minf d(Oi , Df ), where objects Oi , i = 1,…, m and clusters Df , f = 1,…, g are represented by multisets Ai , C f of the form (3.1) or (3.2). Or place the object Oi into the cluster Dr with the nearest center Or ° that is determined by the condition d(Oi , Or °) = minf d(Oi , Of °). The center Of ° of the cluster Df can be found as a solution to the problem ∑ ( ) ) ( J O ◦f , D f = min d Oi , O p . p

(4.4)

i

The functional J(Of °, Df ) characterizes a “cumulation” of the objects’ allocation in an attribute space and is close to the criterion J(Dr *) of cluster compactness (4.3). Remark that the cluster center Of ° may coincide with one of the real objects Oi or be a “phantom” object that is absent in a collection of real objects, but is constructed from attributes x e or xlel as a multiset Af ° describing the center Of °. 80 . If all objects O1 ,..., Om do not change their initial membership after repartition over the clusters D1 ,..., Dg , then output the result and stop. Otherwise, go back to step 70 . The result of object classification is evaluated by a quality of partition. The best partition can be found, in particular, as a solution to the following optimization problem: ∑ ( ) J O ◦f , D f → min,

(4.5)

f

where J(Of °, Df ) is defined, for example, by formula (4.4). In the general case, the problem of evaluating the partition quality is not uniquely solved, since functional (4.5) can have many local extrema. The final result will also depend on the initial (close or far from optimal) partition of objects into classes.

4.2 Demonstrative Example: Method CLAVA-HI

85

Without particular difficulties, one can apply the described clustering methods CLAVA-HI and CLAVA-NI for group nominal or ordinal classification of objects, represented by many quantitative and/or qualitative attributes, to cases where objects exist in several versions (copies, exemplars). Classes can also be formed not by the difference, but by the similarity of objects specified with one of formulas (3.8)–(3.12). Then, instead of the condition min d(Oi , Oj ) that is the minimal difference between objects, one must use the condition max s(Oi , Oj ) that is the maximal similarity of objects. In practical problems, the following technique for structuring a collection of multi-attribute objects can be useful. At first, several possible partitions of objects are formed, applying hierarchical clustering. Then the most acceptable partition of objects is found by specifying subclasses of objects within partitions, using non-hierarchical clustering or ordering.

4.2 Demonstrative Example: Method CLAVA-HI Using the CLAVA-HI method, let us classify, without teachers, ten objects (pupils) O1 ,…, O10 , which are evaluated by two experts (per two semesters) upon eight qualitative criteria (the studied subjects) K 1 ,…, K 8 . All criteria have the same fivepoint rating scale X = {x 1 , x 2 , x 3 , x 4 , x 5 }, where x 1 is 1/very bad, x 2 is 2/bad, x 3 is 3/satisfactory, x 4 is 4/good, x 5 is 5/excellent. Estimate grades are ordered by preference as x 1 ≺ x 2 ≺ x 3 ≺ x 4 ≺ x 5 . All experts are equally competent c = 1, and an importance of criteria is the same wl = 1 for all experts. Represent multi-attribute objects Oi , which exist in two versions Oi , Oi , i = 1,…, 10, by multisets Ai = Ai + Ai of expert estimates (3.17). Multiplicities of multisets Ai , Ai describing the versions Oi , Oi are given in the data matrix G< > (Table 3.4). As a measure of difference d(Oi , Oj ) between objects/clusters, we take one of the metrics (3.17), namely: 5 | ( e) ( ) ∑ ( )| |k Ai x − k A j x e |, d11 Oi , O j =

(4.6)

e=1

where all experts are equally competent and all criteria are equally important. Calculate distances d(Oi , Oj ) between all pairs of objects Oi , Oj , i, j = 1,…, 10. The closest objects are O4 and O8 located in a multiset space at the distance d(O4 , O8 ) = |0 − 0| + |0 − 0| + |0 − 0| + |0 − 0| + |0 − 0| = 0. Combine the objects O4 and O8 into a cluster D01 that corresponds to a multiset C 01 formed by addition of multisets:

86

4 Group Classification of Multi-attribute Objects

C 01 = A4 + A8 = {0 ◦ x1 , }2 ◦ x{2 , 6 ◦ x3 , 14 ◦ x4 , 10 ◦ x5 } { } = 0 ◦ x 1, 1 ◦ x 2, 3 ◦ x 3, 7 ◦ x 4, 5 ◦ x 5 + 0 ◦ x 1, 1 ◦ x 2, 3 ◦ x 3, 7 ◦ x 4, 5 ◦ x 5 . The objects O4 , O8 are removed from further consideration. The number of objects/clusters is diminished by 1. Calculate distances between all pairs of the remaining objects and pairs of object/cluster D01 . At this step, the closest objects will be two pairs O1 , O6 and O2 , O7 located in a multiset space at the same distances: d(O1 , O6 ) = |0 − 0| + |0 − 0| + |0 − 0| + |7 − 9| + |9 − 7| = 4, d(O2 , O7 ) = |4 − 3| + |6 − 5| + |4 − 5| + |2 − 3| + |0 − 0| = 4. The objects O1 , O6 form a cluster D02 represented by a multiset { } C 02 = A1 + A6 = 0 ◦ x 1 , 0 ◦ x 2 , 0 ◦ x 3 , 16 ◦ x 4 , 16 ◦ x 5 = } { = 0 ◦ x 1, 0 ◦ x 2, 0 ◦ x 3, 7 ◦ x 4, 9 ◦ x 5 } { + 0 ◦ x 1, 0 ◦ x 2, 0 ◦ x 3, 9 ◦ x 4, 7 ◦ x 5 , and the objects O2 , O7 form a cluster D03 represented by a multiset } { C 03 = A2 + A7 = 7 ◦ x 1 , 11 ◦ x 2 , 9 ◦ x 3 , 5 ◦ x 4 , 0 ◦ x 5 = } { = 4 ◦ x 1, 6 ◦ x 2, 4 ◦ x 3, 2 ◦ x 4, 0 ◦ x 5 } { + 3 ◦ x 1, 5 ◦ x 2, 5 ◦ x 3, 3 ◦ x 4, 0 ◦ x 5 . The objects O1 , O6 and O2 , O7 are removed from further consideration. The number of objects/clusters is diminished by 2. Calculate distances between all pairs of the remaining objects and the formed clusters D01 , D02 , D03 . At this step, the closest objects are O5 and O10 located in a multiset space at the distance d(O5 , O10 ) = |0 − 0| + |0 − 2| + |1 − 3| + |11 − 6| + |4 − 5| = 10. Combine the objects O5 , O10 into a cluster D04 described by a multiset } { C 04 = A5 + A10 = 0 ◦ x 1 , 2 ◦ x 2 , 4 ◦ x 3 , 17 ◦ x 4 , 9 ◦ x 5 = } { = 0 ◦ x 1 , 0 ◦ x 2 , 1 ◦ x 3 , 11 ◦ x 4 , 4 ◦ x 5 } { + 0 ◦ x 1, 2 ◦ x 2, 3 ◦ x 3, 6 ◦ x 4, 5 ◦ x 5 . The objects O5 , O10 are removed from further consideration. The number of objects/clusters is diminished by 1.

4.2 Demonstrative Example: Method CLAVA-HI

87

Calculate distances between all pairs of the remaining objects and the formed clusters. At this step, the closest clusters are D01 and D04 located in a multiset space at the distance d(D01 , D04 ) = |0 − 0| + |2 − 2| + |6 − 4| + |14 − 17| + |10 − 9| = 6. Combine the clusters D01 , D04 into a cluster D05 described by a multiset } { C05 =C01 + C04 = 0 ◦ x 1 , 4 ◦ x 2 , 10 ◦ x 3 , 31 ◦ x 4 , 19 ◦ x 5 = } { = 0 ◦ x 1 , 2 ◦ x 2 , 6 ◦ x 3 , 14 ◦ x 4 , 10 ◦ x 5 } { + 0 ◦ x 1 , 2 ◦ x 2 , 4 ◦ x 3 , 17 ◦ x 4 , 9 ◦ x 5 . The clusters D01 , D04 are removed from further consideration. The number of objects/clusters is diminished by 1. Calculating sequentially step by step distances between all pairs of objects/clusters and choosing the closest pairs at each step, we obtain clusters D06 , D07 , D08 represented by multisets: d(O9 , D03 ) = 16, C 06 = A9 + C 03 } { = 8 ◦ x 1 , 18 ◦ x 2 , 16 ◦ x 3 , 6 ◦ x 4 , 0 ◦ x 5 = } { = 1 ◦ x 1, 7 ◦ x 2, 7 ◦ x 3, 1 ◦ x 4, 0 ◦ x 5 } { + 7 ◦ x 1 , 11 ◦ x 2 , 9 ◦ x 3 , 5 ◦ x 4 , 0 ◦ x 5 ; d(D02 , D05 ) = 32, C 07 = C 02 + C 05 } { = 0 ◦ x 1 , 4 ◦ x 2 , 10 ◦ x 3 , 47 ◦ x 4 , 35 ◦ x 5 = } { = 0 ◦ x 1 , 0 ◦ x 2 , 0 ◦ x 3 , 16 ◦ x 4 , 16 ◦ x 5 } { + 0 ◦ x 1 , 4 ◦ x 2 , 10 ◦ x 3 , 31 ◦ x 4 , 19 ◦ x 5 ; d(O3 , D06 ) = 34, C 08 = A3 + C 06 } { = 16 ◦ x 1 , 20 ◦ x 2 , 19 ◦ x 3 , 8 ◦ x 4 , 1 ◦ x 5 = } { = 8 ◦ x 1, 2 ◦ x 2, 3 ◦ x 3, 2 ◦ x 4, 1 ◦ x 5 } { + 8 ◦ x 1 , 18 ◦ x 2 , 16 ◦ x 3 , 6 ◦ x 4 , 0 ◦ x 5 . A procedure terminates when two clusters D07 and D08 remain. The cluster D07 includes the objects O4 , O8 , O1 , O6 , O5 , O10 , which have generally the ‘high’ marks x 4 , x 5 . The cluster D08 contains the objects O2 , O7 , O9 , O3 , which have generally the ‘low’ marks x 1 , x 2 , x 3 . A procedure can also be stopped earlier if we restrict to some small distance, for example, equal to 16. Then there are three clusters D02 , D05 , D06

88

4 Group Classification of Multi-attribute Objects

and one object O3 . The clusters D02 and D05 include the same objects as the cluster D07 . The cluster D06 contains the objects O2 , O7 , O9 . Let us now present individual classifications of objects O1 ,…, O10 of the experts 1 and 2, which are built by the same method CLAVA-HI for hierarchical clustering. Calculate sequentially step by step distances between all pairs of objects/clusters, evaluated by the expert 1, and choose the closest pairs at each step. Then we obtain clusters D11 , D12 , D13 , D14 , D15 , D16 , D17 , D18 represented by the following multisets: ) ( d O1 , O6 = 0,

{ } + A = 0 ◦ x 1, 0 ◦ x 2, 0 ◦ x 3, 8 ◦ x 4, 8 ◦ x 5 ; C 11 = A 1 6

( ) d O4 , O8 = 2,

} { = 0 ◦ x 1, 2 ◦ x 2, 3 ◦ x 3, 6 ◦ x 4, 5 ◦ x 5 ; C 12 = A4 + A 8

) ( d O7 , O9 = 4,

} { + A = 3 ◦ x 1, 5 ◦ x 2, 7 ◦ x 3, 1 ◦ x 4, 0 ◦ x 5 ; C 13 = A 7 9

) ( d O2 , O3 = 8,

{ } + A = 7 ◦ x 1, 4 ◦ x 2, 2 ◦ x 3, 3 ◦ x 4, 0 ◦ x 5 ; C 14 = A 2 3 ) ( d O5 , D11 = 8,

} { + C 11 = 0 ◦ x 1 , 0 ◦ x 2 , 0 ◦ x 3 , 15 ◦ x 4 , 9 ◦ x 5 ; C 15 = A 5

) ( d O10 , D12 = 8,

} { 1 2 3 4 5 C 16 = A 10 + C 12 = 0 ◦ x , 2 ◦ x , 4 ◦ x , 8 ◦ x , 10 ◦ x ; d(D13 , D14 ) = 12,

} { C 17 = C 13 + C 14 = 10 ◦ x 1 , 9 ◦ x 2 , 9 ◦ x 3 , 4 ◦ x 4 , 0 ◦ x 5 ;

d(D15 , D16 ) = 14,

} { C 18 = C 15 + C 16 = 0 ◦ x 1 , 2 ◦ x 2 , 4 ◦ x 3 , 23 ◦ x 4 , 19 ◦ x 5 .

A procedure terminates when two clusters D17 and D18 remain. The cluster D18 includes the objects O1 , O6 , O5 , O4 , O8 , O10 , which have generally the ‘high’ marks x 4 , x 5 . The cluster D17 contains the objects O7 , O9 , O2 , O3 , which have generally the ‘low’ marks x 1 , x 2 , x 3 . Calculating in a similar way distances between all pairs of objects/clusters, evaluated by the expert 2, and choosing the closest pairs at each step, we obtain clusters

4.2 Demonstrative Example: Method CLAVA-HI

89

D21 , D22 , D23 , D24 , D25 , D26 , D27 , D28 represented by multisets: ) ( d O5 , O8 = 0,

} { + A = 0 ◦ x 1, 0 ◦ x 2, 2 ◦ x 3, 8 ◦ x 4, 6 ◦ x 5 ; C 21 = A 5 8

) ( d O1 , O6 = 4,

{ } + A = 0 ◦ x 1, 0 ◦ x 2, 0 ◦ x 3, 8 ◦ x 4, 8 ◦ x 5 ; C 22 = A 1 6

) (

= 4, d O4 , O10

} { + A = 0 ◦ x 1, 2 ◦ x 2, 4 ◦ x 3, 8 ◦ x 4, 2 ◦ x 5 ; C 23 = A 4 10

) ( d O7 , O9 = 4,

} { + A = 1 ◦ x 1, 7 ◦ x 2, 5 ◦ x 3, 3 ◦ x 4, 0 ◦ x 5 ; C 24 = A 7 9

( ) d O2 , O3 = 4,

} { + A = 5 ◦ x 1, 4 ◦ x 2, 5 ◦ x 3, 1 ◦ x 4, 1 ◦ x 5 ; C 25 = A 2 3 d(D21 , D22 ) = 4,

} { C 26 = C 21 + C 22 = 0 ◦ x 1 , 0 ◦ x 2 , 2 ◦ x 3 , 16 ◦ x 4 , 14 ◦ x 5 ;

d(D24 , D25 ) = 10,

} { C 27 = C 24 + C 25 = 6 ◦ x 1 , 11 ◦ x 2 , 10 ◦ x 3 , 4 ◦ x 4 , 1 ◦ x 5 ; d(D23 , D26 ) = 24,

{ } C 28 = C 23 + C 26 = 0 ◦ x 1 , 2 ◦ x 2 , 6 ◦ x 3 , 24 ◦ x 4 , 16 ◦ x 5 .

A procedure can be stopped if we restrict to some small distance, for example, equal to 12. Then there are three clusters D26 , D23 , D27 . The clusters D26 and D23 include, respectively, the objects O1 , O6 , O5 , O8 and O4 , O10 , which have generally the ‘high’ marks x 4 , x 5 . The cluster D27 contains the objects O7 , O9 , O2 , O3 , which have generally the ‘low’ marks x 1 , x 2 , x 3 . The hierarchical tree depicting the result of group classification of objects is shown in Fig. 4.1. The trees depicting the results of individual classifications of objects by assessments of the experts 1 and 2 are shown in Figs. 4.2 and 4.3, respectively. From the figures we can see clearly that the individual classifications of objects are somewhat different from each other, but coincide, in general, with the group classification.

90

O1 O6 O4 O8 O5 O10 O2 O7 O9 O3 d11 0

4 Group Classification of Multi-attribute Objects

D02 D07 D01

D05 D04 D03 D06 4

8

12

16

20

24

28

32

D08 36

28

32

28

32

Fig. 4.1 Group classification of objects by assessments of two experts

O1 O6 O5 O4 O8 O10 O7 O9 O2 O3 d11 0

D11 D15 D12

D18 D16 D13 D17 D14

4

8

12

16

20

24

Fig. 4.2 Individual classification of objects by assessments of the expert 1

O1 O6 O5 O8 O4 O10 O7 O9 O2 O3 d11 0

D22 D26

D28

D21 D23 D24 D27 D25 4

8

12

16

20

24

Fig. 4.3 Individual classification of objects by assessments of the expert 2


Present the results of clustering the multi-attribute objects in the following schematic form, where close objects are enclosed in round brackets and clusters are enclosed in square brackets:

classes by assessments of two experts:
P_gr^K ⇔ [(O4, O8), (O1, O6), (O5, O10)], [(O2, O7), O9, O3];    (4.7)

classes by assessments of the expert 1:
P^K' ⇔ [((O1, O6), O5), ((O4, O8), O10)], [(O7, O9), (O2, O3)];    (4.8)

classes by assessments of the expert 2:
P^K'' ⇔ [(O5, O8), ((O1, O6), (O4, O10))], [((O7, O9), (O2, O3))].    (4.9)


The results of the group classification of multi-attribute objects obtained with the CLAVA-HI method can also be written in natural language in the form of decision rules as follows:
IF an object has the marks '4/good' or '5/excellent', THEN the object belongs to the class (cluster) D07 'Good objects';
IF an object has the marks '1/very bad', '2/bad' or '3/satisfactory', THEN the object belongs to the class (cluster) D08 'Bad objects'.
When deriving these decision rules, we nowhere used information about the nature of the attributes describing objects, namely, whether these attributes are numerical, symbolic or verbal, ordered or not. If the attributes are not ordered, then the built classifications of objects are nominal. If the attributes are ordered by preference, for instance, as x1 ≺ x2 ≺ x3 ≺ x4 ≺ x5, then the built classifications of objects are ordinal, and the class 'Good objects' is preferable to the class 'Bad objects'.
The group ordinal classification of multi-attribute objects P_gr^K (4.7) according to the CLAVA-HI method coincides, in general, with the group rankings of objects R_gr^P (3.13) and R_gr^A+ (3.18) obtained by the RAMPA and ARAMIS methods. The best objects by the aggregated assessments of two experts, as well as by the individual assessments of each expert, are O1 and O6, which are included in the same cluster; the worst objects are O2 and O3. It is very important to note that the individual classification P^K' (4.8) of multi-attribute objects, represented by multisets of verbal estimates of the expert 1, completely coincides with the result given in [2]. In that book, each object Oi, i = 1, …, m was specified by a vector x_i = (x_{i1}^{e_1}, …, x_{i8}^{e_8}) of numerical estimates, whose components x_{il}^{e_l}, l = 1, …, n took the values 1, 2, 3, 4, 5 (Table 3.1), and clustering was carried out in a different way using the summation algorithm. These two absolutely different methods identified the classes of close objects equally successfully, which is evidence of the effectiveness and reliability of our approach.
In the CLAVA-HI and CLAVA-NI methods, all multisets representing multi-attribute objects, as well as the distances (4.6) between objects, are formed using only


summation of estimates by experts and upon criteria. Therefore, the compositions of clusters formed in accordance with rules (4.2)–(4.5), including the clusters (4.7)–(4.9), do not depend on the order of the estimates' summation. Thus, the CLAVA-HI and CLAVA-NI methods for group classification of objects by proximity of features satisfy the principle of invariance of individual preferences' aggregation.
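Before moving on, note that the two natural-language rules derived in this example translate into a trivial classifier over a multiset of marks. The following Python sketch is only one possible reading of those rules (objects with exclusively 'high' or exclusively 'low' marks), not a method from the book:

```python
# Hypothetical rendering of the derived decision rules: an object whose marks are
# only 'good'/'excellent' goes to 'Good objects', an object whose marks are only
# 'satisfactory' or worse goes to 'Bad objects'; mixed objects are left undecided.
def classify(multiset):
    """multiset: dict grade -> multiplicity, grades 'x1'..'x5'."""
    high = multiset.get("x4", 0) + multiset.get("x5", 0)
    low = multiset.get("x1", 0) + multiset.get("x2", 0) + multiset.get("x3", 0)
    if high > 0 and low == 0:
        return "D07: Good objects"
    if low > 0 and high == 0:
        return "D08: Bad objects"
    return "undecided by these two rules"

print(classify({"x4": 16, "x5": 16}))   # object O1: 'D07: Good objects'
```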

4.3 Group, with Teachers, Classification of Objects by Aggregated Decision Rules: Method MASKA

In the task of collective classification with several teachers, it is required to allocate m multi-attribute objects O1, …, Om into the given classes (categories) D1, …, Dg, which differ in their properties. Objects are evaluated by t decision makers/experts upon n criteria K1, …, Kn with numerical and/or verbal scales X_l = {x_l^1, …, x_l^{h_l}}, l = 1, …, n, and each expert preassigns an object to one of the classes Df, f = 1, …, g. Thus, there are t different versions of each object Oi, i = 1, …, m and t individual expert rules for sorting objects, usually not consistent with each other. A group classification of objects expresses the collective preference/knowledge of the members of a decision making group (DMG), represents the individual preferences/knowledge of the actors in an aggregated form, and is ordinal or nominal depending on whether the estimation grades on the criteria scales are ordered or not.
To solve the task of collective, with teachers, classification of multi-attribute objects, it is necessary to construct one or several fairly simple group rules: IF ⟨…⟩, THEN ⟨…⟩. These rules have to correspond as closely as possible to the individual rules of expert sorting and allow us to assign objects to the given classes, taking into account possible inconsistency of expert assessments and decision rules.
The method for multi-aspect consistent classification of alternatives (MASKA) is intended for a collective, with teachers, classification of objects, present in several versions, according to aggregated decision rules [12, 13, 17, 20–26]. The method is based on aggregation of individual preferences of several experts, which are expressed by object assessments upon many criteria with numerical and/or verbal scales and by individual rules for sorting objects, and are represented by multisets. The method allows:
• to construct one or more generalized group rules for classifying objects that aggregate distinguished individual assessments and rules for sorting objects, and to find consistently classified objects;
• to discover or identify contradictory individual classification rules and to find misclassified objects.
The main idea of finding a fairly simple group rule for classification of multi-attribute objects, which aggregates different individual sorting rules of many experts in the best way, is as follows.



Let us represent multi-attribute objects O1 ,…, Om and their versions as multisets into the form (3.2), generated by the extended set X ' = X 1 ∪. . .∪ X n ∪ R of attributes. The set X ' combines subsets X l = {x l 1 ,…, xlhl }, l = 1,…, n of meaningful attributes, which are gradations on the scales of criteria K 1 ,…, K n , and a subset R = {r 1 ,…, r g } of sorting attributes, where r f characterizes an object membership to the class Df , f = 1,…, g. The votes’ majority rule for assigning each object Oi to one of the decision classes D1 ,…, Dg is formulated on the basis of some considerations taking into account individual opinions of experts. For example, an object Oi assigns to the class Df if a number k Ai (r f ) of experts with such an opinion is greater than a half of the total number t of experts that is k Ai (r f ) > t/2, or is greater than a numbers k Ai((r q )) of experts with opinions that is k Ai (r f ) > k Ai (r q ) for all q /= f , or k Ai r f > ( different ) ∑ r k . According to the accepted rule of votes’ majority, all objects are Ai q q/= f allocated into classes D1 ,…, Dg , which are represented by multisets that are sums of multisets describing objects O1 ,…, Om . For simplicity of calculations and convenience of interpretation of the final results, it is advisable to go from partition of object collection into g classes to partition into only two classes: Da (objects of sort a) and Db (objects of sort b), combining for this the given classes D1 ,…, Dg into classes Da or Db . A membership of the object Oi to the decision class Da or Db is characterized by the total number of experts’ votes, which is obtained by adding votes, initially cast by experts for assigning objects to certain initial classes that are included, respectively, in the classes Da or Db . Objects’ decomposition namely into two classes is not a fundamental limitation of the method. If it is necessary to allocate objects into more classes, we can firstly partition all objects into two classes, then partition one of classes or both of them into subclasses, and so on. For example, if it is required to select and sort competitive projects, then first these projects can be divided into accepted and rejected, then the accepted projects can be divided into unconditionally and conditionally supported, and the rejected projects can be divided into conditionally and unconditionally rejected, and so on. For object allocation into two classes Da and Db , the rule of votes’ majority is generalized. We introduce a set Y = {ya , yb } of new attributes that characterizes an object membership to the classes Da and Db , and construct two types of new multisets over the set Y. New multisets of the first type Ra and Rb specify two subgroups (granules) of sorting attributes, which correspond to decision rules for sorting objects and determine a decomposition of objects into the classes Da and Db that is the best for the existing individual rules of expert sorting. New multisets of the second type Qla and Qlb specify l subgroups of meaningful attributes, which correspond to the features of objects characterized by the criteria K 1 ,…, K n . For each l-th subgroup of meaningful attributes, using new multisets of the second type, it is necessary to find such subgroups (granules) X la and X lb of attributes that will determine a decomposition of objects into the classes Da and Db , which is most consistent with individual rules of expert sorting. 
The meaningful attributes included in the found subgroups and describing the classes of objects will be called the classifying attributes of these classes. Classifying attributes that provide an



acceptable allocation of objects into the classes Da and Db , comparable to decomposition by sorting attributes, are included in the generalized collective decision rules for classifying objects. Various sets of subgroups of classifying attributes generate the generalized rules for classifying objects with different approximation accuracy. Among the objects assigned to the given classes in accordance with the generalized decision rules, there can be both correctly and incorrectly classified objects. To improve the classification, combinations of various classifying attributes are considered and are selected those of them that provide the maximum difference between numbers of correctly and incorrectly classified objects. Namely these attributes are included in the improved group rules for classification, according to which the final object membership to corrected classes is established, and the contradictory classified objects are also identified. The MASKA method includes two algorithms A and B that are executed sequentially. Algorithm A. Search for classifying attributes and construction of generalized group rules for objects’ partition into classes. 10 . Each expert s, s = 1,…, t evaluates all objects O1 ,…, Om upon each criterion K l that has a raiting scale X l = {x l 1 ,…, xlhl }, l = 1,…, n of quantitative or qualitative estimates, and assigns each object Oi to one of the given classes D1 ,…Dg . Ordinal scales of criteria are considered to be ordered from more preferable to less preferable gradations, for example, as xl1 . xl2 . . . . . xlhl . 20 . When criteria K 1 ,…, K n have different importances (weights) for a superdecision maker and/or experts, determine an importance wl of the criterion K l , for example, using ∑ n one of the procedures [18, 19]. Weights of criteria can be normalized wl = 1. as follows: l=1 30 . When experts have different competencies and/or influences, introduce a competency indicator c of the expert s, for example, based on results of a mutual evaluation of experts or an opinion of a super-decision ∑ maker [18]. Indicators of expert competencies can be normalized as follows: ts=1 c = 1. 40 . Present a version of the object Oi evaluated by the expert s as a multiset ( ) { ( 1)

x1h 1 ◦ x1h 1 ; . . . ; x1 ◦ x11 , . . . , k Ai = k Ai Ai ( 1) ( hn ) ( ) }

k xn ◦ xn1 , . . . , k xn ◦ xnh n ; k Ai Ai A i (r 1 ) ◦ r 1 , . . . , k A i r g ◦ r g (4.10) over the extended set X ' = X 1 ∪ . . . ∪ X n ∪ R of attributes. The set X ' includes subsets X l = {x l 1 ,…, xlhl }, l = 1,…, n of meaningful estimates upon criteria K l and a subset R = {r 1 ,…, r g } of sorting attributes. The multiplicity k Ai (xlel ) = 1 if the expert s gave the estimate xlel ∈ X l , el =1,…, hl upon the criterion K l to the object Oi , and k Ai (xlel ) = 0 if the estimate xlel was not marked. The multiplicity k Ai (r f ) = 1 if the object Oi belongs to the class Df , f = 1,…, g, according to an opinion of the expert s, and k Ai (r f ) = 0, otherwise.

4.3 Group, with Teachers, Classification …

95

Specifying the multi-attribute object Oi into the form of a multiset Ai (4.10) can also be considered as an individual decision rule of the expert s: IF , THEN . This rule is associated with multiset arguments in the following way. The term includes meaningful expert estimates of the object Oi upon the criteria K 1 ,…, K n , which describe the object features. The term contains the sorting attribute r f , which reflects an opinion of the expert s about a membership of the object Oi to the class Df . 50 . Present the object Oi evaluated by all experts as a multiset ( ) { ( ) t ∑ Ai = c Ai = k Ai x11 ◦ x11 , . . . , k Ai x1h 1 ◦ x1h 1 ; . . . ; ( s=1 ) ( ) ( ) } k Ai xn1 ◦ xn1 , . . . , k Ai xnh n ◦ xnh n ; k Ai (r1 ) ◦ r1 , . . . , k Ai r g ◦ r g

(4.11)

over the extended set X ' = X 1 ∪ . . . ∪ X n ∪ R of meaningful and sorting attributes. The multiset Ai characterizes an aggregated assessment of the object Oi and is a

(4.10) of individual weighted sum of multisets ( el A ) i ∑ ( el ) expert estimates of the object xl is equal to a weighted number Oi . The multiplicity k Ai xl = ts=1 c k Ai el ∈ X l , el = 1,…, of experts, who has the competence c and gave the estimate ( ) xl ∑ ( h)l upon the criterion K l to the object Oi . The multiplicity k Ai r f = ts=1 c k rf Ai is equal to a weighted number of experts, who made the conclusion r f about a membership of the object Oi to the class Df . Assessments and conclusions of different experts can be identical, distinguished, contradictory. This inconsistency characterizes a subjectivity of expert judgments, which cannot be considered accidental errors, but express individual points of view of experts. Specifying the multi-attribute object Oi into the form of a multiset Ai (4.11) can also be considered as a collective decision rule: IF , THEN , which integrates individual rules for expert sortings of objects into g classes. This rule is associated with multiset arguments in the following way. The term includes a set of meaningful expert estimates xlel of the object Oi upon the criteria K 1 ,…, K n , which describe the object features. The term contains a set of sorting attributes r 1 ,…, r g , which reflect opinions of several experts about a membership of the object Oi to classes D1 ,…, Dg . ' The collection of objects O1 ,…, m and the set X of attributes are specified by the || O|| || ' || ' , h = h1 + … + hn , which is called an matrix ‘Object-Attributes’ H = ||ki j || m×(h+g)

information table or decision table and is an extended matrix H. Rows of the matrix H' are multisets Ai describing objects Oi , i = 1,…, m. Columns present attributes x j ' ∈ X ' , j = 1,…, h + g, which are gradations xlel ∈ X l , el = 1,…, hl of meaningful estimates on rating scales of criteria K l or sorting attributes r 1 ,…, r g . Elements k ij ' are multiplicities k Ai (x j ' ) of corresponding attributes x j ' . 60 . Form the classes Da (sort a) and Db (sort b) of objects by combining certain classes from the collection D1 ,…, Dg in accordance with some rules of votes’ majority For example, state the following group rules for assigning objects to classes Da and Db :

96

4 Group Classification of Multi-attribute Objects

Object

(4.12)

Object

(4.13)

where F a , F b are subsets of indices of classes included respectively in the classes Da and Db , Fa ∩ Fb = ∅, Fa ∪ Fb = {1,…, g}. Then 1 ,…, r g{} of } R = {r∑ } sorting ∑ the{subset attributes is divided into two subsets Ra = f ∈Fa r f , Rb = f ∈Fb r f , which determine a membership of the object Oi to the class Da or Db . 70 . Present the object Oi evaluated by all experts as a multiset ( ) { ( ) B i = k Bi x11 ◦ x11 , . . . , k Bi x1h 1 ◦ x1h 1 ; . . . ; ( ) ( ) } k Bi xn1 ◦ xn1 , . . . , k Bi xnh n ◦ xnh n ; k Bi (ra ) ◦ ra , k Bi (rb ) ◦ rb

(4.14)

over the extended set X ' = X 1 ∪ . . . ∪ X n ∪ Ra ∪ Rb of meaningful ∑ and sorting attributes. Here the multiplicities k Bi (xlel ) = k Ai (xlel ); k Bi (r p ) = f ∈Fp k Ai (r f ), p = a, b. Specifying the multi-attribute object Oi into the form of a multiset Bi (4.14) can also be considered as a collective decision rule: IF , THEN , which integrates individual rules for expert sortings of objects into two clases. This rule is associated with multiset arguments in the following way. The term includes a set of meaningful expert estimates xlel of the object Oi upon the criteria K 1 ,…, K n , which describe the object features. The term contains aggregated sorting attributes r a , r b , which reflect collective opinions of experts on a membership of object Oi to the class Da or Db . ' The collection of objects O1 ,…, O ||m and || the set X of attributes are specified by || ' || ' , h = h1 + … + hn , which is also the matrix ‘Object-Attributes’ H = ||ki j || m×(h+2)

called an information table or decision table. Rows of the matrix H' are multisets Bi describing objects Oi , i = 1,…, m. Columns present attributes x j ' ∈ X ' , j = 1,…, h + 2, which are gradations xlel ∈ X l , el = 1,…, hl of meaningful estimates on rating scales of criteria K l or sorting attributes r a , r b . Elements k ij ' are multiplicities k Bi (x j ' ) of corresponding attributes x j ' . 80 . Present the class Dp , p = a, b as a multiset Cp =



( ) { ( ) B i = kC p x11 ◦ x11 , . . . , k C p x1h 1 ◦ x1h 1 ; . . . ;

i∈I p

( ) ( ) } k C p xn1 ◦ xn1 , . . . , k C p xnh n ◦ xnh n ; k C p (ra ) ◦ ra , k C p (rb ) ◦ rb

(4.15)

over the extended set X ' = X 1 ∪ . . . ∪ X n ∪ Ra ∪ Rb of meaningful and sorting attributes. The multiset C p is a sum of multisets Bi (4.14) of aggregated ∑ expert estimates of objects forming the class Dp . The multiplicities k Cp (xlel ) = i∈I p k Bi (xlel );

4.3 Group, with Teachers, Classification …

97

∑ ∑ k Cp (r a ) = i∈I p k Bi (r a ), k Cp (r b ) = i∈I p k Bi (r b ), p = a, b; I a , I b are subsets of indices of objects included into classes Da and Db , Ia ∩ Ib = ∅, Ia ∩ Ib = {1, . . . , m}. Specifying the class Dp into the form of a multiset C p (4.15) can also be considered as a collective decision rule: IF , THEN , which combines individual rules for expert sorting of objects. This rule is associated with multiset arguments in the following way. The term includes combinations of meaningful expert estimates xlel of objects upon the criteria K 1 ,…, K n , which describe the class properties. The term contains aggregated sorting attributes r a , r b , which summarizes individual opinions of experts about a membership of objects to the class Da or Db in accordance with the stated rule of votes’ majority. ' The classes Da , Db of objects || ||and the set X of attributes are specified by the matrix || ' || which we shall call an aggregated decision ‘Classes-Attributes’ L = ||k pj || 2×(h+2)

table. Two rows of the matrix L are the multisets C a and C b describing the classes Da and Db . Columns present attributes x j ' ∈ X ' , j = 1,…, h + 2, which are gradations xlel ∈ X l , el = 1,…, hl of meaningful estimates on rating scales of criteria K l or sorting attributes r a , r b . Elements k pj ' are multiplicities k Cp (x j ' ) of corresponding attributes xj ' . −1 0 || 9 ||. Construct an inverted aggregated decision table—an inverse matrix L = || ' || , which consists of h + 2 rows and 2 columns. Each row of the matrix ||k j p || (h+2)×2

L−1 represents one of the attributes x j ' ∈ X ' . Columns are the multisets C a i C b (4.15) describing classes Da and Db . Elements k jp ' are multiplicities k Cp (x j ' ) of corresponding attributes x j ' , which are sorting attributes r a , r b or gradations meaningful estimates xlel ∈ X l , el = 1,…, hl on rating scales of criteria K l . 100 . Introduce a set Y = {ya , yb } of new attributes, elements of which characterize a membership of objects to the classes Da and Db . Let us associate rows of the matrix L−1 with new multisets } { Q lel = k Qlel (ya ) ◦ ya , k Qlel (yb ) ◦ yb ,

} { Rq = k Rq (ya ) ◦ ya , k Rq (yb ) ◦ yb

over the set Y = {ya , yb }. Here the multiplicities k Qle l (yp ) = k Cp (xlel ), el = 1,…, hl ; k Rq (yp ) = k Cp (r q ), q, p = a, b. We shall call the multisets Q lel substantial and the multisets Rq categorical. Multisets Q lel and Rq will be considered as points of the Petrovsky multiset metric space (P , d), where Q = {Q11 ,…, Q 1h 1 ;…; Qn1 ,…, Q nh n ; Ra , Rb }, d is one of the metrics (3.7). The categorical multisets Ra and Rb are located at the greatest distance d* = d(Ra , Rb ) in the space (Q , d) and define a partition of the object collection O1 ,…, Om into the classes Da and Db , which is the best decomposition for the given individual rules of expert sorting. When there are no contradictions between the individual expert rules, the distance d* = d(Ra , Rb ) is equal to d 11 * = mt, d 21 * = 1/(h + 2), d 31 * = 1, respectively. 0 ∑ 11 . For each criterion ∑ K l , l = 1,…, n, generate substantial multisets Qla = Q i Q = lb le el ∈Ela el ∈Elb Q lel , where subsets E la and E lb of indices are deterl mined by the classes Da and Db according to the majority rule, Ela ∩ Elb =

98

4 Group Classification of Multi-attribute Objects

∅, Ela ∩ Elb = {1, . . . , h l }. In the l-th attribute group, the multisets Qla and Qlb define different partitions of objects O1 ,…, Om into the classes Da and Db for the stated individual rules ∪ of expert A multiset Qlp corresponds to a certain { sorting. } subset (granule) X lp = el ∈Elp xlel , p = a, b of meaningful attributes. 120 . For each criterion K l , l = 1,…, n, find substantial multisets Qla * i Qlb *, which are located at the maximum distance in the metric space (Q, d) and are a solution to the optimization problem: ) ( ) ( ∗ ) ( ∗ d Q la , Q lb → max d Q la , Q lb = d Q la , Q lb

(4.16)

In the l-th group of meaningful attributes, the multisets Qla * and Qlb * define the best partition of objects O1 ,…, Om into the classes Da and Db . We will evaluate quality of objects’ decomposition into classes by a degree of approximation V l = d(Qla *, Qlb *)/ d(Ra , Rb ). Depending on a degree of approximation V l , we arrange multisets Qlp *, p = a, b by significance as Q ∗up . Q ∗vp . . . . . Q ∗zp . 130 . Determine an acceptable quality of objects’ decomposition into classes, setting the level of approximation V 0 , V l ≥ V 0 , l = u, v, w,… Generate the generalized group decision rules for allocation of objects O1 ,…, Om into the classes Da of more preferable objects and classe Db of less preferable objects in the multiset metric space (Q, d): (4.17) Object

(4.18) Object



Considering that a subset X lp * = el ∈Elp {xlel }, p = a, b of classifying attributes corresponds to each multiset Qlp *, the generalized group rules for classifying multiattribute objects can be rewritten in the following equivalent form: (4.19) Object

(4.20) Object

Subsets X up *, X vp *, X wp *,… of classifying attributes, which provide the acceptable degree of approximation V l ≥ V 0 , l = u, v, w,…, are arranged by significance as ∗ ∗ ∗ X up . X vp . X wp ... . When we apply simultaneously group sorting rules (4.12), (4.13) and generalized group decision rules (4.17)–(4.20), contradictions are possible in determining an object membership to any class, and classes may contain both correct and incorrect

4.3 Group, with Teachers, Classification …

99

classified objects. To improve the classification of objects, we use the following algorithm B. Algorithm B. Construction of group classification rules for identification of consistently and contradictory classified objects. ∗ ∗ ∗ 10 . Generate a subset X a∗ = X ua ∪ X va ∪ . . . ∪ X za of classifying attributes that determine an object assignment to the class Da . For each classifying attribute xlel ∈ X a * included in the generalized decision rule (4.19) and providing the highest degree of approximation maxl V l = V l *, find correctly and incorrectly classified objects. 20 . Select a classifying attribute xueu ∗ ∈ X a *, which provides the maximum difference N a − N ac between numbers N a of correctly and N ac of incorrectly classified objects. Include the attribute xueu ∗ into the improved group decision rule that determines an object membership to the corrected class Da \Dac of unconditionally preferable objects. 30 . Consider all combinations of the attribute xueu ∗ with other classifying attributes el xl ∈ X a * included in the generalized decision rule (4.19) for assigning an object to the class Da . Find correctly and incorrectly classified objects. 40 . Select two classifying attributes xueu ∗ ∈ X a *, xvev ∗ ∈ X a *, which provide the maximum difference N a − N ac between numbers N a of correctly and N ac of incorrectly classified objects. Include the attributes xueu ∗ , xvev ∗ into the improved group decision rule that determines an object membership to the corrected class Da \Dac . 50 . Consider all combinations a pair of the attribute xueu ∗ , xvev ∗ with other classifying attributes xlel ∈X a * included in the generalized decision rule (4.19). Find correctly and incorrectly classified objects. 60 . Select three, then four, and so on of the classifying attributes xueu ∗ ∈ X a *, xuev ∗ ∈ X a *, xuew ∗ ∈ X a *,…, which consequently, step by step, increase the difference N a − N ac between numbers N a of correctly and N ac of incorrectly classified objects. Include the attributes xueu ∗ , xuev ∗ , xuew ∗ ,… into the improved group decision rule that determines an object membership to the corrected class Da \Dac . 70 . Form the improved group rule for constructing the corrected class Da \Dac of unconditionally preferable objects: (4.21) Object ∗ ∗ ∗ 80 . Generate a subset X b∗ = X ub ∪ X vb ∪ . . . ∪ X zb of classifying attributes that determine an object assignment to the class Db . For each classifying attribute xlel ∈ X b * included in the generalized decision rule (4.20) and providing the highest degree of approximation maxl V l = V l *, find correctly and incorrectly classified objects. 90 . Select in a similar way the classifying attributes xueu ∗ ∈ X b *, xuev ∗ ∈ X b *, ew ∗ xu ∈ X b *, …, which consequently, step by step, increase the difference N b − N bc between numbers N b of correctly and N bc of incorrectly classified objects. Include the attributes xueu ∗ , xuev ∗ , xuew ∗ ,… into the improved group decision rule that determines an object membership to the corrected class Db \Dbc of unconditionally non-preferable objects.

100

4 Group Classification of Multi-attribute Objects

100 . Form the improved group rule for constructing the corrected class Db \Dbc of unconditionally non-preferable objects: (4.22) Object

110 . Form the group rule for constructing a class Dc = Dac ∪ Dbc of objects with contradictory individual sorting rules:

(4.23) Object

120 . If necessary, perform an additional analysis of incorrectly classified objects. Identify individual expert rules, which contain contradictions between assessments of the object for meaningful and sorting attributes that determines an object membership to a certain class. If possible, eliminate the causes of contradictions. The improved group rules (4.21), (4.22) for assigning an object to the corrected classes Da \Dac and Db \Dbc approximate the individual sorting rules of many experts and, generally speaking, differ. The rules (4.19)–(4.23) for the group classification of multi-attribute objects can be easily written in natural language using the verbal formulations of classifying attributes. Application of the MASKA method is greatly simplified when classified objects O1 ,…, Om have estimates upon criteria K 1 ,…, K n with the same numerical or verbal scales X l = {x 1 ,…, x h }, l = 1,…, n. In such cases, a multi-attributes object Oi , i = 1,…, m evaluated by the expert s is represented by a multiset } { ( ) ( )

( )

rg ◦ rg , Ai = k Ai x11 ◦ x11 , . . . , k Ai x1h ◦ x1h ; k Ai (r1 ) ◦ r1 , . . . , k Ai (4.24) and evaluated by all experts is represented by a multiset Ai =

t ∑

c Ai

s=1

( ) ( ) } { ( ) = k Ai x 1 ◦ x 1 , . . . , k Ai x h ◦ x h ; k Ai (r1 )◦r1 , . . . , k Ai r g ◦ r g

(4.25)

over the set X ' = X ∪ R of meaningful and sorting attributes, which is a weighted sum of multisets Ai (4.24) of individual expert estimates. The collection of objects O1 ,…, Om||and|| the set X ' of attributes are specified || || or decision table, which is an by the matrix ‘Object-Attributes’ G' = ||ki' j || m×(h+g)

4.4 Demonstrative Example: Method MASKA

101

extended data matrix G. Rows of the matrix G' are multisets Ai (4.25) describing objects Oi . Columns present attributes x j ' ∈ X ' , j = 1,…, h + g, which are gradations of meaningful estimates x e ∈ X, e = 1,…, h on rating scales of criteria K l or sorting attributes r 1 ,…, r g . Elements k ij ' are multiplicities k Ai (x j ' ) of corresponding attributes xj ' . The classes D1 ,…, Dg of objects and the set X ' of attributes ||are specified by an || || ' || ' . Rows aggregated decision table—the matrix ‘Classes-Attributes’ L = ||k f j || g×(h+g)

of the matrix L' are multisets C f describing classes Df , f = 1,…, g, which are sums of multisets Ai (4.25) describing objects included in the classe Df . Columns present attributes x j ' ∈ X ' , which are gradations x e ∈ X of meaningful estimates on rating scales of criteria K l or sorting attributes r 1 ,…, r g . Elements k fj ' are multiplicities k Cf (x j ' ) of corresponding attributes x j ' .

4.4 Demonstrative Example: Method MASKA Using the MASKA method, let us classify, with teachers, ten objects (pupils) O1 ,…, O10 , which are evaluated by two experts (per two semesters) upon eight qualitative criteria (the studied subjects) K 1 ,…, K 8 and are presorted into the best objects and the worst objects. All experts are equally competent c = 1. An importance wl of the criterion K l can be different or the same for experts. It is required to generate group decision rules for dividing objects into two classes: Da of more preferable and Db of less preferable objects; find correctly and incorrectly classified objects; interpretate the obtained classes. Consider firstly the simplest case of individual classification, when objects are evaluated by only one expert (namely, the expert 1), criteria K 1 ,…, K 8 have the same five-point rating scale X = {x 1 , x 2 , x 3 , x 4 , x 5 }, where x 1 is 1/very bad, x 2 is 2/bad, x 3 is 3/satisfactory, x 4 is 4/good, x 5 is 5/excellent. Each object Oi , i = 1,…, 10 is specified with a multiset into the form (4.25) ( ) } { ( ) Ai = k Ai x 1 ◦ x 1 , . . . , k Ai x 5 ◦ x 5 ; k Ai (ra ) ◦ ra , k Ai (rb ) ◦ rb , where the multiplicities k Ai (x e ) of meaningful attributes x e , e = 1,…, 5 are equal to numbers of gradations of object estimates for all criteria, and the multiplicities k Ai (r a ) and k Ai (r b ) of sorting attributes r a and r b , respectively, equal to sums of numbers of the best estimates x 5 , x 4 and sums of numbers of the worst estimates||x 3 , ||x 2 , x 1 of || || objects. Objects and their attributes are shown in the decision table G' = ||ki' j || 10×(5+2)

(Table 4.1) that is an extended data matrix G (Table 3.2). Elements k ij ' of matrix G' are multiplicities of attributes. To classify multi-attributes objects, we shall use Algorithm A. Let us assign objects to the classes Da and Db by votes’ majority rules (4.12), (4.13) in the following form:

102

4 Group Classification of Multi-attribute Objects

Table 4.1 Decision table G' O\X '

x1

x2

x3

x4

x5

ra

rb

A1

0

0

0

4

4

8

0

A2

2

4

1

1

0

1

7

A3

5

0

1

2

0

2

6

A4

0

1

1

3

3

6

2

A5

0

0

0

7

1

8

0

A6

0

0

0

4

4

8

0

A7

2

2

3

1

0

1

7

A8

0

1

2

3

2

5

3

A9

1

3

4

0

0

0

8

A10

0

0

1

2

5

7

1

“If a sum of numbers r a of the best estimates of an object is greater than or equal to a sum of numbers r b of the worst estimates, then an object is included in the more preferable class Da . Otherwise, an object is included in the less preferable class Db ”. According to these rules, objects O1 , O4 , O5 , O6 , O8 , O10 belong to the more preferable class Da , and objects O2 , O3 , O7 , O9 belong to the less preferable class Db . The classes Da and Db are described by multisets } { C a = 0 ◦ x 1 , 2 ◦ x 2 , 4 ◦ x 3 , 23 ◦ x 4 , 19 ◦ x 5 ; 42 ◦ ra , 6 ◦ rb , } { C b = 10 ◦ x 1 , 9 ◦ x 2 , 9 ◦ x 3 , 4 ◦ x 4 , 0 ◦ x 5 ; 4 ◦ ra , 28 ◦ rb , obtained by adding multisets that represent appropriate||objects. The multisets C a || || ' || and C b form rows of the aggregated decision table L = ||ki j || (Table 4.2). 2×(5+2) || || || || Taking into consideration the inverted aggregated decision table L = ||ki' j || , (5+2)×2

let us construct substantial and categorical multisets

Q 1 = {0 ◦ ya , 10 ◦ yb }, Q 2 = {2 ◦ ya , 9 ◦ yb }, Q 3 = {4 ◦ ya , 9 ◦ yb }, Q 4 = {23 ◦ ya , 4 ◦ yb }, Q 5 = {19 ◦ ya , 0 ◦ yb }, Ra = {42 ◦ ya , 4 ◦ yb }, Rb = {6 ◦ ya , 28 ◦ yb }.

Table 4.2 Aggregated decision table L (first case) D\X '

x1

x2

x3

x4

x5

ra

rb

Ca

0

2

4

23

19

42

6

Cb

10

9

9

4

0

4

28

4.4 Demonstrative Example: Method MASKA

103

) ∑ | ( ) ( )| ( We shall use the metric d Q u , Q v = p |k Q u y p − k Q v y p |, p = a, b (4.6) as a measure of multiset proximity in the metric space (Q , d), Q = {Q1 ,…, Q5 ; Ra , Rb } assuming all attributes yp to be equally important (wp = 1). The distance between categorical multisets is d ∗ = d(Ra , Rb ) = |42 − 6| + |4 − 28| = 60. A solution to the only optimization problem (4.16) is the following substantial multisets: Q a∗ = Q 4 + Q 5 = {42 ◦ ya , 4 ◦ yb },

Q ∗b = Q 1 + Q 2 + Q 3 = {6 ◦ ya , 28 ◦ yb }.

Among all possible combinations of pairs of substantial multisets, the multisets Qa * and Qb * are at the maximum distance ) ( d Q a∗ , Q ∗b = |42 − 6| + |4 − 28| = 60, which is equal to the distance d* = d(Ra , Rb ) between categorical multisets. The multiset Qa * defines a subset X a * = {x 4 , x 5 } of classifying attributes that characterize the class Da . The multiset Qb * defines a subset X b * = {x 1 , x 2 , x 3 } of classifying attributes that characterize the class Db . The generalized decision rules (4.19), (4.20) for classifying multi-attributes objects by assessments of the expert 1, written in natural language, look like this: “If an object has the estimates ‘4/good’ or ‘5/excellent’ by most criteria, then an object belongs to the more preferable class Da ”. “If an object has the estimates ‘1/very bad’, ‘2/bad’ or ‘3/satisfactory’ by most criteria, then an object belongs to the less preferable class Db ”. Thus, the class Da includes objects O1 , O4 , O5 , O6 , O8 , O10 , and the class Db includes objects O2 , O3 , O7 , O9 . The objects’ classification by assessments of the expert 1 is nominal and has the following schematic form:

P ⇔ [O1 , O4 , O5 , O6 , O8 , O10 ], [O2 , O3 , O7 , O9 ]. M

(4.26)

We pay attention to such important points. Firstly, the object classification P M (4.26) according to the generalized decision rules (4.19), (4.20) does not differ from the classification according to the “votes’ majority” rules (4.12), (4.13). Secondly, the distances d(Ra , Rb ) and d(Qa *, Qb *) are equal to each other, and a degree of approximation of all meaningful features V l = 1. These circumstances are explained by the fact that individual decision rules for expert sorting of objects do not contain contradictions. Therefore, the generalized rules for classifying are immediately produced to be consistent, and the classification of objects to be non-contradictory. This eliminates the need to use Algorithm B. In general case, this may not take place. The classification results can be made more concrete if additional information about expert preferences will be obtained. Let us assume that different grades of

104

4 Group Classification of Multi-attribute Objects

estimates for each criterion have different significance for a decision maker/expert. For example, grades are ordered by preference as x 1 ≺ x 2 ≺ x 3 ≺ x 4 ≺ x 5 . Suppose that the expert 1 specifies the preference for objects O1 ,…, O10 using the lexicographic ordering method. At the first place, there is an object with the maximal number of the highest estimates x 5 ‘5/excellent’. If there are several objects with the same number of the highest estimates x 5 , then these objects occupy places depending on a number of the estimates x 4 ‘4/good’. If numbers of the estimates x 5 and x 4 are equal, objects are ordered by a number of the estimates x 3 ‘3/satisfactory’. Then objects are sorted one by one according to numbers of the estimates x 2 and x 1 . The final classification of objects according to assessments of the expert 1 is as follows:

P ⇔ [O10 , (O1 , O6 ), O4 , O8 , O5 ], [O3 , O7 , O2 , O9 ]. Ʌ

(4.27)

Here, objects, that occupy the same place in the class, are enclosed in round brackets. Note that the place of the country in the team rating at the Olympic Games is determined exactly in the same way (by numbers of the medals won). Now let the expert 1 specifies the preference for objects O1 ,…, O10 by a weighted ∑ sum of estimates into the form (2.1), which serves as a total value function v(Oi ) = 5e=1 we v e (Oi ) of the object Oi , i = 1,…, 10. In this case, a partial value function ve (Oi ) is a number of appropriate grades x 1 , x 2 , x 3 , x 4 , x 5 of estimates, and a significance of each grade is taken to be equal to its numerical value: w1 = 1, w2 = 2, w3 = 3, w4 = 4, w5 = 5. Then, according to Table 4.1, we get: v(O1 ) = 36, v(O2 ) = 17, v(O3 ) = 16, v(O4 ) = 33, v(O5 ) = 33, v(O6 ) = 36, v(O7 ) = 22, v(O8 ) = 30, v(O9 ) = 17, v(O10 ) = 36. Classification of objects by a sum of estimates of the expert 1 is:

P ∑ ⇔ [(O1 , O6 , O10 ), (O4 , O5 ), O8 ], [O7 , (O2 , O9 ), O3 ].

(4.28)

Here, objects that have the same sums of estimates are enclosed in round brackets. Note that the total scores of schoolchildren in academic subjects during the Unified State Examination in Russia are determined in a similar way. The classification P M (4.26) of multi-attribute objects according to estimates of the expert 1, formed by the MASKA method, coincides generally with the classification P K (4.8) obtained using the KLAVA-HI method for hierarchical clustering. However, if we take into account the lexicographic preference of objects, then the order of their distribution by clusters in the classification PɅ (4.27) does not coincide with the distribution within classes in the classification P K (4.8) by the KLAVA-HI method. Places of objects in the classification P ∑ (4.28) by a sum of estimates and in the classification PK differ slightly. In the classification P Ʌ , only one object O10 is the best, the objects O1 and O6 are a little bit worse. In the classification P ∑ , there are three best objects O1 , O6 , O10 . These results differ

4.4 Demonstrative Example: Method MASKA

105

from the classification P K , where two objects O1 and O6 are the best. The object O9 is the worst in the classification P Ʌ , and the object O3 is the worst in the classification P ∑ . The last result coincides with the classification P K . It is also quite simple to solve tasks of collective ordinal classification of objects described by many quantitative and/or qualitative attributes, when objects are present in several different versions, several classes of solutions are given, and individual decision rules for the initial sorting of objects are inconsistent or contradictory. Consider a more complicated case when objects O1 ,…, O10 are evaluated by two equally competent experts (c = c = 1) upon criteria K 1 ,…, K 8 , which have different scales X l = {x l 1 ,…, xlhl }, l = 1,…, 8 with the same estimate grades as above. Now the object Oi , i = 1,…, 10 exists in two versions and is represented by a sum of multisets into the form (4.11): ( ) { ( ) Ai =(Ai + Ai = ( k Ai) x11 ◦ x11 , . . . , k Ai x15 ◦ x15 ; . .}. ; ) k Ai x81 ◦ x81 , . . . , k Ai x85 ◦ x85 ; k Ai (ra ) ◦ ra , k Ai (rb ) ◦ rb over the extended set X ' = {x 1 1 ,…, x 1 5 ;…; x 8 1 ,…, x 8 5 ; r a , r b } of meaningful and sorting attributes. ' || The || collection of objects and their attributes are described by the matrix H = ||ki j || (Table 4.3), which is an extended decision table H (Table 3.5). 10×(40+2) Elements of matrix in the columns xlel are numbers k Ai (xlel ) of experts who gave the corresponding estimate to an object for each attribute K l . Elements in the columns r a and r b are numbers k Ai (r a ) and k Ai (r b ) of experts who assigned an object to the class Da or class Db . Table 4.3 Decision table H' O\X '

x1 1

x1 2

x1 3

x1 4

x1 5

x2 1

x2 2

x2 3

x2 4

x2 5

x1 1

x3 2

x3 3

x3 4

x3 5

x4 1

x4 2

x4 3

x4 4

x4 5

A1

0

0

0

1

1

0

0

0

0

2

0

0

0

1

1

0

0

0

0

2

A2

0

0

1

1

0

1

1

0

0

0

1

1

0

0

0

2

0

0

0

0

A3

2

0

0

0

0

1

1

0

0

0

0

0

2

0

0

2

0

0

0

0

A4

0

0

0

1

1

0

0

1

1

0

0

1

1

0

0

0

0

0

1

1

A5

0

0

0

1

1

0

0

0

1

1

0

0

1

1

0

0

0

0

2

0

A6

0

0

0

1

1

0

0

0

0

2

0

0

0

2

0

0

0

0

2

0

A7

0

0

1

1

0

1

1

0

0

0

1

1

0

0

0

0

0

1

1

0

A8

0

0

0

1

1

0

0

0

1

1

0

0

0

1

1

0

1

1

0

0

A9

0

0

1

1

0

0

1

1

0

0

0

1

1

0

0

1

1

0

0

0

A10

0

0

1

0

1

0

0

0

1

1

0

0

1

1

0

0

0

0

1

1

O\X '

x5 1

x5 2

x5 3

x5 4

x5 5

x6 1

x6 2

x6 3

x6 4

x6 5

x7 1

x7 2

x7 3

x7 4

x7 5

x8 1

x8 2

x8 3

x8 4

x8 5

ra

rb

A1

0

0

0

2

0

0

0

0

1

1

0

0

0

2

0

0

0

0

0

2

2

0

A2

0

0

1

1

0

0

1

1

0

0

0

1

1

0

0

0

2

0

0

0

0

2

A3

0

0

0

1

1

1

1

0

0

0

2

0

0

0

0

0

0

1

1

0

0

2

A4

0

0

0

2

0

0

0

0

0

2

0

0

1

1

0

0

0

0

1

1

2

0

A5

0

0

0

2

0

0

0

0

1

1

0

0

0

1

1

0

0

0

2

0

2

0

A6

0

0

0

2

0

0

0

0

1

1

0

0

0

0

2

0

0

0

1

1

2

0

A7

0

1

1

0

0

0

0

1

1

0

1

1

0

0

0

0

1

1

0

0

0

2

A8

0

0

1

1

0

0

0

0

1

1

0

0

0

1

1

0

0

1

1

0

2

0

A9

0

1

1

0

0

0

0

2

0

0

0

1

1

0

0

0

0

2

0

0

0

2

A10

0

1

1

0

0

0

0

0

1

1

0

1

0

0

1

0

0

0

2

0

1

1

106

4 Group Classification of Multi-attribute Objects

Let us introduce the collective rules of votes’ majority (4.12), (4.13), which look like this: “An object is included in the more preferable class Da , if the majority of experts voted for this, that is k Ai (r a ) ≥ k Ai (r b ). Otherwise, an object is included in the less preferred class Db ”. According to these rules, objects O1 , O4 , O5 , O6 , O8 , O10 belong to the more preferable class Da , and objects O2 , O3 , O7 , O9 belong to the less preferable class Db . Note, however, that the individual decision rules for sorting of the object O10 , generally speaking, are contradictory, since the experts’ votes were divided equally. Although formally the object O10 was included in the class Da . The classes Da and Db are specified by multisets into the form (4.15): { C a = 0 ◦ x11 , 0 ◦ x12 , 1 ◦ x13 , 5 ◦ x14 , 6 ◦ x15 ; 0 ◦ x21 , 0 ◦ x22 , 1 ◦ x23 , 4 ◦ x24 , 7 ◦ x25 ; 0 ◦ x31 , 1 ◦ x32 , 3 ◦ x33 , 6 ◦ x34 , 2 ◦ x35 ; 0 ◦ x41 , 1 ◦ x42 , 1 ◦ x43 , 6 ◦ x44 , 4 ◦ x45 ; 0 ◦ x51 , 1 ◦ x52 , 2 ◦ x53 , 9 ◦ x54 , 0 ◦ x55 ; 0 ◦ x61 , 0 ◦ x62 , 0 ◦ x63 , 5 ◦ x64 , 7 ◦ x65 ; 0 ◦ x71 , 1 ◦ x72 , 1 ◦ x73 , 5 ◦ x74 , 5 ◦ x75 ; 0 ◦ x81 , 0 ◦ x82 , 1 ◦ x83 , 7 ◦ x84 , 4 ◦ x85 ; 11 ◦ ra , 1 ◦ rb } { Cb = 2 ◦ x11 , 0 ◦ x12 , 3 ◦ x13 , 3 ◦ x14 , 0 ◦ x15 ; 3 ◦ x21 , 4 ◦ x22 , 1 ◦ x23 , 0 ◦ x24 , 0 ◦ x25 ; 2 ◦ x31 , 3 ◦ x32 , 3 ◦ x33 , 0 ◦ x34 , 0 ◦ x35 ; 5 ◦ x41 , 1 ◦ x42 , 1 ◦ x43 , 1 ◦ x44 , 0 ◦ x45 ; 0 ◦ x51 , 2 ◦ x52 , 3 ◦ x53 , 2 ◦ x54 , 1 ◦ x55 ; 1 ◦ x61 , 2 ◦ x62 , 4 ◦ x63 , 1 ◦ x64 , 0 ◦ x65 ; 3 ◦ x71 , 3 ◦ x72 , 2 ◦ x73 , 0 ◦ x74 , 0 ◦ x75 ; 0 ◦ x81 , 3 ◦ x82 , 4 ◦ x83 , 1 ◦ x84 , 0 ◦ x85 ; 0 ◦ ra , 8 ◦ rb }. || multisets C a and C b form rows of the aggregated decision table L = || The || ' || (Table 4.4), elements of which are sums of elements of the decision ||ki j || 2×(40+2)

table H' (Table 4.3) for the corresponding columns xlel , r a , r b . Construct categorical and substantial multisets || help of Algorithm A, using || with || || . Categorical multisets the inverted aggregated decision table L−1 = ||k 'ji || (40+2)×2

are as follows: Table 4.4 Aggregated decision table L D\X '

x1 1

x1 2

x1 3

x1 4

x1 5

x2 1

x2 2

x2 3

x2 4

x2 5

x1 1

x3 2

x3 3

x3 4

x3 5

x4 1

x4 2

x4 3

x4 4

x4 5

Ca

0

0

1

5

6

0

0

1

4

7

0

1

3

6

2

0

1

1

6

4

Cb

2

0

3

3

0

3

4

1

0

0

2

3

3

0

0

5

1

1

1

0

D\X '

x5 1

x5 2

x5 3

x5 4

x5 5

x6 1

x6 2

x6 3

x6 4

x6 5

x7 1

x7 2

x7 3

x7 4

x7 5

x8 1

x8 2

x8 3

x8 4

x8 5

ra

rb

Ca

0

1

2

9

0

0

0

0

5

7

0

1

1

5

5

0

0

1

7

4

11

1

Cb

0

2

3

2

1

1

2

4

1

0

3

3

2

0

0

0

3

4

1

0

0

8

4.4 Demonstrative Example: Method MASKA

Ra = {11 ◦ ya , 0 ◦ yb },

107

Rb = {1 ◦ ya , 8 ◦ yb }.

The distance between them is d(Ra , Rb ) = |11 − 1| + |0 − 8| = 18. Substantial multisets are as follows: Q 11 = {0 ◦ ya , 2 ◦ yb }, Q 12 = {0 ◦ ya , 0 ◦ yb }, Q 13 = {1 ◦ ya , 3 ◦ yb }, Q 14 = {5 ◦ ya , 3 ◦ yb }, Q 15 = {6◦ ya , 0◦ yb }; Q 21 = {0 ◦ ya , 3 ◦ yb }, Q 22 = {0 ◦ ya , 4 ◦ yb }, Q 23 = {1 ◦ ya , 1 ◦ yb }, Q 24 = {4 ◦ ya , 0 ◦ yb }, Q 25 = {7 ◦ ya , 0 ◦ yb }; Q 31 = {0 ◦ ya , 2 ◦ yb }, Q 32 = {1 ◦ ya , 3 ◦ yb }, Q 33 = {3 ◦ ya , 3 ◦ yb }, Q 34 = {6◦ ya , 0 ◦ yb }, Q 35 = {2 ◦ ya , 0 ◦ yb }; Q 41 = {0 ◦ ya , 5◦ yb }, Q 42 = {1 ◦ ya , 1 ◦ yb }, Q 43 = {1 ◦ ya , 1 ◦ yb }, Q 44 = {6◦ ya , 1◦ yb }, Q 45 = {4◦ ya , 0◦ yb }; Q 51 = {0 ◦ ya , 0 ◦ yb }, Q 52 = {1 ◦ ya , 2 ◦ yb }, Q 53 = {2 ◦ ya , 3 ◦ yb }, Q 54 = {9 ◦ ya , 2◦ yb }, Q 55 = {0 ◦ ya , 1◦ yb }; Q 61 = {0 ◦ ya , 1 ◦ yb }, Q 62 = {0 ◦ ya , 2 ◦ yb }, Q 63 = {0 ◦ ya , 4 ◦ yb }, Q 64 = {5 ◦ ya , 1 ◦ yb }, Q 65 = {7 ◦ ya , 0 ◦ yb }; Q 71 = {0 ◦ ya , 3 ◦ yb }, Q 72 = {1 ◦ ya , 3 ◦ yb }, Q 73 = {1 ◦ ya , 2 ◦ yb }, Q 74 = {5◦ ya , 0◦ yb }, Q 75 = {5◦ ya , 0 ◦ yb }; Q 81 = {0 ◦ ya , 0 ◦ yb }, Q 82 = {0 ◦ ya , 3 ◦ yb }, Q 83 = {1 ◦ ya , 4 ◦ yb }, Q 84 = {7 ◦ ya , 1 ◦ yb }, Q 85 = {4 ◦ ya , 0 ◦ yb }. Let us find the classifying attributes for each criterion K l , l = 1,…, 8 by solving the corresponding optimization problems (4.16). Among all possible combinations of pairs of substantial multisets, the following pairs of multisets Qla *, Qlb * are at the maximum distances: Q ∗1a = Q 13 + Q 14 + Q 15 = {12 ◦ ya , 6 ◦ yb }, ∗ }, {0 ) Q 11 + Q 12 = ◦ ya , 2 ◦ yb ( ∗ Q 1b∗ = d Q 1a , Q 1b = |12 − 0| + |6 − 2| = 12 + 4 = 16; Q ∗2a = Q 24 + Q 25 = {11 ◦ ya , 0 ◦ yb }, = Q)21 + Q 22 + Q 23 = {1 ◦ ya , 8 ◦ yb }, Q ∗2b = |11 − 1| + |0 − 8| = 10 + 8 = 18;

∗ ( Q∗2b d Q 2a ,

Q ∗3a = Q 33 + Q 34 + Q 35 = {11 ◦ ya , 3 ◦ yb }, ∗ }, {1 ( ∗ Q 3b∗ = ) Q 31 + Q 32 = ◦ ya , 5 ◦ yb d Q 3a , Q 3b = |11 − 1| + |3 − 5| = 10 + 2 = 12;

108

4 Group Classification of Multi-attribute Objects

Q ∗4a = Q 44 + Q 45 = {10 ◦ ya , 1 ◦ yb }, ∗ }, {2 ( Q∗4b = ∗Q)41 + Q 42 + Q 43 = ◦ ya , 7 ◦ yb d Q 4a , Q 4b = |10 − 2| + |1 − 7| = 8 + 6 = 14; Q ∗5a = Q 53 + Q 54 + Q 55 = {11 ◦ ya , 6 ◦ yb }, ∗ }, {1 ( ∗ Q 5b∗ =) Q 51 + Q 52 = ◦ ya , 2 ◦ yb d Q 5a , Q 5b∗ = |11 − 1| + |6 − 2| = 10 + 4 = 14; Q ∗6a = Q 64 + Q 65 = {12 ◦ ya , 1 ◦ yb }, = Q)61 + Q 62 + Q 63 = {0 ◦ ya , 7 ◦ yb }, Q ∗6b = |12 − 0| + |1 − 7| = 12 + 6 = 18;

∗ ( Q∗6b d Q 6a ,

Q ∗7a = Q 74 + Q 75 = {10 ◦ ya , 0 ◦ yb }, ∗ }, {2 ( Q∗7b = ∗Q)71 + Q 72 + Q 73 = ◦ ya , 8 ◦ yb d Q 7a , Q 7b = |10 − 2| + |0 − 8| = 8 + 8 = 16; Q ∗8a = Q 84 + Q 85 = {11 ◦ ya , 1 ◦ yb }, = Q)81 + Q 82 + Q 83 = {1 ◦ ya , 4 ◦ yb }, Q ∗8b∗ = |11 − 1| + |1 − 7| = 10 + 6 = 16.

∗ ( Q∗ 8b d Q 8a ,

Quality of objects’ decomposition into the classes Da and Db for each group of meaningful attributes is determined by an approximation degree V l = d(Qla *, Qlb *)/d(Ra , Rb ), which is, respectively, equal to V 2 = V 6 = 1, V 1 = V 7 = V 8 = 8/9, V 4 = V 5 = 7/9, V 3 = 6/9. We shall assume that the level V 0 = 6/9 sets an acceptable quality of objects’ classification, that is, all meaningful attributes are the classifying attributes. The classifying attributes, characteristic of the class Da , form a subset { } { } { } { } X a∗ = x25 , x24 ∪ x65 , x64 ∪ x15 , x14 , x13 ∪ x75 , x74 ∪ } { } { } { } { ∪ x85 , x84 ∪ x45 , x44 ∪ x55 , x54 , x53 ∪ x35 , x34 , x33 ,

(4.29)

and attributes, characteristic of the class Db , form a subset } { } { } { } { X b∗ = x21 , x22 , x23 ∪ x61 , x62 , x63 ∪ x11 , x12 ∪ x71 , x72 , x73 ∪ { } { } { } { } ∪ x81 , x82 , x83 ∪ x41 , x42 , x43 ∪ x51 , x52 ∪ x31 , x32 .

(4.30)

Here, subsets (granules) X lp *, p = a, b of the classifying attributes are ordered by their significance. The most significant criteria are K 2 and K 6 , for which the approximation degree of individual sorting rules is maximum and equal to 1. Selecting these criteria as the main ones for classifying multi-attribute objects, we obtain the following generalized group decision rules (4.19), (4.20) in natural language: “If an object has the estimates ‘5/excellent’ or ‘4/good’ upon criteria K 2 and K 6 , then an object is included in the more preferable class Da ”.

4.4 Demonstrative Example: Method MASKA

109

“If an object has the estimates ‘1/very bad’, ‘2/bad’ or ‘3/satisfactory’ upon criteria K 2 and K 6 , then an object is included in the less preferable class Db ”. Thus, objects O1 , O4 , O5 , O6 , O8 , O10 belong to the class Da , and objects O2 , O3 , O7 , O9 belong to the class Db . As previously, the classes Da and Db are nominal and become ordinal if estimate grades on the scales of criteria K 1 ,…, K 8 will be ordered. The mentioned generalized decision rules for the group classification of multiattribute objects concretize more simple decision rules for classifying objects according to the majority of votes and take into account not the total number of estimates for all criteria, but estimates for the most significant criteria. If necessary, we can expand these generalized decision rules, including in them classifying attributes for other less significant criteria. We shall correct the generalized decision rules and, using Algorithm B, construct the improved group rules for classifying multi-attribute objects. These rules allow us to find correctly and incorrectly classified objects, as well as to identify conflicting individual expert rules for sorting of objects. Let us consider all combinations of the most significant classifying attributes x 2 5 , x 2 4 , x 6 5 , x 6 4 , which determine an object membership to the class Da , with other classifying attributes included in the subset X a * (4.29). For each classifying attribute, for a pair, triple, and so on of classifying attributes, find correctly and incorrectly classified objects and select the attributes xueu ∗ ∈ X a *, xvev ∗ ∈ X a *,…, which provide the maximum difference N a − N ac between numbers N a of correctly and N ac of incorrectly classified objects. The obtained results are presented in Table 4.5, where the attributes corresponding to the maximum difference N a − N ac are marked in bold. As follows from Table 4.5, two attributes x 6 5 * and x 1 5 * provide the largest number N a = 6 of correctly classified objects and the smallest number N ac = 0 of incorrectly classified objects included in the more preferable class Da that was built earlier. These attributes are included in the improved group rule (4.21) to form the corrected class Da \Dac of unconditionally preferable objects. In the demonstrative example, all objects O1 , O4 , O5 , O6 , O8 , O10 are correctly classified by the attributes x 6 5 * and x 1 5 *. In a like manner, consider all combinations of the most significant classifying attributes x 2 1 , x 2 2 , x 2 3 , x 6 1 , x 6 2 , x 6 3 , which determine an object membership to the class Db , with other classifying attributes included in the subset X b * (4.30). Find correctly and incorrectly classified objects and select the attributes x u eu* ∈ X b *, x v ev* ∈ X b *,…, which provide the maximum difference N b − N bc between numbers N b of correctly and N bc of incorrectly classified objects. The obtained results are presented in Table 4.6, where the attributes corresponding to the maximum difference N b − N bc are marked in bold. As follows from Table 4.6, only one attribute x 2 2 * provides the largest number N b = 4 of correctly classified objects and the smallest number N bc = 0 of incorrectly classified objects included in the less preferable class Db that was built earlier. This attribute is included in the improved group rule (4.22) to form the corrected class Db \Dbc of unconditionally non-preferable objects. 
In the demonstrative example, all objects O2 , O3 , O7 , O9 are correctly classified by the attribute x 2 2 *.

110

4 Group Classification of Multi-attribute Objects

Table 4.5 Attribute combinations for assigning an object into the class Da \Dac Attributes x2 5 x2 4 x6 5 * x6 4 x6 5 , x2 5 x6 5 , x2 4 x6 5 *, x1 5 * x 6 5 *, x 1 4 x 6 5 *, x 1 3 x 6 5 *, x 7 5 x 6 5 *, x 7 4 x 6 5 *, x 8 5 x 6 5 *, x 8 4 x 6 5 *, x 4 5 x 6 5 *, x 4 4 x 6 5 *, x 5 5 x 6 5 *, x 5 4 x 6 5 *, x 5 3 x 6 5 *, x 3 5 x 6 5 *, x 3 4 x 6 5 *, x 3 3 x 6 5 *, x 1 5 *, x 2 5 x 6 5 *, x 1 5 *, x 2 4 x 6 5 *, x 1 5 *, x 7 5 x 6 5 *, x 1 5 *, x 7 4 x 6 5 *, x 1 5 *, x 8 5 x 6 5 *, x 1 5 *, x 8 4 x 6 5 *, x 1 5 *, x 4 5 x 6 5 *, x 1 5 *, x 4 4 x 6 5 *, x 1 5 *, x 5 5 x 6 5 *, x 1 5 *, x 5 4 x 6 5 *, x 1 5 *, x 5 3 x 6 5 *, x 1 5 *, x 3 5 x 6 5 *, x 1 5 *, x 3 4 x 6 5 *, x 1 5 *, x 3 3

Correctly classified objects

Incorrectly classified objects

Na

N ac

N a − N ac

O1 , O5 , O6 , O8 , O10

O4

5

1

4

O4 , O5 , O8 , O10

O1 , O6

4

2

2

6

0

6

O4

5

1

4

O1 , O4 , O5 , O6 , O8 , O10 O1 , O5 , O6 , O8 , O10 O1 , O5 , O6 , O8 , O10

O4

5

1

4

O4 , O5 , O8 , O10

O1 , O6

4

2

2

6

0

6

O10

5

1

4

O1 , O4 , O5 , O6 , O8 , O10 O1 , O4 , O5 , O6 , O8 , O10

O1 , O4 , O5 , O6 , O8

1

5

– 4

O5 , O6 , O8 , O10

O1 , O4

4

2

2

O1 , O4 , O5 , O8

O6 , O10

4

2

2

O1 , O4 , O6

O5 , O8 , O10

3

3

0

O4 , O5 , O6 , O8 , O10

O1

5

1

4

O1 , O4 , O10

O5 , O6 , O8

3

3

0

O4 , O5 , O6 , O10

O1 , O8

4

2

2

O1 , O4 , O5 , O6 , O8 , O10

0

6

– 6

O1 , O4 , O5 , O6 , O8

O10

5

1

4

O8 , O10

O1 , O4 , O5 , O6

2

4

– 2

O1 , O8

O4 , O5 , O6 , O10

2

4

– 2

O1 , O5 , O6 , O8 , O10

O4

5

1

4

O4 , O5 , O10

O1 , O6 , O8

3

3

0

O1 , O5 , O6 , O8 , O10

O4

5

1

4

O4 , O5 , O8 , O10

O1 , O6

4

2

2

O5 , O6 , O8 , O10

O1 , O4

4

2

2

O1 , O4 , O5 , O8

O6 , O10

4

2

2

O1 , O4 , O6

O5 , O8 , O10

3

3

0

O4 , O5 , O6 , O8 , O10

O1

5

1

4

O1 , O4 , O10

O5 , O6 , O8

3

3

0

O4 , O5 , O6 , O10

O1 , O8

4

2

2

O1 , O4 , O5 , O6 , O8 , O10

0

6

– 6

O1 , O4 , O5 , O6 , O8

O10

5

1

4

O8 , O10

O1 , O4 , O5 , O6

2

4

– 2

O1 , O8

O4 , O5 , O6 , O10

2

4

– 2

O1 , O5 , O6 , O8 , O10

O4

5

1

4

O4 , O5 , O10

O1 , O6 , O8

3

3

0

The improved decision rules for the group classification of multi-attribute objects by assessments of two experts are written in natural language as follows: “If an object has the estimate ‘5/excellent’ upon the criteria K 6 and K 1 , then an object belongs to the more preferable class Da ”.

4.4 Demonstrative Example: Method MASKA

111

Table 4.6 Attribute combinations for assigning an object into the class Db \Dbc Attributes

Correctly classified objects

Incorrectly classified objects

Nb

N bc

N b − N bc

x2 1

O2 , O3 , O7

O9

3

1

2

x2 2 *

O2 , O3 , O7 , O9

4

0

4

x2 3

O9

O2 , O3 , O7

1

3

– 2

x6 1

O3

O2 , O7 , O9

1

3

– 2

x6

2

x6 3 x 2 2 *, x 6 1 x2

2 *,

O7 , O9

2

2

0

O3

3

1

2

O3

O2 , O7 , O9

1

3

– 2

O2 , O3

O7 , O9

2

2

0

x 2 2 *, x 6 3

O2 , O7 , O9

O3

3

1

2

x 2 2 *, x 1 1

O3

O2 , O7 , O9

1

3

– 2

O2 , O3 , O7 , O9

0

4

– 4

x 2 2 *, x 7 1

O3 , O7

O2 , O9

2

2

0

x 2 2 *, x 7 2

O2 , O7 , O9

O3

3

1

2

x2

x2

2 *,

2 *,

x6

2

O2 , O3 O2 , O7 , O9

x1

x7

2

3

O2 , O9

x 2 2 *, x 8 1 x 2 2 *, x 8 2

O2 , O7

2

2

0

0

4

– 4

O3 , O9

2

2

0

3

O3 , O7 , O9

O2

3

1

2

x 2 2 *, x 4 1

O2 , O3 , O9

O7

3

1

2

x 2 2 *, x 4 2

O9

O2 , O3 , O7

1

3

– 2

x2

x2

2 *,

O3 , O7 O2 , O3 , O7 , O9

2 *,

x8

x4

3

O7

x 2 2 *, x 5 1 x 2 2 *, x 5 2 x2

2 *,

x3

1

x 2 2 *, x 3 2

O7 , O9

O2 , O3 , O9

1

3

– 2

O2 , O3 , O7 , O9

0

4

– 4

O2 , O3

2

2

0

O2 , O7

O3 , O9

2

2

0

O2 , O7 , O9

O3

3

1

2

“If an object has the estimate ‘2/bad’ upon the criterion K 2 , then an object belongs to the less preferable class Db ”. Thus, objects O1 , O4 , O5 , O6 , O8 , O10 are included in the class Da , and objects O2 , O3 , O7 , O9 are included in the class Db . The group classification of objects by assessments of two experts looks like this: gr PM ⇔ [O1 , O4 , O5 , O6 , O8 , O10 ], [O2 , O3 , O7 , O9 ].

(4.31)

As above, the classes Da and Db are nominal and become ordinal if estimate grades on the scales of criteria K 1 ,…, K 8 will be ordered. The classification P M gr (4.31) of multi-attribute objects, that is formed by the MASKA method according to assessments of two experts, coincides, in general, with the classification P M (4.26) by assessments of the expert 1 and with the classification P K gr (4.7) obtained using the KLAVA-HI method for hierarchical clustering.

112

4 Group Classification of Multi-attribute Objects

To arrange objects within the classes, we shall attract, as before, additional information about preferences of experts, assuming for simplicity that preferences of each expert are specified in the same way as in the case of classification of objects according to assessments of the expert 1. When using the lexicographic ordering of objects O1 ,…, O10 we obtain the following group classification of objects: gr PɅ ⇔ [(O1 , O6 ), (O5 , O8 , O10 ), O4 ], [O9 , (O2 , O7 ), O3 ].

(4.32)

Here, objects that occupy the same place in the class are enclosed in round brackets. When preference ∑ of the object Oi is determined by a weighted sum of estimate grades v(Oi ) = 5e=1 we v e (Oi ), the value functions of objects O1 ,…, O10 calculated according to Table 4.3 are v(O1 ) = 75, v(O2 ) = 36, v(O3 ) = 34, v(O4 ) = 65, v(O5 ) = 67, v(O6 ) = 71, v(O7 ) = 40, v(O8 ) = 64, v(O9 ) = 44, v(O10 ) = 62. The group classification of objects, based on sums of estimates of two experts, is as follows: gr P ∑ ⇔ [O1 , O6 , O5 , (O4 , O8 ), O10 ], [O9 , O7 , O2 , O3 ].

(4.33)

Here, objects that have almost the same sums of estimates, differing by 1, are enclosed in round brakets. g In the classification P Ʌr (4.32), there are two best objects O1 and O6 . In the gr classification P ∑ (4.33), there are three best objects O1 , O6 , O6 . These results differ from the classification P K gr (4.7), where two objects O4 and O8 are the best. The gr gr distribution of more preferable objects in the classifications P Ʌ and P ∑ does not coincide with the classification P K gr . The object O3 is the worst in the classifications gr gr PɅ and P ∑ that coincides with the classification P K gr . As follows from Tables 4.5 and 4.6, in the group classification of objects P M gr (4.31), that was built on the attributes x 6 5 *, x 1 5 * and x 2 2 *, there are no incorrectly classified objects in the classes Da and Db . The class Dc = Dac ∪ Dbc of objects with conflicting individual expert rules is empty. However, there is one contradiction between the individual expert rules. So, opinions of the experts 1 and 2 differed when sorting of the object O10 (see Table 4.3). Consider in more detail the individual expert rules for sorting of the object O10 represented by the multisets A10 , A10 , which describe two versions of this object (Table 4.7). The rule of the expert 1 fully coincides both the generalized and improved decision rules for group classification: the object O10 belongs to the class Da (r a = 1; x 6 5 = 1, x 2 5 = 1, x 1 5 = 1 and x 2 2 = 0). The rule of the expert 2 is controversial. This expert included the object O10 in the class Db (r b = 1). But according to the generalized rule, the object O10 has to belong to the class Da (x 2 4 = 1, x 6 4 = 1). According to the improved rule, the object O10 is not included in the class Da (x 6 5 = 0, x 1 5 = 0) or in the class Db (x 2 2 = 0).

References

113

Table 4.7 Versions of object O10 represented by multisets O\X '

x1 1

x1 2

x1 3

x1 4

x1 5

x2 1

x2 2

x2 3

x2 4

x2 5

x1 1

x3 2

x3 3

x3 4

x3 5

x4 1

x4 2

x4 3

x4 4

x4 5

A10

0

0

0

0

1

0

0

0

0

1

0

0

0

1

0

0

0

0

0

1

A10

0

0

1

0

0

0

0

0

1

0

0

0

1

0

0

0

0

0

1

0

O\X '

x5 1

x5 2

x5 3

x5 4

x5 5

x6 1

x6 2

x6 3

x6 4

x6 5

x7 1

x7 2

x7 3

x7 4

x7 5

x8 1

x8 2

x8 3

x8 4

x8 5

ra

rb

A10

0

0

1

0

0

0

0

0

0

1

0

0

0

0

1

0

0

0

0

1

1

0

A10

0

1

0

0

0

0

0

0

1

0

0

1

0

0

0

0

0

0

1

0

0

1

Note that information about whether object attributes are numeric, symbolic, or verbal was not used anywhere in constructing of decision rules. In the MASKA method, all multisets representing multi-attribute objects are formed only by adding numbers of object estimates by experts and upon criteria. Therefore, the results of object classification do not depend on the order of estimates’ summation. Thus, the MASKA method for group classification of objects according to aggregated decision rules satisfies the principle of invariance of individual preferences’ aggregation.

References 1. 2. 3. 4. 5. 6.

7.

8.

9. 10. 11.

12.

13. 14.

Anderberg, M.R.: Cluster Analysis for Applications. Academic Press, New York (1973) Hartigan, J.A.: Clustering Algorithms. Wiley, New York (1975) Jambu, M.: Classification automatique pour l’analyse des donnees. Bordas, Paris (1978) Mirkin, B.: Mathematical Classification and Clustering. Kluwer Academic Publishers, Dordrecht (1996) Miyamoto, S.: Cluster Analysis as a Tool of Interpretation of Complex Systems. Working paper WP-87–41. IIASA, Laxenburg (1987) Petrovsky, A.B.: Cluster analysis in multiset spaces. In: Information Systems Technology and its Applications. Lecture notes in Informatics, vol. 30, pp. 109–119. Gesellschaft für Informatik, Bonn (2003) Petrovsky, A.B.: Mnogokriterial’noe prinyatie resheniy po protivorechivym dannym: podkhod teorii mul’timnozhestv (Multicriteria decision making on contradictory data: an approach of multiset theory). Informatsionnye tekhnologii i vychislitel’nye sistemy (Information technologies and computing systems) 2, 56–66 (2004). (in Russian) Petrovsky, A.B.: Inconsistent preferences in verbal decision analysis. In: Papers from IFIP WG8.3 International Conference on Creativity and Innovation in Decision Making and Decision Support, vol. 2, pp. 773–789. Ludic Publishing Ltd., London (2006) Petrovsky, A.B.: Multiple criteria decision making: discordant preferences and problem description. J. Syst. Sci. Syst. Eng. 16(1), 22–33 (2007) Petrovsky, A.B.: Group verbal decision analysis. In: Encyclopedia of Decision Making and Decision Support Technologies, vol. 1, pp. 418–425. IGI Global, Hershey, New York (2008) Petrovsky, A.B.: Clustering and sorting multi-attribute objects in multiset metric space. In: The Forth International IEEE Conference on Intelligent Systems. Proceedings, vol. 2, pp. 11-44– 11-48. IEEE, Sofia (2008) Petrovsky, A.B.: Group multiple criteria decision making: multiset approach. In: Recent Developments and New Directions in Soft Computing. Studies in Fuzziness and Soft-Computing, vol. 317, pp. 19–33. Switzerland Springer International Publishing (2014) Petrovsky, A.B.: Group Verbal Decision Analysis. Nauka, Moscow (2019).(in Russian) Petrovsky, A.B.: Structuring techniques in multiset spaces. In: Multiple Criteria Decision Making, pp. 174–184. Springer-Verlag, Berlin (1997)

114

4 Group Classification of Multi-attribute Objects

15. Petrovsky, A.B.: Prostranstva mnozhestv i mul’timnozhestv (Spaces of sets and multisets). Editorial URSS, Moscow (2003).(in Russian) 16. Petrovsky, A.B.: Methods for the group classification of multi-attribute objects (Part 1). Sci. Tech. Inf. Process. 37(5), 346–356 (2010) 17. Petrovsky, A.B.: Group classification of objects with qualitative attributes: multiset approach. Stud. Comput. Intell. 299, 73–97 (2010) 18. Petrovsky, A.B.: Teoriya prinyatiya resheniy (Theory of decision making). Publishing Center “Academy”, Miscow (in Russian) (2009) 19. Podinovskiy, V.V.: Idei i metody vazhnosti kriteriev v mnogokriterial’nykh zadachakh prinaytiya resheniy (Ideas and methods of the criteria importance in multicriteria decision making problems). Nauka, Moscow (2019).(in Russian) 20. Komarova, N.A., Petrovsky, A.B.: Metod soglasovannoy gruppovoy klassifikatsii mnogopriznakovykh ob”yektov (Method of consistent group classification of multi-attribute objects). In: Podderzhka prinyatiya resheniy. Trudy Instituta sistemnogo analiza RAN. (Decision support. Proceedings of the Institute for System Analysis of the Russian Academy of Sciences). vol. 35, pp. 19–32. LKI Publishing House, Moscow (2008). (in Russian) 21. Petrovsky, A.B.: Method for approximation of diverse individual sorting rules. Informatica 12(1), 109–118 (2001) 22. Petrovsky, A.B.: Multi-attribute sorting of qualitative objects in multiset spaces. In: Multiple Criteria Decision Making in the New Millenium. Lecture Notes in Economics and Mathematical Systems, vol. 507, pp. 124–131. Springer-Verlag, Berlin (2001) 23. Petrovsky, A.B.: Multiple criteria project selection based on contradictory sorting rules. In: Information Systems Technology and its Applications. Lecture Notes in Informatics, vol. 2, pp. 199–206. Gesellshaft für Informatik, Bonn (2001) 24. Petrovsky, A.B.: Group sorting and ordering multiple criteria alternatives. In: Computational Intelligence in Decision and Control. Proceedings of the 8th International FLINS Conference, pp. 605–610. World Scientific Publisher, Singapore (2008) 25. Petrovsky, A.B.: Methods for the group classification of multi-attribute objects (Part 2). Sci. Tech. Inf. Process. 37(5), 357–368 (2010) 26. Petrovsky, A.B.: Method “Maska” for group expert classification of multi-attribute objects. Dokl. Math. 81(2), 317–321 (2010)

Chapter 5

Reducing Dimensionality of Attribute Space

This chapter suggests a new approach to reduce a dimensionality of an attribute space that is considered as a solution to the task of verbal multicriteria classification. We specify original methods for hierarchical decreasing numbers of criteria and attributes, when vectors/tuples or multisets of their numerical and/or verbal characteristics represent multi-attribute objects. Examples of methods’ applications are given.

5.1 Hierarchical Structuring Criteria and Attributes: Method HISCRA In real situations, decision makers/experts are very difficult to select the best object, rank or classify objects that are described by manifold attributes. This is because, as a rule, many objects are formally incomparable in their features. Additional difficulties arise in cases of ill-structured problems that combine quantitative and qualitative dependencies, modeling of which is either impossible in principle or very difficult. The following approaches are possible that facilitate the choice in a large space of attributes and diminish information loss. Namely, the use of psychologically correct operations of obtaining information from decision makers and experts; reduction of attribute space dimensionality. It has been experimentally established that it is easier for a person, due to peculiarities of his/her physical memory, to operate with small amounts of data, to compare objects by a small number of indicators. For this, it is quite enough to describe objects with three-seven indicators. At the same time, a person makes fewer mistakes when indicators have not numerical, but verbal scales. The results of such operations are more reliable and easier to analyze [1–4].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. B. Petrovsky, Group Verbal Decision Analysis, Studies in Systems, Decision and Control 451, https://doi.org/10.1007/978-3-031-16941-0_5

115

116

5 Reducing Dimensionality of Attribute Space

Dimensionality reduction of object description is decreasing a number of indicators that characterize properties, states or functions of objects, using data transformations in which a set of initial attributes K 1 , . . . , K n is aggregated into smaller sets of new intermediate attributes L 1 , . . . , L m and final attributes N1 , . . . , Nl [5]. Transformations of features can be formally written as K 1 , . . . , K n → L 1 , . . . , L m → . . . → N 1 , . . . , Nl .

(5.1)

} { The initial attribute K i has the scale X i = xi1 , . . . , xih i , i = 1, . . . , n, the } { g intermediate attribute L j has the scale Y j = y 1j , . . . , y j j , j = 1, . . . , m, the final } { f attribute N k has the scale Z k = z k1 , . . . , z k k , k = 1, . . . , l, l < m < n. Reducing dimensionality of attribute space is an informal multistage procedure based on knowledge, experience and intuition of a decision maker/expert who forms the rules for attributes’ transformations, establishes the structure, number, dimension and meaningful context of new indicators. Reducing a number of variables simplifies the solution to multicriteria choice problems. Methods for reducing dimensionality of attribute space depend on the features of aggregated attributes and characteristics of their scales. Almost all of applied methods for shortening a space dimensionality work with numerical attributes [5–7]. These circumstances dictate the need to create ways to reduce a dimensionality of qualitative attribute space [8, 9]. The method for hierarchical structuring criteria and attributes (HISCRA) is intended to aggregate object attributes and reduce a dimensionality of space where multi-attribute objects are represented by vectors/tuples [9–12]. In these cases, the task (5.1) has a form: X 1 × · · · × X n → Y1 × · · · × Ym → · · · → Z 1 × . . . × Z l ,

(5.2)

The dimensionality of attribute space is defined as the cardinality of the direct product of numerical or verbal attribute gradations, which are components of vectors/tuples. When reducing a space dimensionality, a representation of multi-attribute objects K 1 , . . . ), K n , let the is transformed as follows. In the space of initial attributes ( x . . . , q corresponds)to a vector/tuple , . . . , x pn , or a group object Op , p = 1, {( p1 )} (

, . . . , x p1 , . . . , x x . A vector/tuple of vectors/tuples pn p1 , . . . , x pn ( )

x p1 , . . . , x pn , s = 1, . . . , t describes a version O p of the object Op . A component

e

x pi is a value of the attribute K i that is equal to x p , e = 1, . . . , h if all attributes } { e K 1 , . . . , K n have the same rating scale X = x 1 , . . . , x h , or x pii , ei = 1, . . . , h i } { if each attribute K i has its own rating scale X i = xi1 , . . . , xih i , i = 1, . . . , n. In the reduced space of final ( attributes) N1 , . . . , Nl , the object Op will z)} a group of )vectors/tuples correspond to a vector/tuple p1 , . . . , z pl , or ( {( ) (





z p1 , . . . , z pl , . . . z p1 , . . . , z pl



. A vector/tuple z p1 , . . . , z pl , s = 1, . . . , t

5.1 Hierarchical Structuring Criteria and Attributes …

117

describes a version O p of the object Op . A component z pk is a value of the attribute e N k that is equal to z p , e = 1, . . . f if all attributes N1 , . . . , Nl have the same rating } { 1 e scale Z = z , . . . , z f , or z pkk , ek = 1, . . . , f k if each attribute N k has its own } { f rating scale Z k = z k1 , . . . , z k k , k = 1, . . . , l. When aggregating indicators, several characteristics K a , K b , . . . , K c are combined into one new characteristic (granule) L d , which we will call a composite indicator or composite criterion. Aggregation of indicators into a composite indicator is transformation (5.2) of attribute scales, which takes the form X a × X b × · · · × X c → Yd ,

(5.3)

} { where X i = xi1 , . . . , xih i is the scale of initial attribute K i , i = a, b, . . . , c, Yd = { 1 g } yd , . . . , yd d is the scale of composite indicator L d , |Yd | = gd < h a +h b +· · ·+h c = |X a × X b × · · · × X c |. We will consider the aggregation task (5.3) as a multicriteria classification task, wherein each gradation of the composite indicator scale is represented by different combinations of gradations of the initial attributes. A composite indicator scale can be formed in different ways. The simplest way to aggregate ordinal scales of verbal estimates is to combine gradations of the initial attributes into gradations of the composite indicator according to the following rule: on the scale of composite indicator, all high marks on scales of original attributes generate the single high mark, all middle marks generate the single middle mark, and all low marks generate the single low mark. The method of tuple stratification, similar to methods of vector stratification [6], is also quite simple. The tuple stratification method is based on cutting of multidimensional discrete attribute space with hyper-planes, which specify easily understandable sets of verbal scales of indicators. Each layer (stratum) consists of homogeneous tuple combinations of estimates on the scales X i of original attributes, for example, with fixed sums of grade indices, and represents any generalized grade on the scale Y d of composite indicator. A number of such cuts is by a DM/expert. The ∑ determined n h i + 1 − n. maximum possible number of layers is equal to i=1 Methods of verbal decision analysis are more complicated [3]. The ORCLASS method provides building a complete and consistent ordinal classification of multiattribute objects, which in this case are tuples of estimate grades on the scales of the original attributes that form a composite indicator. Decision classes are determined by their boundaries and correspond to gradations of the composite indicator scale. The ZAPROS method allows constructing a joint ordinal scale of a composite indicator from estimates of some original attributes and forming corresponding gradations on the scale.

118

5 Reducing Dimensionality of Attribute Space

Both methods operate on the set of all possible tuples of estimates in an attribute space, which is the Cartesian product of estimate grades on the indicator scales. A number of all possible combinations of marks is equal to .n i=1 hi . A number of generated classes is specified by a DM. Let us demonstrate with a model example how to reduce a dimensionality of attribute space using the methods mentioned above. It is necessary to generate the scale Y d of composite indicator L d from estimate gradations on the scales X a , X b , X c of initial indicators K a , K b , K c , that is, to perform the transformation X a ,{X b , X c and X a × X{b × X c → }Yd . Let each { rating scale } } Y d has {three verbal} grades: X a = a 0 , a 1 , a 2 , X b = b0 , b1 , b2 , X c = c0 , c1 , c2 , Yd = d 0 , d 1 , d 2 , where a 0 , b0 , c0 , d 0 are the high marks, a 1 , b1 , c1 , d 1 are the middle marks, a 2 , b2 , c2 , d 2 are the low marks. A set of estimates on the scale Y d of composite indicator L d is the direct product X a × X b × X c of the scales of attributes K a , K b , K c or a set of tuples (a ea , beb , cec ), ea , eb , ec = 0, 1, 2. Scales of the composite indicator L d , formed by a decision maker using the tuple stratification, ORCLASS and ZAPROS methods, are shown in Figs. 5.1, 5.2 and 5.3, respectively. For instance, the rating scale Y d of the composite indicator L d , which was constructed by the ORCLASS method, has the following verbal gradations consisting of combinations of gradations on scales of the initial indicators Ka , Kb, Kc:

a0b0c0 a2b0c0 a1b1c0 a2b1c0 a1b0c2 a2b2c0 a1b0c0 a0b2c0 a1b0c1 a2b0c1 a0b2c1 a2b0c2 a0b1c0 a0b0c2 a0b1c1 a1b2c0 a0b1c2 a0b2c2 a 0b 0c1 a 1b 1c1 0 d d1 Composite indicator Ld

a2b1c1 a2b2c1 a1b2c1 a2b1c2 a1b1c2 a1b2c2 a2b2c2 d2

Tuple layers

Fig. 5.1 Composite indicator scale built with the tuple stratification method

Upper boundary a0b0c0

Lower boundary a1b0c0 a0b1c0 a0b0c1 a0b0c1

d0

Upper boundary a2b0c0 a0b2c0 a0b0c2 a0b0c2 a0b1c1

Lower boundary a2b1c1 a1b2c1 a1b1c1 a1b1c1 a0b2c2

Upper boundary a 2b 2c1 a 2b 1c1 a 1b 2c2 a 1b 2c2

d1 Composite indicator Ld

Fig. 5.2 Composite indicator scale built with the ORCLASS method

Lower boundary a2b2c2

d2

Decision classes

5.1 Hierarchical Structuring Criteria and Attributes …

119

Fig. 5.3 Composite indicator scale built with the ZAPROS method

) ( ) ( ) ( ) ( d 0 − 0/high − a 0 , b0 , c0 , a 1 , b0 , c0 , a 0 , b1 , c0 , a 0 , b0 , c1 ; ) ( ) ( ) ( ) ( d 1 − 1/middle − a 2 , b0 , c0 , a 0 , b2 , c0 , a 0 , b0 , c2 , a 1 , b1 , c0 , ( 1 0 1) ( 0 1 1) ( 2 1 0) ( 2 0 1) ( 1 2 0) a ,b ,c , a ,b ,c , a ,b ,c , a ,b ,c , a ,b ,c , ( 1 0 2) ( 0 2 1) ( 0 1 2) ( 1 1 1) ( 2 2 0) a ,b ,c , a ,b ,c , a ,b ,c , a ,b ,c , a ,b ,c , ( 2 0 2) ( 0 2 2) ( 2 1 1) ( 1 2 1) ( 1 1 2) a ,b ,c , a ,b ,c , a ,b ,c , a ,b ,c , a ,b ,c ) ( ) ( ) ( ) ( d 2 − 2/low − a 2 , b2 , c1 , a 2 , b1 , c2 , a 1 , b2 , c2 , a 2 , b2 , c2 . Aggregation of indicators is executed progressively, step by step. At each stage, it is determined which initial characteristics should be combined into composite indicators, and which should be considered as independent final indicators. Verbal scales of composite indicators characterize desirable new features of compared objects and have a specific semantic content for a decision maker/expert. Consistently combining attributes, a DM/expert constructs acceptable intermediate and final indicators, which form a hierarchical scheme. Depending on specifics of the practical task being solved, the last level of an attribute aggregation tree may consist of several final indicators that implement the idea of multicriteria choice, or it may be a single integral index that implements the idea of holistic choice [8, 13, 14]. A tree of attribute aggregation is usually built from uniformed blocks, which are identified by a DM/expert, and, in fact, is a form of semantic interpretation and granulation of a DM preferences and/or expert knowledge. Each block of the hierarchical level r is a connected bipartite graph .r = (V , E), where V are vertices and E are arcs of graph (Fig. 5.4). The set V = X ∪Y of vertices consists of gradations of the original attribute scales X = X 1 ∪ · · · ∪ X n and gradations of the composite indicator scales Y = Y1 ∪ · · · ∪ Ym . The arcs E express rules according to which estimate tuples, that form gradations of the composite indicator scales, are arranged. Each block the level r includes some set of attributes and a single composite indicator. Estimate tuples of the initial attributes are classified objects. Gradations of the composite indicator scale are decision classes. In a block of the next hierarchical level r +1, composite indicators of the level r are considered new attributes. Tuples of the scale gradations of these composite indicators will now be new classified objects, and the scale gradations of new composite indicator will be new decision classes of the level r + 1 in the reduced attributes space. Solving a specific practical task, a DM/expert specifies the most appropriate collection of composite indicators, as well as a method or several methods for

120

5 Reducing Dimensionality of Attribute Space

X1 x11, …, x1e1 …, x1h1 X2 x21, …, x2e2 …, x2h2 …

Y1 y11, …, y1c1 …, y1g1 … … xi1, …, xiei

…, xihi

Xi xn1, …, xnen …, xnhn Xn

… ym1, …, ymcm …, ymgm Ym Fig. 5.4 Attribute aggregation block of the hierarchical level r

constructing their scales. For the problems of classification of multi-attribute objects, gradations on the scale of an integral indicator of the last hierarchical level are specified the required decision classes D1,..., Du as estimate collections for the initial attributes K 1 , . . . , K n . The HISCRA method includes the following steps. 10 . Generate a set K 1 , . . . , K n , n{≥ 2 of initial} indicators (attributes). 20 . Generate ordinal scales X i = xi1 , . . . , xih i , i = 1, . . . , n of initial indicators. 30 . Generate sets L 1 , . . . L m , . . . , N1 , . . . , Nl , l < m < n of composite indicators, which aggregate the attributes K 1 , . . . , K{n . } g 40 . Generate ordinal scales Y j = y 1j , . . . , y j j , j = 1, . . . , m, . . . , Z k = } { f z k1 , . . . , z k k , k = 1, . . . , l of composite indicators using different methods of attribute aggregation, for example, stratification of tuples W 1 ; multicriteria classification of tuples W 2 ; ranking of tuples W 3 . 50 . Build a hierarchical scheme of indicator aggregation, specifying which characteristics are combined into the intermediate indicators, and which ones are considered the final indicators.

5.2 Demonstrative Example: Method HISCRA Using the HISCRA method, let us decrease a dimensionality of the attribute space given by eight indicators (marks in the studied subjects): K 1 Mathematics, K 2 Physics, K 3 Chemistry, K 4 Biology, K 5 Geography, K 6 History, K 7 Literature, K 8

5.2 Demonstrative Example: Method HISCRA

121

Foreign{ language. All indicators K 1 , . . . , K 8 have the same five-point rating scale } X = x 1 , x 2 , x 3 , x 4 , x 5 , where x 1 is 1/very bad, x 2 is 2/bad, x 3 is 3/satisfactory, x 4 is 4/good, x 5 is 5/excellent. Estimate grades are ordered by preference as x 5 . x 4 . x 3 . x 2 . x 1 . The subject importance is the same wl = 1. Assume that a decision maker/expert believes that Mathematics and Physics are the most significant subjects for education, and therefore the original attributes K 1 and K 2 will be the final indicators. Other original attributes are aggregated into composite indicators in such way. The attributes K 3 Chemistry, K 4 Biology, K 5 Geography form a composite indicator M3 = (K 3 , K 4 , K 5 ) Natural disciplines. The attributes K 6 History, K 7 Literature, K 8 Foreign language form a composite indicator M4 = (K 6 , K 7 , K 8 ) Humanitarian disciplines. These composite indicators will also be final. To construct scales of the composite indicators, we use the method of tuple stratification, combining estimates on the scales of initial indicators, sums of which are equal to certain values, into corresponding gradations on the scale of a composite indicator M r , r = 3, 4. Verbal grades of the scale U r consist of the following estimate tuples: u r1 − 1/very bad, the mark sum is equal to 3, 4 − (1, 1, 1); (1, 1, 2), (1, 2, 1), (2, 1, 1); u r2 − 2/bad, the mark sum is equal to 5, 6, 7 − (1, 1, 3), (1, 2, 2), (1, 3, 1), (2, 1, 2), (2, 2, 1), (3, 1, 1); (1, 1, 4), (1, 2, 3), (1, 3, 2), (1, 4, 1), (2, 1, 3), (2, 2, 2), (2, 3, 1), (3, 1, 2), (3, 2, 1), (4, 1, 1); (1, 1, 5), (1, 2, 4), (1, 3, 3), (1, 4, 2), (1, 5, 1), (2, 1, 4), (2, 2, 3), (2, 3, 2), (2, 4, 1), (3, 1, 3), (3, 2, 2), (3, 3, 1), (4, 1, 2), (4, 2, 1), (5, 1, 1); u r3 − 3/satisfactory, the mark sum is equal to 8, 9, 10 − (1, 2, 5), (1, 3, 4), (1, 4, 3), (1, 5, 2), (2, 1, 5), (2, 2, 4), (2, 3, 3), (2, 4, 2), (2, 5, 1), (3, 1, 4), (3, 2, 3), (3, 3, 2), (3, 4, 1), (4, 1, 3), (4, 2, 2), (4, 3, 1), (5, 1, 2), (5, 2, 1); (1, 3, 5), (1, 4, 4), (1, 5, 3), (2, 2, 5), (2, 3, 4), (2, 4, 3), (2, 5, 2), (3, 1, 5), (3, 2, 4), (3, 3, 3), (3, 4, 2), (3, 5, 1), (4, 1, 4), (4, 2, 3), (4, 3, 2), (4, 4, 1), (5, 1, 3), (5, 2, 2), (5, 3, 1), (1, 4, 5), (1, 5, 4), (2, 3, 5), (2, 4, 4), (2, 5, 3), (3, 2, 5), (3, 3, 4), (3, 4, 3), (3, 5, 2), (4, 1, 5), (4, 2, 4), (4, 3, 3), (4, 4, 2), (4, 5, 1), (5, 1, 4), (5, 2, 3), (5, 3, 2), (5, 4, 1); u r4 − 4/good, the mark sum is equal to 11, 12, 13 − (1, 5, 5), (2, 4, 5), (2, 5, 4), (3, 3, 5),(3, 4, 4), (3, 5, 4), (4, 2, 5), (4, 3, 4), (4, 4, 3), (4, 5, 2), (5, 1, 5), (5, 2, 4), (5, 3, 3), (5, 4, 2), (5, 5, 1); (2, 5, 5), (3, 4, 5), (3, 5, 4), (4, 3, 5), (4, 4, 4), (4, 5, 3), (5, 2, 5), (5, 3, 4), (5, 4, 3), (5, 2, 2); (3, 5, 5), (4, 4, 5), (4, 5, 4), (5, 3, 5), (5, 4, 4), (5, 5, 3); u r5 − 5/excellent, the mark sum is equal to 14, 15 − (4, 5, 5), (5, 4, 5), (5, 5, 4), (5, 5, 5).


When the space dimensionality decreases, that is, X_1 × ... × X_8 → X_1 × X_2 × U_3 × U_4, multi-attribute objects are specified as follows. In the space of the initial attributes K_1, ..., K_8, the object O_p corresponds to a vector/tuple (x_p1, x_p2, x_p3, x_p4, x_p5, x_p6, x_p7, x_p8), p = 1, ..., q. In the reduced space of the indicators K_1, K_2, M_3, M_4, the object O_p will correspond to a vector/tuple (x_p1, x_p2, u_p3, u_p4).

In the original space X_1 × ... × X_8 of initial attributes, each object O_p is a point of the eight-dimensional space. The length of each vector/tuple (x_p1, ..., x_p8) is equal to 8, and each component can take one of five values of the estimate x^e. The total number of all possible combinations of components (object representations) is equal to 5^8 = 390 625. It is extremely difficult to operate with such an amount of multi-attribute objects.

In the reduced space X_1 × X_2 × U_3 × U_4 of new indicators, each object O_p is a point of the four-dimensional space. The length of each vector/tuple (x_p1, x_p2, u_p3, u_p4) will be equal to 4. The total number of all possible combinations of components (object representations) will be equal to 5^4 = 625, which is 625 times lower than 390 625, but still large. At the same time, almost all vectors/tuples, and hence objects, will remain formally incomparable. Therefore, it is again not very convenient to work with such final indicators.

Let us further reduce the dimensionality of the attribute space and combine the four indicators K_1, K_2, M_3, M_4 into a single final integral index N_0 = (K_1, K_2, M_3, M_4) Academic score. We will generate a verbal rating scale Z_0 = {z_0^1, z_0^2, z_0^3, z_0^4, z_0^5} of the index N_0 as above, using the tuple stratification method. For brevity, we will not write out all combinations of vectors/tuples that form the corresponding grades on the scale Z_0, but only indicate the rule for their formation. The scale Z_0 has the following gradations:

z_0^1 − 1/very bad, the mark sum is equal to 4, 5, 6;
z_0^2 − 2/bad, the mark sum is equal to 7, 8, 9;
z_0^3 − 3/satisfactory, the mark sum is equal to 10, 11, 12, 13, 14;
z_0^4 − 4/good, the mark sum is equal to 15, 16, 17;
z_0^5 − 5/excellent, the mark sum is equal to 18, 19, 20.

The marks of ten pupils O_1, ..., O_10 in terms of indicators of different aggregation degrees are shown in Table 5.1. Each gradation on the scale Z_0 of the integral index N_0 identifies the pupil academic score, according to which the pupils can now be compared. Thus, as follows from Table 5.1, the pupils O_1, O_6, O_10 have an excellent academic score, the pupils O_4, O_5, O_8 have a good academic score, the pupil O_7 has a satisfactory academic score, and the pupils O_2, O_3, O_9 have a bad academic score.
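Continuing the sketch above (again only as an assumed illustration that reuses the hypothetical stratify_triple helper), the academic score N_0 of a pupil can be obtained by stratifying the tuple (K_1, K_2, M_3, M_4) with the thresholds of the scale Z_0:

```python
# Illustrative sketch: second-level stratification into the integral index N_0.
def academic_score(marks8):
    """marks8: tuple of eight marks (K1..K8), each 1..5."""
    k1, k2 = marks8[0], marks8[1]
    m3 = stratify_triple(marks8[2:5])   # Chemistry, Biology, Geography
    m4 = stratify_triple(marks8[5:8])   # History, Literature, Foreign language
    total = k1 + k2 + m3 + m4
    if total <= 6:
        return 1                        # 1/very bad
    elif total <= 9:
        return 2                        # 2/bad
    elif total <= 14:
        return 3                        # 3/satisfactory
    elif total <= 17:
        return 4                        # 4/good
    return 5                            # 5/excellent

# Pupil O7 from Table 5.1: marks (4, 1, 2, 3, 3, 3, 1, 2) -> score 3/satisfactory
print(academic_score((4, 1, 2, 3, 3, 3, 1, 2)))  # 3
```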


Table 5.1 Pupil marks in the studied subjects K_1–K_8 and disciplines M_3, M_4 as vectors/tuples, pupil academic scores N_0

O     X1  X2  X3  X4  X5  X6  X7  X8    X1  X2  U3  U4    Z0
O1     4   5   4   5   4   5   4   5     4   5   4   5     5
O2     4   1   2   1   3   2   2   2     4   1   2   2     2
O3     1   1   3   1   4   1   1   4     1   1   3   2     2
O4     5   3   2   4   4   5   4   5     5   3   3   5     4
O5     4   4   4   4   4   5   4   4     4   4   4   4     4
O6     5   5   4   4   4   5   5   4     5   5   4   5     5
O7     4   1   2   3   3   3   1   2     4   1   3   2     3
O8     4   5   4   2   3   4   5   3     4   5   3   4     4
O9     3   2   3   1   3   3   2   2     3   2   2   2     2
O10    5   5   4   5   3   5   5   4     5   5   4   5     5

5.3 Hierarchical Structuring Criteria and Attributes: Modified Method HISCRA-M

We can increase the validity of a solution to a multicriteria choice task of large dimensionality if we can compare the results of object evaluation obtained for different final indicators. Thus, a decision maker/expert gets the opportunity to consider the problem from different points of view, analyze the results, and choose the most acceptable result.

The modified method HISCRA-M, as well as the HISCRA method, is intended to aggregate object attributes and reduce the dimensionality of the space where multi-attribute objects are represented by vectors/tuples [8, 9]. In the HISCRA-M method, contrary to the HISCRA method, we build not one but several hierarchical schemes of indicators, aggregating the initial attributes in different ways. Each attribute aggregation tree can be considered an individual judgment of some outside expert about generalized features of the evaluated objects. Thus, a collective evaluation of objects given by many experts appears, whose presence transforms the task under consideration into a collective choice task.

Unification of the procedure for forming composite indicators at all hierarchical levels of the aggregation tree makes it possible to significantly simplify the task of reducing the attribute space dimensionality. At each stage of aggregation, we recommend combining a small number (2 or 3, rarely 4) of indicators that have the same type of scales into a composite indicator, and increasing the number of aggregation stages.

In the HISCRA-M method, we use two principal data transformations: reduction of attribute scales and aggregation of attributes. Reduction of an attribute scale is a relatively simple transformation aimed at diminishing the number of gradations on the attribute scale. For this, several values of an object characteristic are combined into a single new gradation of the same characteristic.


Transition from the original characteristic scales to scales with a reduced number of gradations is transformation (5.2) of the attribute space, which has the form:

X_1 × ... × X_n → Q_1 × ... × Q_n,    (5.4)

where X_i = {x_i^1, ..., x_i^{h_i}} is the initial scale and Q_i = {q_i^1, ..., q_i^{d_i}} is the shortened scale of the attribute K_i, |Q_i| = d_i < h_i = |X_i|, i = 1, ..., n. When forming the shortened scales of attributes (5.4), it is desirable that they consist of a small number (2–4) of gradations that have a very definite content for a decision maker/expert.

When reducing the space dimensionality, the representation of multi-attribute objects is transformed as follows. In the space of attributes K_1, ..., K_n with original scales, the object O_p, p = 1, ..., q corresponds to a vector/tuple (x_p1, ..., x_pn), or to a group of vectors/tuples (x_p1^(1), ..., x_pn^(1)), ..., (x_p1^(t), ..., x_pn^(t)). A vector/tuple (x_p1^(s), ..., x_pn^(s)), s = 1, ..., t describes a version O_p^(s) of the object O_p. A component x_pi^(s) is a value of the attribute K_i that is equal to x_p^e, e = 1, ..., h if all attributes K_1, ..., K_n have the same rating scale X = {x^1, ..., x^h}, or to x_pi^{e_i}, e_i = 1, ..., h_i if each attribute K_i has its own rating scale X_i = {x_i^1, ..., x_i^{h_i}}, i = 1, ..., n.

In the space of attributes K_1, ..., K_n with shortened scales, the object O_p corresponds to a vector/tuple (q_p1, ..., q_pn), or to a group of vectors/tuples (q_p1^(1), ..., q_pn^(1)), ..., (q_p1^(t), ..., q_pn^(t)). A vector/tuple (q_p1^(s), ..., q_pn^(s)), s = 1, ..., t describes a version O_p^(s) of the object O_p. A component q_pi^(s) is a value of the attribute K_i that is equal to q_p^o, o = 1, ..., d if all attributes K_1, ..., K_n have the same rating scale Q = {q^1, ..., q^d}, or to q_pi^{o_i}, o_i = 1, ..., d_i if each attribute K_i has its own rating scale Q_i = {q_i^1, ..., q_i^{d_i}}.

Aggregation of several attributes K_a, K_b, ..., K_c into a composite indicator L_d is the transformation X_a × X_b × ... × X_c → Y_d (5.3) of the attribute scales, which is considered as a multicriteria classification task. When aggregating several attributes, the ways of forming the scales of composite indicators and representing multi-attribute objects by vectors/tuples are the same as in the HISCRA method.

The HISCRA-M method includes the following steps.
1°. Generate a set K_1, ..., K_n, n ≥ 2 of initial indicators (attributes).
2°. Generate ordinal scales X_i = {x_i^1, ..., x_i^{h_i}}, i = 1, ..., n of the initial indicators.
3°. Generate sets L_1, ..., L_m, ..., N_1, ..., N_l, l < m < n of composite indicators, which aggregate the attributes K_1, ..., K_n.
4°. Generate ordinal scales Y_j = {y_j^1, ..., y_j^{g_j}}, j = 1, ..., m, ..., Z_k = {z_k^1, ..., z_k^{f_k}}, k = 1, ..., l of the composite indicators using various ways of their construction, including reduction of attribute scales and aggregation of indicators, for example, stratification of tuples W_1; multicriteria classification of tuples W_2; ranking of tuples W_3.


5°. Build several hierarchical schemes of indicator aggregation using various ways to combine attributes and/or combinations of methods to generate composite indicators and their scales at different levels of the hierarchy.

5.4 Demonstrative Example: Method HISCRA-M

Using the HISCRA-M method, let us decrease the dimensionality of the attribute space given by eight indicators (marks in the studied subjects): K_1 Mathematics, K_2 Physics, K_3 Chemistry, K_4 Biology, K_5 Geography, K_6 History, K_7 Literature, K_8 Foreign language. Each indicator K_i, i = 1, ..., 8 has its own five-point rating scale X_i = {x_i^1, x_i^2, x_i^3, x_i^4, x_i^5}, where x_i^1 is 1/very bad, x_i^2 is 2/bad, x_i^3 is 3/satisfactory, x_i^4 is 4/good, x_i^5 is 5/excellent. Estimate grades are ordered by preference as x_i^5 ≻ x_i^4 ≻ x_i^3 ≻ x_i^2 ≻ x_i^1. The subject importance is the same, w_l = 1; the semi-annual marks are equivalent, c^(1) = c^(2) = 1.

Represent the object O_p and its versions O_p^(1), O_p^(2), p = 1, ..., 10 (the annual and semi-annual marks of pupils in the subjects) by vectors/tuples x_p = (x_p1, ..., x_p8), x_p^(s) = (x_p1^(s), ..., x_p8^(s)), s = 1, 2, which are points of the eight-dimensional space X_1 × ... × X_8. As above, the total number of all possible combinations of vector/tuple components (representations of an object or version) is equal to 5^8 = 390 625.

Replace the five-point attribute scales X_i = {x_i^1, x_i^2, x_i^3, x_i^4, x_i^5} with the shortened three-point scales Q_i = {q_i^0, q_i^1, q_i^2}. A decision maker specified that q_i^0 is 0/high mark, which includes the marks x_i^5 − 5/excellent and x_i^4 − 4/good; q_i^1 is 1/middle mark, which corresponds to the mark x_i^3 − 3/satisfactory; q_i^2 is 2/low mark, which includes the marks x_i^2 − 2/bad and x_i^1 − 1/very bad. Since the original estimates were ordered by preference as x_i^5 ≻ x_i^4 ≻ x_i^3 ≻ x_i^2 ≻ x_i^1, the new estimates will be ordered in the same way: q_i^0 ≻ q_i^1 ≻ q_i^2.

Now, in the new space Q_1 × ... × Q_8, the object O_p and its versions are represented by vectors/tuples q_p = (q_p1, ..., q_p8), q_p^(s) = (q_p1^(s), ..., q_p8^(s)), s = 1, 2. The length of each vector/tuple is equal to 8 as before, but its components can take only one of three values q_i^{o_i}. The total number of all possible representations of an object and its versions by components of vectors/tuples will be equal to 3^8 = 6561, which is almost 60 times lower than 390 625, but still quite large. However, almost all vectors/tuples, and therefore objects, will remain incomparable.

The semi-annual marks of ten pupils O_1, ..., O_10 in the subjects with the five-point and three-point rating scales are given in Table 5.2. To represent objects in the reduced attribute spaces, we shall construct several hierarchical trees of indicators with different schemes for aggregation of the initial characteristics, taking into account the preferences of a decision maker and/or expert knowledge. Transition (5.4) from the scales X_i to the shortened scales Q_i will be considered the zero aggregation scheme. For simplicity, let us assume that a scale of any new indicator has three grades of estimates, like the scale Q_i. We shall build the scales of all intermediate and final indicators using the method of tuple stratification.
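A minimal sketch of this scale reduction, assuming the grade coding used in the example (marks 4–5 → 0/high, 3 → 1/middle, 1–2 → 2/low); the function and variable names are illustrative only.

```python
# Illustrative sketch: reduce a five-point mark scale to the shortened
# three-point scale Q_i = {0/high, 1/middle, 2/low} used in this example.
def shorten(mark):
    if mark >= 4:      # 5/excellent, 4/good  -> 0/high
        return 0
    elif mark == 3:    # 3/satisfactory       -> 1/middle
        return 1
    return 2           # 2/bad, 1/very bad    -> 2/low

# First-semester marks of pupil O2 from Table 5.2 (subjects K1..K8)
o2_semester1 = (4, 1, 2, 1, 3, 2, 2, 2)
print([shorten(m) for m in o2_semester1])   # [0, 2, 2, 2, 1, 2, 2, 2]
```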


Table 5.2 Pupil marks in the studied subjects K_1–K_8 as vectors/tuples of attributes (original and shortened scales)

O          X1 X2 X3 X4 X5 X6 X7 X8   Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8
O1 (s=1)    4  5  4  5  4  5  4  5    0  0  0  0  0  0  0  0
O1 (s=2)    5  5  5  5  4  4  4  5    0  0  0  0  0  0  0  0
O2 (s=1)    4  1  2  1  3  2  2  2    0  2  2  2  1  2  2  2
O2 (s=2)    3  2  1  1  4  3  3  2    1  2  2  2  0  1  1  2
O3 (s=1)    1  1  3  1  4  1  1  4    2  2  1  2  0  2  2  0
O3 (s=2)    1  2  3  1  5  2  1  3    2  2  1  2  0  2  2  1
O4 (s=1)    5  3  2  4  4  5  4  5    0  1  2  0  0  0  0  0
O4 (s=2)    4  4  3  5  4  5  3  4    0  0  1  0  0  0  1  0
O5 (s=1)    4  4  4  4  4  5  4  4    0  0  0  0  0  0  0  0
O5 (s=2)    5  5  3  4  4  4  5  4    0  0  1  0  0  0  0  0
O6 (s=1)    5  5  4  4  4  5  5  4    0  0  0  0  0  0  0  0
O6 (s=2)    4  5  4  4  4  4  5  5    0  0  0  0  0  0  0  0
O7 (s=1)    4  1  2  3  3  3  1  2    0  2  2  1  1  1  2  2
O7 (s=2)    3  2  1  4  2  4  2  3    1  2  2  0  2  0  2  1
O8 (s=1)    4  5  4  2  3  4  5  3    0  0  0  2  1  0  0  1
O8 (s=2)    5  4  5  3  4  5  4  4    0  0  0  1  0  0  0  0
O9 (s=1)    3  2  3  1  3  3  2  2    1  2  1  2  1  1  2  2
O9 (s=2)    4  3  2  2  2  3  3  2    0  1  2  2  2  1  1  2
O10 (s=1)   5  5  4  5  3  5  5  4    0  0  0  0  1  0  0  0
O10 (s=2)   3  4  3  4  2  4  2  4    1  0  1  0  2  0  2  0

Each gradation on the scale of a composite indicator includes combinations of gradations on the scales of the initial indicators that form this composite indicator, the sum of whose indices is equal to a certain specified number.

According to the first aggregation scheme (Fig. 5.5a), all initial attributes K_1, ..., K_8 with scales Q_i = {q_i^0, q_i^1, q_i^2} are combined into 4 composite indicators, which are considered final. The attributes K_1 Mathematics and K_2 Physics form a composite indicator L_1 = (K_1, K_2) Physical–mathematical disciplines. The attributes K_3 Chemistry and K_4 Biology form a composite indicator L_2 = (K_3, K_4) Chemical–biological disciplines. The attributes K_5 Geography and K_6 History form a composite indicator L_3 = (K_5, K_6) Social disciplines. The attributes K_7 Literature and K_8 Foreign language form a composite indicator L_4 = (K_7, K_8) Philological disciplines. The composite indicators L_1, ..., L_4 have rating scales Y_j = {y_j^0, y_j^1, y_j^2}, j = 1, 2, 3, 4 with the following verbal gradations:

y_j^0 − 0/high, the sum of grade indices is equal to 0 − (q_a^0, q_c^0);
y_j^1 − 1/middle, the sum of grade indices is equal to 1, 2 − (q_a^1, q_c^0), (q_a^0, q_c^1), (q_a^2, q_c^0), (q_a^0, q_c^2), (q_a^1, q_c^1);
y_j^2 − 2/low, the sum of grade indices is equal to 3, 4 − (q_a^2, q_c^1), (q_a^1, q_c^2), (q_a^2, q_c^2).
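The same index-sum rule also drives the higher aggregation levels of the second and third schemes described below, so a single helper suffices. The sketch below is only an assumed illustration (names are hypothetical) of how one pupil row of Table 5.2 is folded into the values shown later in Table 5.3.

```python
# Illustrative sketch: aggregate two 3-grade indicators (0/high, 1/middle, 2/low)
# by the sum of their grade indices: 0 -> high, 1-2 -> middle, 3-4 -> low.
def aggregate_pair(a, c):
    s = a + c
    return 0 if s == 0 else (1 if s <= 2 else 2)

# Pupil O4, first semester, shortened marks Q1..Q8 from Table 5.2
q = [0, 1, 2, 0, 0, 0, 0, 0]
L = [aggregate_pair(q[0], q[1]),   # L1 Physical-mathematical
     aggregate_pair(q[2], q[3]),   # L2 Chemical-biological
     aggregate_pair(q[4], q[5]),   # L3 Social
     aggregate_pair(q[6], q[7])]   # L4 Philological
M1 = aggregate_pair(L[0], L[1])    # Natural disciplines (second scheme)
M2 = aggregate_pair(L[2], L[3])    # Humanitarian disciplines (second scheme)
N1 = aggregate_pair(M1, M2)        # Academic score (third scheme)
print(L, M1, M2, N1)               # [1, 1, 0, 0] 1 0 1  (cf. row O4 in Table 5.3)
```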

Fig. 5.5 Schemes for aggregation of initial attributes into composite indicators: a the first scheme, b the second scheme, c the third scheme, and d the fourth scheme

Here a = 1, c = 2 for j = 1; a = 3, c = 4 for j = 2; a = 5, c = 6 for j = 3; a = 7, c = 8 for j = 4.

According to the second aggregation scheme (Fig. 5.5b), the first stage is the same as in the first scheme. At the next stage, the indicators L_1 Physical–mathematical disciplines and L_2 Chemical–biological disciplines form a composite indicator M_1 = (L_1, L_2) Natural disciplines. The indicators L_3 Social disciplines and L_4 Philological disciplines form a composite indicator M_2 = (L_3, L_4) Humanitarian disciplines. The composite indicators M_1, M_2 are considered final indicators having rating scales U_r = {u_r^0, u_r^1, u_r^2}, r = 1, 2 with verbal gradations:

u_r^0 − 0/high, the sum of grade indices is equal to 0 − (y_b^0, y_d^0);
u_r^1 − 1/middle, the sum of grade indices is equal to 1, 2 − (y_b^1, y_d^0), (y_b^0, y_d^1), (y_b^2, y_d^0), (y_b^0, y_d^2), (y_b^1, y_d^1);
u_r^2 − 2/low, the sum of grade indices is equal to 3, 4 − (y_b^2, y_d^1), (y_b^1, y_d^2), (y_b^2, y_d^2).

Here b = 1, d = 2 for r = 1; b = 3, d = 4 for r = 2.

According to the third aggregation scheme (Fig. 5.5c), the first and second stages are the same as in the second scheme. At the next stage, the indicators M_1 Natural disciplines and M_2 Humanitarian disciplines form a final integral index N_1 = (M_1, M_2) Academic score, which has a rating scale Z_1 = {z_1^0, z_1^1, z_1^2} with verbal grades:


z_1^0 − 0/high, the sum of grade indices is equal to 0 − (u_1^0, u_2^0);
z_1^1 − 1/middle, the sum of grade indices is equal to 1, 2 − (u_1^1, u_2^0), (u_1^0, u_2^1), (u_1^2, u_2^0), (u_1^0, u_2^2), (u_1^1, u_2^1);
z_1^2 − 2/low, the sum of grade indices is equal to 3, 4 − (u_1^2, u_2^1), (u_1^1, u_2^2), (u_1^2, u_2^2).

According to the fourth aggregation scheme (Fig. 5.5d), the first stage is the same as in the first scheme. At the next stage, the indicators L_1 Physical–mathematical disciplines, L_2 Chemical–biological disciplines, L_3 Social disciplines and L_4 Philological disciplines are directly combined into a final integral index N_2 = (L_1, L_2, L_3, L_4) Academic score, which has a rating scale Z_2 = {z_2^0, z_2^1, z_2^2} with verbal grades (below, a tuple (e_1, e_2, e_3, e_4) denotes the estimate combination (y_1^{e_1}, y_2^{e_2}, y_3^{e_3}, y_4^{e_4})):

z_2^0 − 0/high, the sum of grade indices is equal to 0, 1 − (0,0,0,0); (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1);

z_2^1 − 1/middle, the sum of grade indices is equal to 2, 3, 4 − (2,0,0,0), (0,2,0,0), (0,0,2,0), (0,0,0,2), (1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1); (2,1,0,0), (2,0,1,0), (2,0,0,1), (1,2,0,0), (1,0,2,0), (1,0,0,2), (0,2,1,0), (0,2,0,1), (0,1,2,0), (0,1,0,2), (0,0,2,1), (0,0,1,2), (1,1,1,0), (1,1,0,1), (1,0,1,1), (0,1,1,1); (2,2,0,0), (2,0,2,0), (2,0,0,2), (0,2,2,0), (0,2,0,2), (0,0,2,2), (2,1,1,0), (2,1,0,1), (2,0,1,1), (1,2,1,0), (1,2,0,1), (1,1,2,0), (1,1,0,2), (1,0,2,1), (1,0,1,2), (0,2,1,1), (0,1,2,1), (0,1,1,2), (1,1,1,1);

z_2^2 − 2/low, the sum of grade indices is equal to 5, 6, 7, 8 − (2,2,1,0), (2,2,0,1), (2,1,2,0), (2,1,0,2), (2,0,2,1), (2,0,1,2), (1,2,2,0), (1,2,0,2), (1,0,2,2), (0,2,2,1), (0,2,1,2), (0,1,2,2), (2,1,1,1), (1,2,1,1), (1,1,2,1), (1,1,1,2); (2,2,2,0), (2,2,0,2), (2,0,2,2), (0,2,2,2), (2,2,1,1), (2,1,2,1), (2,1,1,2), (1,2,2,1), (1,2,1,2), (1,1,2,2); (2,2,2,1), (2,2,1,2), (2,1,2,2), (1,2,2,2); (2,2,2,2).

When aggregating indicators, multi-attribute objects are specified as follows.

Versions O_p^(s), s = 1, ..., t of the object O_p are described by vectors/tuples y_p^(s) = (y_p1^(s), y_p2^(s), y_p3^(s), y_p4^(s)) in the reduced space Y_1 × Y_2 × Y_3 × Y_4 of the aggregated indicators L_1, L_2, L_3, L_4, or by vectors/tuples u_p^(s) = (u_p1^(s), u_p2^(s)) in the space U_1 × U_2 of the aggregated indicators M_1, M_2. The total number of all possible combinations of components (object representations) is equal to 3^4 = 81 for the first aggregation scheme, which is far lower than 390 625, and to 3^2 = 9 for the second aggregation scheme, which is lower still. When constructing a single integral index N_k, k = 1, 2, each version O_p^(s) of the object O_p is characterized by a gradation z_k^{e_k}, e_k = 0, 1, 2 of the rating scale Z_k.


Table 5.3 Pupil marks in the studied disciplines L_1, L_2, L_3, L_4, M_1, M_2 as vectors/tuples, pupil academic scores N_1, N_2 (first, second, third and fourth aggregation schemes)

O           L1 L2 L3 L4   M1 M2   N1   N2
O1 (s=1)     0  0  0  0    0  0    0    0
O1 (s=2)     0  0  0  0    0  0    0    0
O2 (s=1)     1  2  2  2    2  2    2    2
O2 (s=2)     2  2  1  2    2  2    2    2
O3 (s=1)     2  2  1  1    2  1    2    2
O3 (s=2)     2  2  1  2    2  2    2    2
O4 (s=1)     1  1  0  0    1  0    1    1
O4 (s=2)     0  1  0  1    1  1    1    1
O5 (s=1)     0  0  0  0    0  0    0    0
O5 (s=2)     0  1  0  0    1  0    1    0
O6 (s=1)     0  0  0  0    0  0    0    0
O6 (s=2)     0  0  0  0    0  0    0    0
O7 (s=1)     1  2  1  2    2  2    2    2
O7 (s=2)     2  1  2  2    2  2    2    2
O8 (s=1)     0  1  1  1    1  1    1    1
O8 (s=2)     0  1  0  0    1  0    1    0
O9 (s=1)     2  2  1  2    2  2    2    2
O9 (s=2)     1  2  2  2    2  2    2    2
O10 (s=1)    0  0  1  0    0  1    1    0
O10 (s=2)    1  1  1  1    1  1    1    1

The semi-annual marks of ten pupils O_1, ..., O_10 in the disciplines L_1, L_2, L_3, L_4, M_1, M_2 and the academic scores N_1, N_2 with three-point scales are presented in Table 5.3 for the different schemes of indicator aggregation. As follows from this table, according to the first, second and fourth schemes, the pupils O_1, O_5, O_6 have the high academic score, the pupils O_4, O_8, O_10 have the middle academic score, and the pupils O_2, O_3, O_7, O_9 have the low academic score. According to the third scheme, the pupils O_1, O_6 have the high academic score, the pupils O_4, O_5, O_8, O_10 have the middle academic score, and the pupils O_2, O_3, O_7, O_9 have the low academic score. Thus, for different aggregation schemes, we get groups of pupils with slightly different academic scores.

Indicators can also be aggregated in other ways. For example, the attributes K_1 Mathematics, K_2 Physics, K_3 Chemistry, K_4 Biology form a composite indicator M_4 = (K_1, K_2, K_3, K_4) Natural disciplines. The attributes K_5 Geography, K_6 History, K_7 Literature, K_8 Foreign language form a composite indicator M_5 = (K_5, K_6, K_7, K_8) Humanitarian disciplines. The composite indicators M_4 and M_5 can either be considered as final, or combined further into a single integral index N_3 = (M_4, M_5) Academic score. Other options for aggregating indicators are possible.


5.5 Shortening Criteria and Attributes: Method SOCRATES

The method for shortening criteria and attributes (SOCRATES) is intended to aggregate object attributes and reduce the dimensionality of the space where multi-attribute objects are represented by multisets of numerical and/or verbal characteristics [9, 15, 16]. In these cases, the task (5.1) is as follows:

X_1 ∪ ... ∪ X_n → Y_1 ∪ ... ∪ Y_m → ... → Z_1 ∪ ... ∪ Z_l.    (5.5)

The dimensionality of the attribute space is defined as the cardinality of a hyperscale, that is, a union of attribute gradations, which are elements of multisets. In the SOCRATES method, as well as in the HISCRA-M method, many initial numerical and/or verbal characteristics of objects are aggregated into a single integral index or several composite indicators with small scales of qualitative estimates. We build several hierarchical schemes of indicators, aggregating the initial attributes in different ways, which are considered individual judgments of some outside experts. Each attribute aggregation tree expresses the opinion of one of the experts about generalized features of the evaluated objects. The presence of several individual estimates of objects transforms the task under consideration into a collective choice task.

In the SOCRATES method, as well as in the HISCRA-M method, we use two principal data transformations: reduction of attribute scales and aggregation of attributes. In practical situations, we recommend constructing several different schemes for combining attributes, which include procedures for reducing attribute scales and aggregating attributes. This decreases the impact of each specific procedure and increases the validity of the results obtained. Representation of multi-attribute objects by multisets, unification of the procedures for forming hierarchical levels of the aggregation tree, and reduction of the numbers of attributes, indicators, and their scales all make it possible to simplify the solution of applied tasks of multicriteria choice and to explain the results obtained.

Reduction of an attribute scale is transformation (5.5) of the space, in which several values of a characteristic of an object are combined into a single new gradation of the same characteristic. The transition from the original attribute scales to the scales with a reduced number of gradations has the form:

X_1 ∪ ... ∪ X_n → Q_1 ∪ ... ∪ Q_n,    (5.6)

where X_i = {x_i^1, ..., x_i^{h_i}} is the original scale and Q_i = {q_i^1, ..., q_i^{d_i}} is the shortened scale of the attribute K_i, |Q_i| = d_i < h_i = |X_i|, i = 1, ..., n.

When reducing the attribute scales, the representation of multi-attribute objects is transformed as follows. In the space of attributes K_1, ..., K_n with original scales, the object O_p, p = 1, ..., q corresponds to a multiset of estimates (3.2)


A_p = {k_{A_p}(x_1^1)◦x_1^1, ..., k_{A_p}(x_1^{h_1})◦x_1^{h_1}; ...; k_{A_p}(x_n^1)◦x_n^1, ..., k_{A_p}(x_n^{h_n})◦x_n^{h_n}}    (5.7)

over the set X_1 ∪ ... ∪ X_n of original scale grades. Define the multiset A_p as a sum of multisets A_p^(s) that describe the versions O_p^(s), s = 1, ..., t of the object: A_p = Σ_s c^(s) A_p^(s), where the multiplicity function is k_{A_p}(x_i^{e_i}) = Σ_s c^(s) k_{A_p^(s)}(x_i^{e_i}). Let us use the properties of operations on multisets and rewrite expression (5.7) as the following sum:

A_p = A_{p1} + ... + A_{pn}
    = {k_{A_p}(x_1^1)◦x_1^1, ..., k_{A_p}(x_1^{h_1})◦x_1^{h_1}} + ... + {k_{A_p}(x_n^1)◦x_n^1, ..., k_{A_p}(x_n^{h_n})◦x_n^{h_n}}
    = Σ_{e_1=1}^{h_1} {k_{A_p}(x_1^{e_1})◦x_1^{e_1}} + ... + Σ_{e_n=1}^{h_n} {k_{A_p}(x_n^{e_n})◦x_n^{e_n}}.    (5.8)

When reducing the attribute scales, several grades x_i^{e_a}, x_i^{e_b}, ..., x_i^{e_c} of the original scale X_i = {x_i^1, ..., x_i^{h_i}} of the attribute K_i are combined into a single gradation q_i^{o_i} of the shortened scale Q_i = {q_i^1, ..., q_i^{d_i}}. In the space of attributes K_1, ..., K_n with the shortened scales Q_1, ..., Q_n, the object O_p will correspond to a multiset

B_p = {k_{B_p}(q_1^1)◦q_1^1, ..., k_{B_p}(q_1^{d_1})◦q_1^{d_1}; ...; k_{B_p}(q_n^1)◦q_n^1, ..., k_{B_p}(q_n^{d_n})◦q_n^{d_n}}    (5.9)

over the set Q_1 ∪ ... ∪ Q_n of shortened scale grades. The multiset B_p (5.9) can also be written in an equivalent form:

B_p = B_{p1} + ... + B_{pn}
    = {k_{B_p}(q_1^1)◦q_1^1, ..., k_{B_p}(q_1^{d_1})◦q_1^{d_1}} + ... + {k_{B_p}(q_n^1)◦q_n^1, ..., k_{B_p}(q_n^{d_n})◦q_n^{d_n}}
    = Σ_{o_1=1}^{d_1} {k_{B_p}(q_1^{o_1})◦q_1^{o_1}} + ... + Σ_{o_n=1}^{d_n} {k_{B_p}(q_n^{o_n})◦q_n^{o_n}}.    (5.10)

The multiplicity of an element q_i^{o_i}, o_i = 1, ..., d_i of the multiset B_p (5.9) or (5.10), which corresponds to a grade q_i^{o_i} of the shortened scale Q_i = {q_i^1, ..., q_i^{d_i}}, is determined by the rule:

k_{B_p}(q_i^{o_i}) = k_{A_p}(x_i^{e_a}) + k_{A_p}(x_i^{e_b}) + ... + k_{A_p}(x_i^{e_c}).    (5.11)


Here the multiplicities of the elements x_i^{e_a}, x_i^{e_b}, ..., x_i^{e_c} of the multiset A_p (5.7) or (5.8), which correspond to the combined grades of the original scale X_i = {x_i^1, ..., x_i^{h_i}} of the attribute K_i, are summarized.

Aggregation of attributes is transformation (5.5) of the space in which the number of attributes decreases. For this, several attributes L_a, L_b, ..., L_c are combined into a single new composite indicator (granule) N_k. This transformation looks as follows:

Y_a ∪ Y_b ∪ ... ∪ Y_c → Z_k,    (5.12)

where Y_j = {y_j^1, ..., y_j^{g_j}} is a scale of the initial attribute L_j, j = a, b, ..., c, and Z_k = {z_k^1, ..., z_k^{f_k}} is a scale of the composite indicator N_k, k = 1, ..., l, |Z_k| = f_k < g_a + g_b + ... + g_c = |Y_a ∪ Y_b ∪ ... ∪ Y_c|. Sets of composite indicators and their scales can be formed by different ways of granulation, which make it possible to represent each gradation of the composite indicator scale as a combination of estimate gradations of the initial characteristics. We recommend combining 2–4 initial attributes into a composite indicator with a small scale including 2–4 gradations. In practical tasks, it is convenient to form the scales of the combined attributes and composite indicators so that they have the same number of grades, that is, g_a = g_b = ... = g_c = f_k = d, and each gradation of the composite indicator scale consists of similar gradations of the combined attribute scales.

The representation of multi-attribute objects during attribute aggregation is transformed as follows. In the space of initial attributes L_1, ..., L_m, we associate the object O_p, p = 1, ..., q with a multiset

I_p = {k_{I_p}(y_1^1)◦y_1^1, ..., k_{I_p}(y_1^d)◦y_1^d; ...; k_{I_p}(y_m^1)◦y_m^1, ..., k_{I_p}(y_m^d)◦y_m^d}    (5.13)

over the set Y_1 ∪ ... ∪ Y_m of scale gradations, where all scales Y_j = {y_j^1, ..., y_j^d}, j = 1, ..., m have the same number d of grades. Since the element order in a multiset is insignificant, we rewrite expression (5.13) as a sum:

I_p = I_{p1} + ... + I_{pd}
    = {k_{I_p}(y_1^1)◦y_1^1, ..., k_{I_p}(y_m^1)◦y_m^1} + ... + {k_{I_p}(y_1^d)◦y_1^d, ..., k_{I_p}(y_m^d)◦y_m^d}
    = Σ_{j=1}^{m} {k_{I_p}(y_j^1)◦y_j^1} + ... + Σ_{j=1}^{m} {k_{I_p}(y_j^d)◦y_j^d}.    (5.14)

In the reduced space of composite indicators N_1, ..., N_l, the object O_p will correspond to a multiset


J_p = {k_{J_p}(z_1^1)◦z_1^1, ..., k_{J_p}(z_1^d)◦z_1^d; ...; k_{J_p}(z_l^1)◦z_l^1, ..., k_{J_p}(z_l^d)◦z_l^d}    (5.15)

over the set Z_1 ∪ ... ∪ Z_l of scale gradations, where all scales Z_k = {z_k^1, ..., z_k^d}, k = 1, ..., l have the same number d of grades. The multiset J_p (5.15) can also be written in an equivalent form:

J_p = J_{p1} + ... + J_{pd}
    = {k_{J_p}(z_1^1)◦z_1^1, ..., k_{J_p}(z_l^1)◦z_l^1} + ... + {k_{J_p}(z_1^d)◦z_1^d, ..., k_{J_p}(z_l^d)◦z_l^d}
    = Σ_{k=1}^{l} {k_{J_p}(z_k^1)◦z_k^1} + ... + Σ_{k=1}^{l} {k_{J_p}(z_k^d)◦z_k^d}.    (5.16)

The multiplicity of an element z_k^e, e = 1, ..., d of the multiset J_p (5.15) or (5.16), which corresponds to the grade z_k^e of the scale Z_k of the composite indicator N_k, is determined by the rule:

k_{J_p}(z_k^e) = k_{I_p}(y_a^e) + k_{I_p}(y_b^e) + ... + k_{I_p}(y_c^e).    (5.17)

Here the multiplicities of the elements y_a^e, y_b^e, ..., y_c^e of the multiset I_p (5.13) or (5.14), which correspond to the grades y_a^e, y_b^e, ..., y_c^e of the scales Y_a, Y_b, ..., Y_c of the combined attributes L_a, L_b, ..., L_c, are summarized.

The SOCRATES method includes the following steps.
1°. Generate a set K_1, ..., K_n, n ≥ 2 of initial indicators (attributes).
2°. Generate ordinal scales X_i = {x_i^1, ..., x_i^{h_i}}, i = 1, ..., n of the initial indicators.
3°. Generate sets L_1, ..., L_m, ..., N_1, ..., N_l, l < m < n of composite indicators, which aggregate the attributes K_1, ..., K_n.
4°. Generate ordinal scales Y_j = {y_j^1, ..., y_j^{g_j}}, j = 1, ..., m, ..., Z_k = {z_k^1, ..., z_k^{f_k}}, k = 1, ..., l of the composite indicators using various ways of construction, including reduction of attribute scales and aggregation of indicators.
5°. Build several hierarchical schemes of indicator aggregation using various ways to combine attributes and/or combinations of methods to generate composite indicators and their scales at different levels of the hierarchy.

5.6 Demonstrative Example: Method SOCRATES

Using the SOCRATES method, let us decrease the dimensionality of the attribute space given by eight indicators (marks in the studied subjects): K_1 Mathematics, K_2 Physics, K_3 Chemistry, K_4 Biology, K_5 Geography, K_6 History, K_7 Literature, K_8 Foreign language. Each indicator K_i, i = 1, ..., 8 has its own five-point rating scale X_i = {x_i^1, x_i^2, x_i^3, x_i^4, x_i^5}, where x_i^1 is 1/very bad, x_i^2 is 2/bad, x_i^3 is 3/satisfactory, x_i^4 is 4/good, x_i^5 is 5/excellent. Estimate grades are ordered by preference as x_i^5 ≻ x_i^4 ≻ x_i^3 ≻ x_i^2 ≻ x_i^1. The subject importance is the same, w_l = 1; the semi-annual marks are equivalent, c^(1) = c^(2) = 1.

Represent the object O_p (the annual marks of the pupil O_p in the subjects), p = 1, ..., 10 as a multiset (5.7)

A_p = {k_{A_p}(x_1^1)◦x_1^1, ..., k_{A_p}(x_1^5)◦x_1^5; ...; k_{A_p}(x_8^1)◦x_8^1, ..., k_{A_p}(x_8^5)◦x_8^5}    (5.18)

over the set X = X_1 ∪ ... ∪ X_8 of gradations of the original scales of the attributes K_1, ..., K_8, considering A_p as the sum of the multisets of semi-annual marks: A_p = A_p^(1) + A_p^(2). The multiplicities k_{A_p}(x_i^{e_i}) of elements x_i^{e_i} ∈ X_i, e_i = 1, ..., 5 of the multisets A_p (5.18) are rows of the matrix H (Table 3.5) and show how many marks upon the attribute K_i the object O_p has. For example, the object O_1 corresponds to the multiset of marks

A_1 = {0◦x_1^1, 0◦x_1^2, 0◦x_1^3, 1◦x_1^4, 1◦x_1^5; 0◦x_2^1, 0◦x_2^2, 0◦x_2^3, 0◦x_2^4, 2◦x_2^5; 0◦x_3^1, 0◦x_3^2, 0◦x_3^3, 1◦x_3^4, 1◦x_3^5; 0◦x_4^1, 0◦x_4^2, 0◦x_4^3, 0◦x_4^4, 2◦x_4^5; 0◦x_5^1, 0◦x_5^2, 0◦x_5^3, 2◦x_5^4, 0◦x_5^5; 0◦x_6^1, 0◦x_6^2, 0◦x_6^3, 1◦x_6^4, 1◦x_6^5; 0◦x_7^1, 0◦x_7^2, 0◦x_7^3, 2◦x_7^4, 0◦x_7^5; 0◦x_8^1, 0◦x_8^2, 0◦x_8^3, 0◦x_8^4, 2◦x_8^5}.

We can see from this recording form that, for two semesters, the pupil O_1 received one mark x_i^4 '4/good' and one mark x_i^5 '5/excellent' in mathematics, chemistry, history, two marks x_i^5 '5/excellent' in physics, biology, foreign language, and two marks x_i^4 '4/good' in geography and literature. The dimensionality of the attribute space is equal to |X_1 ∪ ... ∪ X_8| = 5·8 = 40. The total number of possible estimates of any pupil in all subjects (representations of an object by elements of multisets) is equal to the cardinality card A_p = Σ_{x_i^{e_i} ∈ X} k_{A_p}(x_i^{e_i}) = 16 of the multiset A_p. Multisets and objects remain, in general, incomparable, but it becomes easier to work with them.

Pass from the five-point scales X_i = {x_i^1, x_i^2, x_i^3, x_i^4, x_i^5} of the attributes K_1, ..., K_8 to the shortened three-point scales Q_i = {q_i^0, q_i^1, q_i^2}. Here q_i^0 is 0/high mark, which includes the marks x_i^5 − 5/excellent and x_i^4 − 4/good; q_i^1 is 1/middle mark, which corresponds to the mark x_i^3 − 3/satisfactory; q_i^2 is 2/low mark, which includes the marks x_i^2 − 2/bad and x_i^1 − 1/very bad. If the original estimates were ordered, for example, as x_i^5 ≻ x_i^4 ≻ x_i^3 ≻ x_i^2 ≻ x_i^1, then the new estimates will also be ordered: q_i^0 ≻ q_i^1 ≻ q_i^2. In the reduced attribute space, the object O_p will correspond to a multiset

B_p = {k_{B_p}(q_1^0)◦q_1^0, k_{B_p}(q_1^1)◦q_1^1, k_{B_p}(q_1^2)◦q_1^2, ...; k_{B_p}(q_8^0)◦q_8^0, k_{B_p}(q_8^1)◦q_8^1, k_{B_p}(q_8^2)◦q_8^2}    (5.19)


over the set Q_1 ∪ ... ∪ Q_8 of gradations of the shortened scales of the attributes K_1, ..., K_8. The multiplicities k_{B_p}(q_i^{o_i}) of elements of the multisets B_p (5.19) form rows of the matrix H0 (Table 5.4), which is the reduced matrix H (Table 3.5). The multiplicities are determined by rules (5.11):

k_{B_p}(q_i^0) = k_{A_p}(x_i^5) + k_{A_p}(x_i^4),  k_{B_p}(q_i^1) = k_{A_p}(x_i^3),  k_{B_p}(q_i^2) = k_{A_p}(x_i^2) + k_{A_p}(x_i^1).

In particular, the object O_1 is specified by a multiset

B_1 = {2◦q_1^0, 0◦q_1^1, 0◦q_1^2; 2◦q_2^0, 0◦q_2^1, 0◦q_2^2; 2◦q_3^0, 0◦q_3^1, 0◦q_3^2; 2◦q_4^0, 0◦q_4^1, 0◦q_4^2; 2◦q_5^0, 0◦q_5^1, 0◦q_5^2; 2◦q_6^0, 0◦q_6^1, 0◦q_6^2; 2◦q_7^0, 0◦q_7^1, 0◦q_7^2; 2◦q_8^0, 0◦q_8^1, 0◦q_8^2}.
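A small sketch of this multiset bookkeeping, assuming the mark data of Table 5.2 and the grade grouping of rule (5.11); the dictionaries and function names are illustrative only and are not part of the SOCRATES notation.

```python
from collections import Counter

# Illustrative sketch: multiset A_p as mark counts per subject and its
# reduction to B_p on the shortened scales (rule (5.11)): 4,5 -> 0; 3 -> 1; 1,2 -> 2.
def build_A(semesters):
    """semesters: list of two 8-tuples of marks (1..5); returns counts per (subject, mark)."""
    A = Counter()
    for marks in semesters:
        for i, mark in enumerate(marks, start=1):
            A[(i, mark)] += 1
    return A

def reduce_to_B(A):
    group = {5: 0, 4: 0, 3: 1, 2: 2, 1: 2}
    B = Counter()
    for (i, mark), count in A.items():
        B[(i, group[mark])] += count
    return B

# Pupil O1, two semesters (Table 5.2)
A1 = build_A([(4, 5, 4, 5, 4, 5, 4, 5), (5, 5, 5, 5, 4, 4, 4, 5)])
B1 = reduce_to_B(A1)
print(B1[(1, 0)], B1[(1, 1)], B1[(1, 2)])   # 2 0 0, as in row B1 of Table 5.4
```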

Table 5.4 Pupil marks in the studied subjects K_1–K_8 as multisets of attributes (shortened scales)

H0    q1^0 q1^1 q1^2  q2^0 q2^1 q2^2  q3^0 q3^1 q3^2  q4^0 q4^1 q4^2
B1      2    0    0     2    0    0     2    0    0     2    0    0
B2      1    1    0     0    0    2     0    0    2     0    0    2
B3      0    0    2     0    0    2     0    2    0     0    0    2
B4      2    0    0     1    1    0     0    1    1     2    0    0
B5      2    0    0     2    0    0     1    1    0     2    0    0
B6      2    0    0     2    0    0     2    0    0     2    0    0
B7      1    1    0     0    0    2     0    0    2     1    1    0
B8      2    0    0     2    0    0     2    0    0     0    1    1
B9      1    1    0     0    1    1     0    1    1     0    0    2
B10     1    1    0     2    0    0     1    1    0     2    0    0

H0    q5^0 q5^1 q5^2  q6^0 q6^1 q6^2  q7^0 q7^1 q7^2  q8^0 q8^1 q8^2
B1      2    0    0     2    0    0     2    0    0     2    0    0
B2      1    1    0     0    1    1     0    1    1     0    0    2
B3      2    0    0     0    0    2     0    0    2     1    1    0
B4      2    0    0     2    0    0     1    1    0     2    0    0
B5      2    0    0     2    0    0     2    0    0     2    0    0
B6      2    0    0     2    0    0     2    0    0     2    0    0
B7      0    1    1     1    1    0     0    0    2     0    1    1
B8      1    1    0     2    0    0     2    0    0     1    1    0
B9      0    1    1     0    2    0     0    1    1     0    2    0
B10     0    1    1     2    0    0     1    0    1     2    0    0


This recording form shows that, for two semesters, the pupil O_1 received two high marks ('5/excellent' and '4/good') in each subject: mathematics, physics, chemistry, biology, geography, history, literature, foreign language. The dimensionality of the attribute space is equal to |Q_1 ∪ ... ∪ Q_8| = 3·8 = 24. The total number of possible estimates of any pupil in all subjects is expressed by the cardinality card B_p = Σ_{q_i^{o_i} ∈ Q} k_{B_p}(q_i^{o_i}) = 16 of the multiset B_p (5.19). When the attribute scales are reduced, the dimensionality of the transformed space decreases, but the total number of marks in subjects does not change. Multisets and objects remain generally incomparable as before, but operations with them are simplified and facilitated even further.

We shall consider the transition from the original scales X_i to the shortened scales Q_i as the zero aggregation scheme. To represent objects in the reduced attribute spaces, we build other collections of indicators with different schemes for aggregating the characteristics (Fig. 5.5). For simplicity, we assume that a scale of any new attribute has three grades of estimates, as well as the scale Q_i. Each gradation of the composite indicator scale includes combinations of the same-type gradations on the initial attribute scales.

According to the first aggregation scheme (Fig. 5.5a), all initial attributes K_1, ..., K_8 with scales Q_i = {q_i^0, q_i^1, q_i^2} are combined into composite indicators, which are considered final. The attributes K_1 Mathematics and K_2 Physics form a composite indicator L_1 = (K_1, K_2) Physical–mathematical disciplines. The attributes K_3 Chemistry and K_4 Biology form a composite indicator L_2 = (K_3, K_4) Chemical–biological disciplines. The attributes K_5 Geography and K_6 History form a composite indicator L_3 = (K_5, K_6) Social disciplines. The attributes K_7 Literature and K_8 Foreign language form a composite indicator L_4 = (K_7, K_8) Philological disciplines. The composite indicators L_1, ..., L_4 have rating scales Y_j = {y_j^0, y_j^1, y_j^2}, j = 1, 2, 3, 4 with the following verbal grades: y_j^0 is 0/high mark, including the estimates q_a^0, q_c^0; y_j^1 is 1/middle mark, including the estimates q_a^1, q_c^1; y_j^2 is 2/low mark, including the estimates q_a^2, q_c^2. Here a = 1, c = 2 for j = 1; a = 3, c = 4 for j = 2; a = 5, c = 6 for j = 3; a = 7, c = 8 for j = 4. The object O_p is represented by a multiset

C_p = {k_{C_p}(y_1^0)◦y_1^0, k_{C_p}(y_1^1)◦y_1^1, k_{C_p}(y_1^2)◦y_1^2, ...; k_{C_p}(y_4^0)◦y_4^0, k_{C_p}(y_4^1)◦y_4^1, k_{C_p}(y_4^2)◦y_4^2}    (5.20)

over the set Y_1 ∪ ... ∪ Y_4 of scale gradations of the indicators L_1, ..., L_4. The multiplicities k_{C_p}(y_j^{o_j}) of elements of the multisets C_p (5.20) form rows of the matrix H1 (Table 5.5) and are determined by rule (5.17) for forming the scales of the composite indicators L_1, ..., L_4 from the scales of the attributes K_1, ..., K_8. So, the object O_1 is specified by a multiset

C_1 = {4◦y_1^0, 0◦y_1^1, 0◦y_1^2; 4◦y_2^0, 0◦y_2^1, 0◦y_2^2; 4◦y_3^0, 0◦y_3^1, 0◦y_3^2; 4◦y_4^0, 0◦y_4^1, 0◦y_4^2}.


Table 5.5 Pupil marks in the studied disciplines L_1, L_2, L_3, L_4 as multisets of attributes (first aggregation scheme)

H1    y1^0 y1^1 y1^2  y2^0 y2^1 y2^2  y3^0 y3^1 y3^2  y4^0 y4^1 y4^2
C1      4    0    0     4    0    0     4    0    0     4    0    0
C2      1    1    2     0    0    4     1    2    1     0    1    3
C3      0    0    4     0    2    2     2    0    2     1    1    2
C4      3    1    0     2    1    1     4    0    0     3    1    0
C5      4    0    0     3    1    0     4    0    0     4    0    0
C6      4    0    0     4    0    0     4    0    0     4    0    0
C7      1    1    2     1    1    2     1    2    1     0    1    3
C8      4    0    0     2    1    1     3    1    0     3    1    0
C9      1    2    1     0    1    3     0    3    1     0    3    1
C10     3    1    0     3    1    0     2    1    1     3    0    1

This shows that, for the year, the pupil O_1 received four high marks ('5/excellent' and '4/good') in the physical–mathematical, chemical–biological, social and philological disciplines.

According to the second aggregation scheme (Fig. 5.5b), the first stage is the same as in the first scheme. At the next stage, the indicators L_1 Physical–mathematical disciplines and L_2 Chemical–biological disciplines form a composite indicator M_1 = (L_1, L_2) Natural disciplines. The indicators L_3 Social disciplines and L_4 Philological disciplines form a composite indicator M_2 = (L_3, L_4) Humanitarian disciplines. The composite indicators M_1, M_2 are considered final and have rating scales U_r = {u_r^0, u_r^1, u_r^2}, r = 1, 2 with the following verbal gradations: u_r^0 is 0/high mark, including the estimates y_b^0, y_d^0; u_r^1 is 1/middle mark, including the estimates y_b^1, y_d^1; u_r^2 is 2/low mark, including the estimates y_b^2, y_d^2. Here b = 1, d = 2 for r = 1; b = 3, d = 4 for r = 2. The object O_p is represented by a multiset

D_p = {k_{D_p}(u_1^0)◦u_1^0, k_{D_p}(u_1^1)◦u_1^1, k_{D_p}(u_1^2)◦u_1^2; k_{D_p}(u_2^0)◦u_2^0, k_{D_p}(u_2^1)◦u_2^1, k_{D_p}(u_2^2)◦u_2^2}    (5.21)

over the set U_1 ∪ U_2 of scale gradations of the indicators M_1, M_2. The multiplicities k_{D_p}(u_r^{o_r}) of elements of the multisets D_p (5.21) form rows of the matrix H2 (Table 5.6) and are determined by rule (5.17) for forming the scales of the composite indicators M_1, M_2 from the scales of the attributes L_1, ..., L_4. So, the object O_1 is specified by a multiset

D_1 = {8◦u_1^0, 0◦u_1^1, 0◦u_1^2; 8◦u_2^0, 0◦u_2^1, 0◦u_2^2}.

This shows that, for the year, the pupil O_1 received eight high marks ('5/excellent' and '4/good') in the natural and humanitarian disciplines.
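A brief sketch of rule (5.17), assuming the count layout of Tables 5.5 and 5.6; the helper name and list-based representation are illustrative assumptions rather than part of the method.

```python
# Illustrative sketch: rule (5.17) adds, grade by grade, the multiplicities of
# the combined indicators. Counts are lists [high, middle, low] per indicator.
def aggregate_counts(*indicator_counts):
    return [sum(counts[e] for counts in indicator_counts) for e in range(3)]

# Pupil O9: rows of Table 5.5 (multiset C9 over Y1..Y4)
C9 = {"y1": [1, 2, 1], "y2": [0, 1, 3], "y3": [0, 3, 1], "y4": [0, 3, 1]}
u1 = aggregate_counts(C9["y1"], C9["y2"])   # M1 Natural disciplines
u2 = aggregate_counts(C9["y3"], C9["y4"])   # M2 Humanitarian disciplines
z1 = aggregate_counts(u1, u2)               # N1 Academic score (third scheme)
print(u1, u2, z1)  # [1, 3, 4] [0, 6, 2] [1, 9, 6]  (cf. rows D9 and E9 in Table 5.6)
```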


Table 5.6 Pupil marks in the studied disciplines M_1, M_2 and academic scores N_1, N_2 as multisets of attributes (second, third and fourth aggregation schemes)

H2    u1^0 u1^1 u1^2  u2^0 u2^1 u2^2    H3    z1^0 z1^1 z1^2    H4    z2^0 z2^1 z2^2
D1      8    0    0     8    0    0     E1     16    0    0     F1     16    0    0
D2      1    1    6     1    3    4     E2      2    4   10     F2      2    4   10
D3      0    2    6     3    1    4     E3      3    3   10     F3      3    3   10
D4      5    2    1     7    1    0     E4     12    3    1     F4     12    3    1
D5      7    1    0     8    0    0     E5     15    1    0     F5     15    1    0
D6      8    0    0     8    0    0     E6     16    0    0     F6     16    0    0
D7      2    2    4     1    3    4     E7      3    5    8     F7      3    5    8
D8      6    1    1     6    2    2     E8     12    3    1     F8     12    3    1
D9      1    3    4     0    6    2     E9      1    9    6     F9      1    9    6
D10     6    2    0     5    1    2     E10    11    3    2     F10    11    3    2

According to the third aggregation scheme (Fig. 5.5c), the first and second stages are the same as in the second scheme. At the next stage, the indicators M_1 Natural disciplines and M_2 Humanitarian disciplines form a final integral index N_1 = (M_1, M_2) Academic score. The index N_1 has a rating scale Z_1 = {z_1^0, z_1^1, z_1^2} with the following verbal gradations: z_1^0 is 0/high mark, including the estimates u_1^0, u_2^0; z_1^1 is 1/middle mark, including the estimates u_1^1, u_2^1; z_1^2 is 2/low mark, including the estimates u_1^2, u_2^2. The object O_p is represented by a multiset

E_p = {k_{E_p}(z_1^0)◦z_1^0, k_{E_p}(z_1^1)◦z_1^1, k_{E_p}(z_1^2)◦z_1^2}    (5.22)

over the set Z_1 of scale gradations of the indicator N_1. The multiplicities k_{E_p}(z_1^{o_1}) of elements of the multisets E_p (5.22) form rows of the matrix H3 (Table 5.6) and are determined by rule (5.17) for forming the scale of the composite indicator N_1 from the scales of the attributes M_1, M_2. In particular, the object O_1 is specified by a multiset

E_1 = {16◦z_1^0, 0◦z_1^1, 0◦z_1^2}.

This shows that, for the year, the pupil O_1 received sixteen high marks ('5/excellent' and '4/good') in all studied disciplines.

According to the fourth aggregation scheme (Fig. 5.5d), the first stage is the same as in the first scheme. At the next stage, the indicators L_1 Physical–mathematical disciplines, L_2 Chemical–biological disciplines, L_3 Social disciplines, and L_4 Philological disciplines together form a final integral index N_2 = (L_1, L_2, L_3, L_4) Academic score. The index N_2 has a rating scale Z_2 = {z_2^0, z_2^1, z_2^2} with the following verbal gradations: z_2^0 is 0/high mark, including the estimates y_1^0, y_2^0, y_3^0, y_4^0; z_2^1 is 1/middle mark, including the estimates y_1^1, y_2^1, y_3^1, y_4^1; z_2^2 is 2/low mark, including the estimates y_1^2, y_2^2, y_3^2, y_4^2. The object O_p is represented by a multiset

F_p = {k_{F_p}(z_2^0)◦z_2^0, k_{F_p}(z_2^1)◦z_2^1, k_{F_p}(z_2^2)◦z_2^2}    (5.23)

over the set Z_2 of scale gradations of the indicator N_2. The multiplicities k_{F_p}(z_2^{o_2}) of elements of the multisets F_p (5.23) form rows of the matrix H4 (Table 5.6) and are determined by rule (5.17) for forming the scale of the composite indicator N_2 from the scales of the attributes L_1, L_2, L_3, L_4. In particular, the object O_1 is specified by a multiset

F_1 = {16◦z_2^0, 0◦z_2^1, 0◦z_2^2}.

This shows that, for the year, the pupil O_1 received sixteen high marks ('5/excellent' and '4/good') in all studied disciplines.

So, with a sequential transition from the initial data to aggregated indicators, the dimensionality of the transformed spaces decreases from 40 to 24, 12, 6, 3. The total number of pupil marks in all studied subjects, which is expressed by the cardinality of the multisets A_p (5.7), B_p (5.19), C_p (5.20), D_p (5.21), E_p (5.22), F_p (5.23), does not change and remains equal to 16.

The five constructed schemes for aggregating indicators can be considered as judgments of five independent experts. In this case, any multicriteria choice task becomes a collective choice task, which is solved in several different reduced spaces of attributes. This ensures a greater validity of the final results. When forming hierarchical aggregation schemes, it is advisable to combine the initial indicators into composite indicators in such a way that they have a clear sense, and the gradations of their scales consist of a small number of the initial gradations. Let us recall again that various methods for aggregating indicators and their scales can lead to inconsistent results of evaluating the objects under consideration.

The proposed methods for reducing the dimensionality of the attribute space have a certain versatility, since they allow one to operate simultaneously with symbolic (qualitative) and numerical (quantitative) data, presenting each grade of the composite (aggregated) indicator scale as a combination of grades of the initial characteristic scales. An attractive feature of the described approaches to reducing the space dimensionality is the possibility to use them in combination with various decision making methods and information processing technologies.

References

1. Larichev, O.I.: Nauka i iskusstvo prinyatiya resheniy (Science and Art of Decision Making). Nauka, Moscow (1979). (in Russian)
2. Larichev, O.I.: Verbal'niy analiz resheniy (Verbal Decision Analysis). Nauka, Moscow (2006). (in Russian)
3. Larichev, O.I., Moshkovich, H.M.: Verbal Decision Analysis for Unstructured Problems. Kluwer Academic Publishers, Boston (1997)
4. Larichev, O.I., Moshkovich, H.M., Furems, E.M., Mechitov, A.I., Morgoev, V.K.: Knowledge Acquisition for the Construction of Full and Contradiction Free Knowledge Bases. IEC ProGAMMA, Groningen (1991)
5. Ayvazyan, S.A., Bukhstaber, V.M., Enyukov, I.S., Meshalkin, L.D.: Prikladnaya statistika. Klassifikatsiya i snizhenie razmernosti (Applied Statistics. Classification and Dimension Reduction). Finansy i statistika, Moscow (1989). (in Russian)
6. Glotov, V.A., Pavelev, V.V.: Vektornaya stratifikatsiya (Vector Stratification). Nauka, Moscow (1984). (in Russian)
7. Samet, H.: Foundation of Multidimensional and Metric Data Structures. Elsevier, Boston (2006)
8. Petrovsky, A.B.: Hierarchical aggregation of object attributes in multiple criteria decision making. In: Artificial Intelligence: Proceedings of the 16th Russian Conference. Communications in Computer and Information Science, vol. 934, pp. 125–137. Springer Nature Switzerland AG (2018)
9. Petrovsky, A.B.: Techniques for reducing dimensionality of attribute space. J. Phys. Conf. Ser. 1801, 012017 (2021)
10. Petrovsky, A.B., Roizenzon, G.V.: Interaktivnaya protsedura snizheniya razmernosti priznakovogo prostranstva v zadachakh mnogokriterial'noy klassifikatsii (An interactive procedure for reducing the dimension of attribute space in tasks of multicriteria classification). In: Podderzhka prinyatiya resheniy. Trudy Instituta sistemnogo analiza RAN (Decision Support. Proceedings of the Institute for System Analysis of the Russian Academy of Sciences), vol. 35, pp. 48–60. LKI Publishing House, Moscow (2008). (in Russian)
11. Petrovsky, A.B., Roizenzon, G.V.: Mnogokriterial'niy vybor s umen'sheniem razmernosti prostranstva priznakov: mnogoetapnaya tekhnologiya PAKS (Multiple criteria choice with reducing dimension of attribute space: multi-stage technology PAKS). Iskusstvenniy intellekt i prinyatie resheniy (Artificial Intelligence and Decision Making) 4, 88–103 (2012). (in Russian)
12. Petrovsky, A.B., Royzenson, G.V.: Multi-stage technique 'PAKS' for multiple criteria decision aiding. Int. J. Inf. Technol. Decis. Mak. 12(5), 1055–1071 (2013)
13. Petrovsky, A.B.: Teoriya prinyatiya resheniy (Theory of Decision Making). Publishing Center "Academy", Moscow (2009). (in Russian)
14. Petrovsky, A.B.: Gruppovoy verbal'niy analiz resheniy (Group Verbal Decision Analysis). Nauka, Moscow (2019). (in Russian)
15. Petrovsky, A.B.: Method for shortening dimensionality of qualitative attribute space. In: Russian Advances in Artificial Intelligence 2020, CEUR Workshop Proceedings 2648, 1–12 (2020)
16. Petrovsky, A.B.: Reduction of attribute space dimensionality: the SOCRATES method. Sci. Tech. Inf. Process. 48(5), 342–355 (2021)

Chapter 6

Multicriteria Choice in Attribute Space of High Dimensionality

This chapter describes new multistage and multimethod technologies for multicriteria choice in a high-dimensional attribute space. These technologies provide sequential aggregation of a big set of numerical, symbolic or verbal attributes of objects into a small number of final composite indicators or a single integral quality indicator. The technologies allow us to solve all types of tasks of individual and collective choice of objects given by many quantitative and/or qualitative attributes. We present demonstrative examples of solving model tasks of group choice.

6.1 Progressive Aggregation of Classified States: Technology PAKS

Making strategic and unique decisions, in which there are very few compared objects (options, alternatives) while the number of characteristics of object features is large and can reach tens and hundreds, is among the most difficult tasks. Examples of such tasks are choosing the location of an airport or a power plant, the route of a gas or oil pipeline, the scheme of a transport network, the configuration of a complex technical system, and the like.

Recall that a task of individual or group multicriteria choice is formulated in the most general form as follows. There is a collection of objects O_1, ..., O_m evaluated by one or several experts/decision makers upon many attributes K_1, ..., K_n with numerical and/or verbal rating scales, whose gradations are ordered in some cases. Based on expert knowledge and/or DMs' preferences, it is required: (1) to select one or more of the best objects; (2) to arrange all objects; (3) to distribute objects by classes D_1, ..., D_u.

The known decision making methods are poorly suited to solving multicriteria choice tasks of high dimensionality because they require significant labor and time to obtain and process large amounts of data about objects, DMs' preferences and/or expert knowledge. In addition, when solving such problems, persons often use various simplifying strategies and remove some of the attributes. These actions negatively affect the results (decision rules for selection, rankings of objects, class boundaries) and make it difficult to analyze and explain the results obtained.

The multistage technology for progressive aggregation of classified states (PAKS) is intended to solve various problems of individual and collective multicriteria choice in a high-dimensional attribute space based on decision makers' preferences and experts' knowledge [1–3]. The PAKS technology provides a hierarchical granulation of information by reducing the dimensionality of the initial attribute space and sequentially aggregating big sets of numerical, symbolic or verbal attributes of objects into a single integral quality indicator or a small number of final composite indicators with verbal scales, and then a solution of the considered choice problem in the reduced attribute space. The presentation of the initial object characteristics in a compact form reduces the complexity and time of solving the choice task and allows one to explain the results obtained meaningfully.

A procedure for solving a multicriteria choice task using the PAKS technology includes the following stages. First, we gradually reduce the dimensionality of the original attribute space and construct a hierarchical scheme of indicators. We form the scales of all composite indicators using various methods of verbal decision analysis. Construction of a composite indicator scale is considered as a classification procedure, where the classified objects are combinations of grades of the initial attribute scales, and the classes are grades of the composite indicator scale. Further, in the resulting space of a lower dimension, the given task of ordering or classification of multi-attribute objects is solved by some method of decision making.

Let us notice some peculiarities of the PAKS technology application. Aggregation of attributes is based on decision maker preferences and expert knowledge. The DM/expert generates a set of initial characteristics of the options under consideration. Depending on the task features, these characteristics are specified in advance or appear when analyzing the task. For each initial indicator, a scale is built that is usually used in practice or is specially designed. Scales of initial attributes can have numerical (point, interval) or verbal grades of estimates. It is advisable to introduce such gradations that are inherent in the available options O_1, ..., O_q. In this way, the dimension of the original attribute space can be reduced in advance. If a collection of real objects is not initially specified, operations are performed on the set of all possible tuples of estimates in the attribute space formed by the Cartesian product of grades of indicator scales.

When building a hierarchical scheme of indicators, a decision maker/expert, based on his/her experience and intuition, determines the number, composition and content of indicators at each level of the hierarchy. As an indicator, he/she can select one of the original characteristics or several combined characteristics. Indicators are aggregated sequentially. Groups of indicators are combined step by step into new groups of the next hierarchical level, up to a single integral indicator of the last level, if necessary. At each stage of the procedure, the DM/expert specifies which initial indicators are considered independent, and which are included in some composite indicator.


The DM/expert also determines the semantic content of indicators and the grades of rating scales. Indicators should have such scales that, on the one hand, reflect the aggregated features of objects and, on the other hand, are understandable to the DM/expert in the final ordering or classification of objects. We recommend building scales of indicators with a small (three to five) number of verbal grades. The limitation of the grade number on indicator scales is generally caused by the growth of the grade number when estimate combinations are combined into a composite indicator scale. In turn, this makes aggregation of indicators more cumbersome and complicates the explanation of the results obtained. At different stages of aggregation, we propose to use various methods for constructing scales of composite indicators, for example, the methods of tuple stratification, ZAPROS, ORCLASS of verbal analysis, or other methods. This reduces the impact of the method for forming indicator scales on the final results. Unification of procedures for forming the composite indicator scales simplifies explanation and facilitates understanding of the results obtained.

In practical tasks of multicriteria choice, situations are possible when a hierarchical scheme of indicators is fully known (for example, the organizational structure of an enterprise), is partially known (for example, only the structure of device technical characteristics), or is unknown and must be constructed (for example, characteristics of scientific research and its results). In the first case, we should pay the main attention to the development of composite indicator scales. In the second and third cases, we recommend forming various collections of composite indicators in different ways, for example, combining indicators into groups based on semantic commonality. This helps a decision maker to compare the results obtained for several different collections of composite indicators and to evaluate the quality of the formed options for the original task solution.

The use of many methods for constructing composite indicator scales transforms the original choice of objects represented by many non-numerical attributes into a collective choice task. It is expedient to solve such a task by methods of group verbal decision analysis where objects are specified as multisets. So, for sorting objects described by many quantitative and/or qualitative attributes, one can apply the ARAMIS method, which allows ordering objects without building individual rankings of objects. The nominal or ordinal classification of multi-attribute objects with contradictory descriptions is carried out using the CLAVA-HI, CLAVA-NI, MASKA methods.

The block diagram for solving a multicriteria choice task with the multistage PAKS technology, in which the HISCRA method is used for reducing the dimensionality of the attribute space, consists of the following steps (Fig. 6.1).

Step 1. Choose the type of task to be solved: select the best option T_1; order options T_2; distribute options into classes (ordered or not) T_3.
Step 2. Generate a set O_1, ..., O_q, q ≥ 2 of options for solving the task T.
Step 3. Generate a set K_1, ..., K_n, n ≥ 2 of initial indicators (attributes).
Step 4. Generate ordinal scales X_i = {x_i^1, ..., x_i^{h_i}}, i = 1, ..., n of the initial indicators.


Fig. 6.1 Block diagram of the multistage PAKS technology


Step 5. Generate sets L1, ..., Lm, ..., N1, ..., Nl, l < m < n of composite indicators, which aggregate the initial indicators K1, ..., Kn.
Step 6. Generate ordinal scales Y_j = {y_j^1, ..., y_j^{g_j}}, j = 1, ..., m, ..., Z_k = {z_k^1, ..., z_k^{f_k}}, k = 1, ..., l of composite indicators using different methods for attribute aggregation, for example, stratification of tuples W1; multicriteria classification of tuples W2; ranking of tuples W3.
Step 7. Build a hierarchical scheme for indicator aggregation, specifying which characteristics are combined into the intermediate indicators and which ones are considered the final indicators.
Step 8. Solve the task T using one of the multicriteria choice methods. If the result obtained satisfies the decision maker/expert, then the algorithm terminates. Otherwise, save the result and go to step 9.
Step 9. If the result obtained does not satisfy the decision maker/expert, then either change the scheme for indicator aggregation and build a new hierarchical scheme of composite indicators (go to step 7), or change a scale Y_j = {y_j^1, ..., y_j^{g_j}}, j = 1, ..., m of one or more composite indicators (go to step 6), or generate a new set L1, ..., Lm of composite indicators (go to step 5).
Objects can be represented not only by vectors/tuples of attribute values, but also by multisets of estimate grades on indicator scales. In the attribute space K1, ..., Kn, the object Op, p = 1, ..., q is described as a multiset (3.1)
A_p = {k_Ap(x^1) ◦ x^1, ..., k_Ap(x^h) ◦ x^h}
over the set X = {x^1, ..., x^h} of estimates, or a multiset (3.2)
A_p = {k_Ap(x_1^1) ◦ x_1^1, ..., k_Ap(x_1^{h_1}) ◦ x_1^{h_1}; ...; k_Ap(x_n^1) ◦ x_n^1, ..., k_Ap(x_n^{h_n}) ◦ x_n^{h_n}}
over the extended set X_1 ∪ ... ∪ X_n = {x_1^1, ..., x_1^{h_1}; ...; x_n^1, ..., x_n^{h_n}} of estimates. The multiplicities k_Ap(x^e), k_Ap(x_i^{e_i}) show how many times the estimates x^e ∈ X, x_i^{e_i} ∈ X_i are present in the description of the object Op. With a single integral indicator Nk, the object Op corresponds to one of the grades of the estimate scale Z_k = {z_k^1, ..., z_k^{f_k}}, k = 1, ..., l.
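To make the multiset representation concrete, here is a minimal sketch (in Python; not from the book) of how an object described by verbal grades on several attribute scales can be stored as a multiset of grade multiplicities. The attribute names and grades below are illustrative only.

```python
from collections import Counter

# Hypothetical object: one verbal grade on each of three attributes.
estimates = {"K1": "good", "K2": "good", "K3": "satisfactory"}

# Multiset over a common grade set X = {x^1, ..., x^h} (form (3.1)):
A_p = Counter(estimates.values())            # {'good': 2, 'satisfactory': 1}

# Multiset over the extended set X_1 ∪ ... ∪ X_n (form (3.2)):
# grades of different attributes are kept distinct by prefixing the attribute.
A_p_ext = Counter(f"{attr}:{grade}" for attr, grade in estimates.items())

# Multisets (e.g. assessments by several experts) combine by multiset
# addition, which is exactly Counter addition.
another_assessment = Counter({"good": 1, "bad": 2})
combined = A_p + another_assessment
print(A_p, A_p_ext, combined)
```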

6.2 Demonstrative Example: Technology PAKS

Using the PAKS technology, let us rank ten objects (pupils) O1, ..., O10 in attribute spaces of different dimensionality. Objects are evaluated upon eight qualitative criteria (the studied subjects): K1 Mathematics, K2 Physics, K3 Chemistry, K4 Biology, K5 Geography, K6 History, K7 Literature, K8 Foreign language.


Table 6.1 Pupil marks in the studied subjects K1-K8 and disciplines K1, K2, M3, M4 as multisets of attributes

G1     x^1  x^2  x^3  x^4  x^5   l_p      G2     y^1  y^2  y^3  y^4  y^5   l_p
A1      0    0    0    4    4    0.333    G1      0    0    0    2    2    0.333
A2      2    4    1    1    0    0.571    G2      1    2    0    1    0    0.571
A3      5    0    1    2    0    0.727    G3      2    1    1    0    0    0.667
A4      0    1    1    3    3    0.385    G4      0    0    2    0    2    0.333
A5      0    0    0    7    1    0.467    G5      0    0    0    4    0    0.500
A6      0    0    0    4    4    0.333    G6      0    0    0    1    3    0.200
A7      2    2    3    1    0    0.571    G7      1    1    1    1    0    0.571
A8      0    1    2    3    2    0.429    G8      0    0    1    2    1    0.286
A9      1    3    4    0    0    0.533    G9      0    3    1    0    0    0.500
A10     0    0    1    2    5    0.273    G10     0    0    0    1    3    0.200

All criteria K1, ..., K8 have the same five-point rating scale X = {x^1, x^2, x^3, x^4, x^5}, where x^1 is 1/very bad, x^2 is 2/bad, x^3 is 3/satisfactory, x^4 is 4/good, x^5 is 5/excellent. Estimate grades are ordered by preference as x^5 ≻ x^4 ≻ x^3 ≻ x^2 ≻ x^1. The importance of all criteria is the same, wl = 1. We shall use the HISCRA method to reduce the dimensionality of the attribute space, and the ARAMIS method to rank objects by their features.
In the original space of initial attributes K1, ..., K8, the object Op, p = 1, ..., 10 is given by a vector/tuple x_p = (x_p1, ..., x_p8) of estimates. Represent the object Op by a multiset (3.1)
A_p = {k_Ap(x^1) ◦ x^1, k_Ap(x^2) ◦ x^2, k_Ap(x^3) ◦ x^3, k_Ap(x^4) ◦ x^4, k_Ap(x^5) ◦ x^5}
over the set X = {x^1, x^2, x^3, x^4, x^5} of scale grades. Objects and their attributes are shown in the matrix G1 (Table 6.1). Thus, the object O1 is described by the multiset A1 = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 4 ◦ x^4, 4 ◦ x^5}.
In the reduced space of final indicators K1, K2, M3, M4, where M3 = (K3, K4, K5) Natural disciplines and M4 = (K6, K7, K8) Humanitarian disciplines, define the object Op by a multiset
G_p = {k_Gp(y^1) ◦ y^1, k_Gp(y^2) ◦ y^2, k_Gp(y^3) ◦ y^3, k_Gp(y^4) ◦ y^4, k_Gp(y^5) ◦ y^5}
over the set Y = {y^1, y^2, y^3, y^4, y^5} of new scale grades, where y^e is a grade x^e or u^e, e = 1, ..., 5. Multiplicities k_Gp(y^e) are given in the matrix G2 (Table 6.1). Thus, the object O1 corresponds to the multiset G1 = {0 ◦ y^1, 0 ◦ y^2, 0 ◦ y^3, 2 ◦ y^4, 2 ◦ y^5}.
In the original space, the best O+ and the worst O- objects are specified by the multisets of estimates
A+ = {0 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 0 ◦ x^4, 8 ◦ x^5},
A- = {8 ◦ x^1, 0 ◦ x^2, 0 ◦ x^3, 0 ◦ x^4, 0 ◦ x^5};
in the reduced space, by the multisets of estimates
G+ = {0 ◦ y^1, 0 ◦ y^2, 0 ◦ y^3, 0 ◦ y^4, 4 ◦ y^5},
G- = {4 ◦ y^1, 0 ◦ y^2, 0 ◦ y^3, 0 ◦ y^4, 0 ◦ y^5}.
The corresponding values of the proximity indicator l_p = l(Op) (3.15) of objects Op to the best object O+ are shown in Table 6.1. The rankings R_A1 of objects in the original space and R_A2 in the reduced space are as follows:

R_A1 ⇔ [O10] ≻ [O1, O6 ≻ O4] ≻ [O8 ≻ O5] ≻ [O9 ≻ O2, O7] ≻ [O3],
l_p1 · 10^-3:  273  333  385  429  467  533  571  727.     (6.1)

R_A2 ⇔ [O6, O10] ≻ [O8 ≻ O1, O4] ≻ [O5, O9] ≻ [O2, O7] ≻ [O3],
l_p2 · 10^-3:  200  286  333  500  571  667.     (6.2)

Below the rankings R_A1 and R_A2, the proximity indicators l_p are given for each object Op. Distant groups of objects, which can be considered as classes or clusters, are enclosed in square brackets. Note that the rankings R_A1 and R_A2 differ from each other. In the first case, the pupil O10 has the excellent academic score, pupils O1, O6, O4 have the good score, pupils O8, O5 have the satisfactory score, pupils O9, O2, O7 have the bad score, and pupil O3 has the very poor score. In the second case, pupils O6, O10 have the excellent academic score, pupils O8, O1, O4 have the good score, pupils O5, O9 have the satisfactory score, pupils O2, O7 have the bad score, and pupil O3 has the very poor score. These object groups also do not coincide with the groups generated according to grades on the scale of the integral indicator N0 of academic score (Table 5.1): the pupils O1, O6, O10 have the excellent score, pupils O4, O5, O8 have the good score, the pupil O7 has the satisfactory score, and pupils O2, O3, O9 have the bad score. This circumstance prompted the idea that the choice task should be solved not by one but by several different methods, and the results obtained should then be analyzed. Such an approach was implemented in the multimethod PAKS-M technology described below.
Efficiency of the PAKS technology for solving multicriteria choice tasks can be evaluated in different ways depending on the type of the task T. For example, in ranking tasks, we assess efficiency as a ratio of the numbers of incomparable alternatives before and after reducing the dimensionality of the attribute space. In classification tasks, we evaluate efficiency by the number of calls to a decision maker/expert that are required to build a complete consistent classification. Accordingly, it is possible to compare the numbers of calls to the person when solving a classification task in the original and


new attribute space. However, for classification tasks of high dimensionality, this approach is not always applicable, since in some cases it is simply impossible to construct a complete consistent classification in the original attribute space. This is due to the growth of the attribute space dimensionality as both the number of classified objects and the complexity of object descriptions increase. The PAKS technology gives a decision maker/expert opportunities to systematize the available information, reduce the time and labor intensity of solving a multicriteria choice task, and analyze and justify the results obtained.
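As an illustration of the computations behind the ranking R_A1 above, the sketch below (Python; not from the book) recomputes the proximity of each pupil to the best object O+ from the grade counts of matrix G1 in Table 6.1. The exact form of the proximity indicator (3.15) is defined earlier in the book; here it is assumed to be the ratio d(A_p, A+) / (d(A_p, A+) + d(A_p, A-)) with the symmetric-difference (city-block) metric on multiplicities, an assumption that reproduces the l_p values listed for A1, ..., A10.

```python
# Grade counts (x^1..x^5) for pupils A1..A10, taken from matrix G1 of Table 6.1.
G1 = {
    1: (0, 0, 0, 4, 4), 2: (2, 4, 1, 1, 0), 3: (5, 0, 1, 2, 0),
    4: (0, 1, 1, 3, 3), 5: (0, 0, 0, 7, 1), 6: (0, 0, 0, 4, 4),
    7: (2, 2, 3, 1, 0), 8: (0, 1, 2, 3, 2), 9: (1, 3, 4, 0, 0),
    10: (0, 0, 1, 2, 5),
}
A_best = (0, 0, 0, 0, 8)    # A+: all eight marks are 5/excellent
A_worst = (8, 0, 0, 0, 0)   # A-: all eight marks are 1/very bad

def d(a, b):
    """Symmetric-difference (city-block) distance between two multisets."""
    return sum(abs(ka - kb) for ka, kb in zip(a, b))

def proximity(a):
    """Assumed form of the proximity indicator l_p: closer to 0 is better."""
    return d(a, A_best) / (d(a, A_best) + d(a, A_worst))

for p, a in G1.items():
    print(f"O_{p}: l_p = {proximity(a):.3f}")
# Sorting the objects by increasing l_p yields the ranking R_A1 of (6.1).
```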

6.3 Progressive Aggregation of Classified Situations with Many Methods: Technology PAKS-M

The multimethod technology for progressive aggregation of classified situations with many methods (PAKS-M), like the multistage technology PAKS, is intended to solve various problems of individual and collective multicriteria choice in a high-dimensional attribute space, based on decision makers' preferences and experts' knowledge [1, 4-7]. The PAKS-M technology provides a reduction of the dimensionality of the initial attribute space; construction of composite indicators and/or an integral quality indicator that aggregate the initial numerical, symbolic or verbal attributes of objects; and classification or ordering of the considered multi-attribute objects in a reduced attribute space, using different decision making methods.
In the PAKS-M technology, in addition to what the PAKS technology does, we build several hierarchical aggregation schemes with different sets of final indicators, which are then used to solve the given task of multicriteria choice [8]. Final indicators convey the meaning of the initial characteristics in a compact form and make it possible to substantiate the choice of the most preferable option. Decreasing the number of indicators not only allows one to solve the choice problem but also facilitates analysis and explanation of the results obtained.
Even when the choice task is solved by a single person, he/she can consider the task solution from different points of view, applying various approaches that are dictated by his/her life experience or conditioned by his/her understanding of the task. Thus, a DM/expert can obtain several results of solving the task, differing from each other but at the same time significant for him/her and therefore necessary for the final choice.
Each hierarchical tree of indicators represents the point of view of some new independent decision maker/expert, reflecting his/her personal knowledge, judgments and values. In the presence of several aggregation trees, objects are simultaneously evaluated by a group of new experts, and the task being solved becomes a collective choice task. The aggregated group preference is revealed using several methods of group multicriteria choice and voting procedures. This excludes the possibility of any single method dictating the outcome. Application of methods of group verbal analysis for solving the task


of collective choice ensures that we take into account the various interests of many participants, the diversity and non-coincidence of their goals, and their ways of expressing preferences. Collation of the results of solving a choice task, obtained by several methods for different indicators, allows one to analyze the final results, compare the systems of indicators with each other, choose the most preferable aggregation scheme, and evaluate the quality of the choice made. Solving a multicriteria choice task by several decision makers/experts takes into account different, including contradictory, points of view of the participants without requiring a coincidence of their opinions. Solving a task in several ways reduces the influence of the peculiarities of any single method on the results. Increasing the number of persons who take part in the choice and the number of methods by which the task is solved helps to increase the validity of the results. As a rule, results obtained by different methods based on collective preference, in comparison with results obtained by one method based on unipolar judgments, inspire more confidence in the persons interested in solving the problem.
Let us consider the main stages of solving a multicriteria choice task using the multimethod PAKS-M technology. Initially, with the participation of a decision maker and/or expert, a set of original characteristics of the analyzed objects is generated. When setting the choice task, it is desirable to include among the original attributes such indicators that reflect the basic features of the objects. Further, based on DM preferences and/or expert knowledge, several hierarchical indicator schemes for reducing the attribute space dimensionality are built using the HISCRA-M or SOCRATES method. Aggregation of indicators is usually a multistage procedure. The last level of each aggregation tree is determined by the content of the considered practical task; this can be, if necessary, several final indicators or a single integral indicator. Then scales of all composite indicators are generated. It is possible to construct each grade of a composite indicator scale in different ways, using different methods to construct the composite indicator scale itself, and choosing different numbers of grades and variation ranges of variables in the scale grades for indicators at any hierarchical level. While building a tree of indicator aggregation and constructing estimation scales for composite indicators, the DM/expert constantly balances between reducing time costs and simplifying the choice, on the one hand, and a stricter distinction of the choice objects, on the other hand. Preference can be given to certain factors depending on the specifics of the choice task being solved and on the knowledge and experience of the person.
At the final stage, in the constructed attribute space of lower dimensionality, the given task of multicriteria choice is solved using several methods for a greater validity of the results. Each method for solving the task can be considered as a new independent decision maker/expert. The results obtained by many different methods are analyzed and, if required, they are aggregated again using another method of group decision making. After that, the DM/expert makes the well-founded final choice. Thus, a solution is found not for a single task of collective choice, but for a whole collection of such tasks.
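To make the idea of several aggregation trees concrete, here is a minimal sketch (Python; not from the book) in which each hierarchical scheme is written down as a mapping from final indicators to the initial attributes they absorb. The scheme and indicator names are illustrative only and echo the demonstrative example of Sect. 6.4.

```python
# Each aggregation scheme plays the role of an independent "virtual expert":
# it states which initial attributes K1..K8 feed which final indicator.
SCHEMES = {
    "scheme_1": {  # four final indicators L1..L4
        "L1": ["K1", "K2"], "L2": ["K3", "K4"],
        "L3": ["K5", "K6"], "L4": ["K7", "K8"],
    },
    "scheme_2": {  # two final indicators M1, M2 built on top of L1..L4
        "M1": ["K1", "K2", "K3", "K4"], "M2": ["K5", "K6", "K7", "K8"],
    },
    "scheme_3": {  # a single integral indicator
        "N": ["K1", "K2", "K3", "K4", "K5", "K6", "K7", "K8"],
    },
}

def covers_all(scheme, attributes):
    """Every initial attribute must be absorbed by exactly one final indicator."""
    used = [a for group in scheme.values() for a in group]
    return sorted(used) == sorted(attributes)

attrs = [f"K{i}" for i in range(1, 9)]
for name, scheme in SCHEMES.items():
    print(name, "covers all attributes:", covers_all(scheme, attrs))
```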


The block diagram for solving a multicriteria choice task with the multimethod PAKS-M technology consists of the following steps (Fig. 6.2).
Step 1. Choose the type of a task to be solved: select the best option T1; order options T2; distribute options into classes (ordered or not) T3.
Step 2. Generate a set O1, ..., Oq, q ≥ 2 of options for solving the task T.
Step 3. Generate a set K1, ..., Kn, n ≥ 2 of initial indicators (attributes).
Step 4. Generate ordinal scales X_i = {x_i^1, ..., x_i^{h_i}}, i = 1, ..., n of the initial indicators.
Step 5. Generate sets L1, ..., Lm, ..., N1, ..., Nl, l < m < n of composite indicators, which aggregate the initial indicators K1, ..., Kn.
Step 6. Generate ordinal scales Y_j = {y_j^1, ..., y_j^{g_j}}, j = 1, ..., m, ..., Z_k = {z_k^1, ..., z_k^{f_k}}, k = 1, ..., l of composite indicators using different methods for attribute aggregation, for example, stratification of tuples W1; multicriteria classification of tuples W2; ranking of tuples W3.
Step 7. Build several hierarchical schemes for indicator aggregation, using various ways to combine attributes and/or combinations of methods, to generate composite indicators and their scales at different levels of the hierarchy, and to reduce the dimensionality of the attribute space with the HISCRA-M or SOCRATES methods.
Step 8. Solve the task T using one of the multicriteria choice methods. If the result obtained satisfies the decision maker/expert, then save the result and go to step 9. Otherwise, either change the method for solving the task T (go to step 8), change the scheme for indicator aggregation and build a new hierarchical scheme of composite indicators (go to step 7), change a scale Y_j = {y_j^1, ..., y_j^{g_j}}, j = 1, ..., m of one or more composite indicators (go to step 6), generate a new set L1, ..., Lm of composite indicators (go to step 5), or change a scale X_i = {x_i^1, ..., x_i^{h_i}}, i = 1, ..., n of one or more initial indicators K1, ..., Kn (go to step 4).
Step 9. Solve the task T using several different methods of multicriteria choice. If the results obtained satisfy the DM/expert, then save the results and go to step 10. If the results obtained do not satisfy the DM/expert, then go to step 8.
Step 10. Find the final solution to the task T using one of the group choice methods, analyze and justify the solution. If the final solution of the task T satisfies the DM/expert, then the algorithm terminates. Otherwise, go to step 3, and the task T is solved again.

Fig. 6.2 Block diagram of the multimethod PAKS-M technology

Objects can be represented both by vectors/tuples of attribute values and by multisets of estimate grades on indicator scales. In the space of initial attributes K1, ..., Kn, the object Op and its versions Op^(s) are defined, respectively, by the multisets A_p = {k_Ap(x^1) ◦ x^1, ..., k_Ap(x^h) ◦ x^h} and A_p^(s) = {k_Ap^(s)(x^1) ◦ x^1, ..., k_Ap^(s)(x^h) ◦ x^h}, s = 1, ..., t over the set X = {x^1, ..., x^h} or over the extended set X_1 ∪ ... ∪ X_n = {x_1^1, ..., x_1^{h_1}; ...; x_n^1, ..., x_n^{h_n}} of estimates. In the space of attributes Q1, ..., Qn with reduced scales, the object Op and versions Op^(s) are associated with the multisets B_p = {k_Bp(q^1) ◦ q^1, ..., k_Bp(q^d) ◦ q^d} and B_p^(s) = {k_Bp^(s)(q^1) ◦ q^1, ..., k_Bp^(s)(q^d) ◦ q^d} over the set Q = {q^1, ..., q^d} or Q_1 ∪ ... ∪ Q_n = {q_1^1, ..., q_1^{d_1}; ...; q_n^1, ..., q_n^{d_n}} of estimates. In the space of aggregated attributes N1, ..., Nl, the object Op and versions Op^(s) are described as the multisets G_p = {k_Gp(z^1) ◦ z^1, ..., k_Gp(z^f) ◦ z^f} and G_p^(s) = {k_Gp^(s)(z^1) ◦ z^1, ..., k_Gp^(s)(z^f) ◦ z^f} over the set Z = {z^1, ..., z^f} or Z_1 ∪ ... ∪ Z_l = {z_1^1, ..., z_1^{f_1}; ...; z_l^1, ..., z_l^{f_l}} of estimates. The multiplicities k_Ap(x^e), k_Bp(q^e), k_Gp(z^e) show how many times the estimates x^e ∈ X, x_i^{e_i} ∈ X_i, q^e ∈ Q, q_i^{e_i} ∈ Q_i, z^e ∈ Z, z_k^{e_k} ∈ Z_k are present in the description of the object Op or its version Op^(s). With a single integral indicator N or Nk, the object Op corresponds to one of the grades of the estimate scale Z = {z^1, ..., z^f} or Z_k = {z_k^1, ..., z_k^{f_k}}, k = 1, ..., l.

6.4 Demonstrative Example: Technology PAKS-M

Using the PAKS-M technology, let us rank ten objects (pupils) O1, ..., O10 in various reduced attribute spaces. Objects are evaluated by two experts (per two semesters) upon eight qualitative criteria (the studied subjects): K1 Mathematics, K2 Physics, K3 Chemistry, K4 Biology, K5 Geography, K6 History, K7 Literature, K8 Foreign language. Each indicator K_i, i = 1, ..., 8 has its own five-point rating scale X_i = {x_i^1, x_i^2, x_i^3, x_i^4, x_i^5}, where x_i^1 is 1/very bad, x_i^2 is 2/bad, x_i^3 is 3/satisfactory, x_i^4 is 4/good, x_i^5 is 5/excellent. Estimate grades are ordered by preference as x_i^5 ≻ x_i^4 ≻ x_i^3 ≻ x_i^2 ≻ x_i^1. The importance of all subjects is the same, wl = 1, and the semi-annual marks are equivalent, c' = c'' = 1. To reduce the dimensionality of the attribute space, we shall use the HISCRA-M method and define several different reduced spaces. In each of these spaces, we shall rank objects according to their features with different methods of decision making and build a generalized ranking of objects. Then, using one of the procedures for aggregating ranks, we shall find the final ordering of objects.
First, pass from the initial attribute scales to scales with a fewer number of gradations. For simplicity, let us assume that a shortened scale of any indicator consists of three estimate grades. Reduce the five-point scales X_i = {x_i^1, x_i^2, x_i^3, x_i^4, x_i^5} of the attributes K1, ..., K8 and introduce the shortened three-point scales Q_i = {q_i^0, q_i^1, q_i^2}. Here q_i^0 is 0/high mark, which includes the marks x_i^5 (5/excellent) and x_i^4 (4/good); q_i^1 is 1/middle mark, which corresponds to the mark x_i^3 (3/satisfactory); q_i^2 is 2/low mark, which includes the marks x_i^2 (2/bad) and x_i^1 (1/very bad). Obviously, the new grades of estimates are also ordered: q_i^0 ≻ q_i^1 ≻ q_i^2.
In the attribute space K1, ..., K8, versions Op^(s) of the object Op, p = 1, ..., 10 can be specified by vectors/tuples x_p^(s) = (x_p1^(s), ..., x_p8^(s)) of estimates on the initial scales and q_p^(s) = (q_p1^(s), ..., q_p8^(s)), s = 1, 2 of estimates on the shortened scales (Table 5.2). However, as noted earlier, it is impossible to aggregate these vectors/tuples and represent the object Op "as a whole" in the form x_p = (x_p1, ..., x_p8) or q_p = (q_p1, ..., q_p8). Define the object Op by the multisets (5.18) and (5.19)
A_p = {k_Ap(x_1^1) ◦ x_1^1, ..., k_Ap(x_1^5) ◦ x_1^5; ...; k_Ap(x_8^1) ◦ x_8^1, ..., k_Ap(x_8^5) ◦ x_8^5},
B_p = {k_Bp(q_1^0) ◦ q_1^0, k_Bp(q_1^1) ◦ q_1^1, k_Bp(q_1^2) ◦ q_1^2; ...; k_Bp(q_8^0) ◦ q_8^0, k_Bp(q_8^1) ◦ q_8^1, k_Bp(q_8^2) ◦ q_8^2},
respectively, over the set X_1 ∪ ... ∪ X_8 of grades of the initial scales and the set Q_1 ∪ ... ∪ Q_8 of grades of the shortened scales, where each of the multisets is a sum: A_p = A_p^(1) + A_p^(2), B_p = B_p^(1) + B_p^(2). The multiplicities k_Ap(x_i^{e_i}), i = 1, ..., 8, e_i = 1, ..., 5 are rows of the matrix H (Table 3.5). The multiplicities k_Bp(q_i^{o_i}), o_i = 0, 1, 2 are rows of the matrix H0 (Table 5.4) and are specified by rules (5.11):
k_Bp(q_i^0) = k_Ap(x_i^5) + k_Ap(x_i^4),
k_Bp(q_i^1) = k_Ap(x_i^3),
k_Bp(q_i^2) = k_Ap(x_i^2) + k_Ap(x_i^1).
We aggregate the new indicators in four different ways, as shown in Fig. 5.5, and consider five schemes for indicator aggregation, treating the transition to the shortened attribute scales as the zero scheme. Next, we shall solve the initially stated task of ordering multi-attribute objects for each g-th scheme of indicator aggregation (g = 0, 1, 2, 3, 4), presenting the objects by the appropriate multisets. For a greater validity of the final result, we shall rank the objects using three different methods of group choice: ARAMIS, lexicographic ordering, and weighted sums of estimate grades. Then we combine the collective rankings obtained by the different methods for each g-th scheme into a generalized group ranking of objects with the Borda voting procedure. Finally, we construct the final aggregated ordering of objects, which combines the generalized group rankings obtained for the five different indicator aggregation schemes, with the Goodman-Markowitz procedure.
For each object Op, p = 1, ..., 10, calculate the following values. For the ARAMIS method, we calculate a value of the proximity index l_gp of the object Op to the best object O+, which are specified by the multisets G_p and G+ in the g-th scheme, namely: G+ = B+ for g = 0; G+ = C+ for g = 1; G+ = D+ for g = 2; G+ = E+ for g = 3; G+ = F+ for g = 4. For the lexicographic ordering method, we calculate a place r_gp of the object Op, which is determined by the sums v_gp^o = Σ_{i=1}^{d_i} k_Gp(a_i^{o_i}) of estimate multiplicities with grades a_i^{o_i}, o_i = 0, 1, 2 in the g-th scheme, namely: a_i^{o_i} = q_i^{o_i}, d_i = 8 for g = 0; a_i^{o_i} = y_i^{o_i}, d_i = 4 for g = 1; a_i^{o_i} = u_i^{o_i}, d_i = 2 for g = 2; a_i^{o_i} = z_1^{o_i}, d_i = 1 for g = 3; a_i^{o_i} = z_2^{o_i}, d_i = 1 for g = 4. For the method of weighted sums, we calculate the sum v_gp = Σ_{o=0}^{2} w^o v_gp^o of partial value functions of the object Op estimate grades in the g-th scheme, where the significances of the grades are equal to w^0 = 3, w^1 = 2, w^2 = 1. For the Borda voting procedure, we calculate the Borda score b_gp = Σ_{f=1}^{3} b_gp^f of the object Op, where b_gp^f is the score of Op in the f-th collective ranking obtained, respectively, by the methods ARAMIS, lexicographic ordering, and weighted sums of estimate grades.
According to the zero scheme for indicator aggregation, the object Op is represented by the multiset B_p (5.19) of estimates with the shortened verbal scales Q_i = {q_i^0, q_i^1, q_i^2}, i = 1, ..., 8 of the attributes K1, ..., K8. In the reduced attribute space, the best O+ and worst O- objects are specified by the multisets of estimates
B+ = {2 ◦ q_1^0, 0 ◦ q_1^1, 0 ◦ q_1^2; 2 ◦ q_2^0, 0 ◦ q_2^1, 0 ◦ q_2^2; ...; 2 ◦ q_8^0, 0 ◦ q_8^1, 0 ◦ q_8^2},
B- = {0 ◦ q_1^0, 0 ◦ q_1^1, 2 ◦ q_1^2; 0 ◦ q_2^0, 0 ◦ q_2^1, 2 ◦ q_2^2; ...; 0 ◦ q_8^0, 0 ◦ q_8^1, 2 ◦ q_8^2}.
Calculated values for the objects O1, ..., O10 are shown in Table 6.2. Collective rankings of objects, obtained by the ARAMIS method R_0A^gr, lexicographic ordering R_0L^gr, and weighted sums of estimates R_0Σ^gr, are as follows:

R_0A^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [(O9 ≻ O7) ≻ O3 ≻ O2],
l_0p · 10^-3:  0  59  211  263  556  565  684  700.     (6.3)

R_0L^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [(O7 ≻ O3) ≻ O2 ≻ O9],
r_0p:  1-2  3  4-5  6  7  8  9  10.     (6.4)

R_0Σ^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [(O7, O9) ≻ O3 ≻ O2],
v_0p:  48  47  43  41  27  25  24.     (6.5)
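The weighted sums v_0p under (6.5) follow directly from the grade counts (v_0p^0, v_0p^1, v_0p^2) given in Table 6.2, and the lexicographic places behind (6.4) come from comparing these count vectors grade by grade. A minimal sketch (Python; not from the book):

```python
# Counts of high/middle/low grades (v^0, v^1, v^2) per pupil, from Table 6.2.
counts = {
    1: (16, 0, 0), 2: (2, 4, 10), 3: (3, 3, 10), 4: (12, 3, 1), 5: (15, 1, 0),
    6: (16, 0, 0), 7: (3, 5, 8), 8: (12, 3, 1), 9: (1, 9, 6), 10: (11, 3, 2),
}
weights = (3, 2, 1)  # w^0 = 3, w^1 = 2, w^2 = 1

def weighted_sum(c):
    return sum(w * k for w, k in zip(weights, c))

# Lexicographic ordering compares the count vectors themselves (more high
# grades first, then more middle grades), which Python tuples do natively.
by_weighted_sum = sorted(counts, key=lambda p: weighted_sum(counts[p]), reverse=True)
by_lexicographic = sorted(counts, key=lambda p: counts[p], reverse=True)
print([(p, weighted_sum(counts[p])) for p in by_weighted_sum])  # matches v_0p
print(by_lexicographic)                                         # matches (6.4)
```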

The generalized group ranking R_0^gr of objects, which combines the collective rankings R_0A^gr, R_0L^gr, R_0Σ^gr, has the form:

R_0^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [(O7 ≻ O9) ≻ O3 ≻ O2],
b_0p:  25.5  21  16.5  12  7.5  5.5  4  1.     (6.6)
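A minimal sketch (Python; not from the book) of the Borda combination that produces the scores b_0p in (6.6): under the convention assumed here, each of the three collective rankings contributes 10 minus the object's tie-averaged place, which reproduces the b_0p row of Table 6.2.

```python
def average_ranks(scores, higher_is_better):
    """Tie-averaged places 1..n computed from a dict of scores."""
    ordered = sorted(scores, key=scores.get, reverse=higher_is_better)
    places, i = {}, 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and scores[ordered[j]] == scores[ordered[i]]:
            j += 1
        for obj in ordered[i:j]:
            places[obj] = (i + 1 + j) / 2   # average of places i+1 .. j
        i = j
    return places

# Scores of the three collective rankings (6.3)-(6.5), taken from Table 6.2.
l0 = {1: 0.000, 2: 0.700, 3: 0.684, 4: 0.211, 5: 0.059,
      6: 0.000, 7: 0.565, 8: 0.211, 9: 0.556, 10: 0.253}   # smaller is better
r0 = {1: 1.5, 2: 9, 3: 8, 4: 4.5, 5: 3, 6: 1.5, 7: 7, 8: 4.5, 9: 10, 10: 6}
v0 = {1: 48, 2: 24, 3: 25, 4: 43, 5: 47, 6: 48, 7: 27, 8: 43, 9: 27, 10: 41}

n = 10
rank_lists = [average_ranks(l0, False), r0, average_ranks(v0, True)]
borda = {p: sum(n - ranks[p] for ranks in rank_lists) for p in l0}
print(borda)   # e.g. O_1 -> 25.5, O_2 -> 1.0, matching b_0p in Table 6.2
```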


Table 6.2 Calculated values for objects represented by multisets (zero aggregation scheme)

H0                      O1      O2      O3      O4      O5      O6      O7      O8      O9      O10
l_0p                    0.000   0.700   0.684   0.211   0.059   0.000   0.565   0.211   0.556   0.253
v_0p^0,v_0p^1,v_0p^2    16,0,0  2,4,10  3,3,10  12,3,1  15,1,0  16,0,0  3,5,8   12,3,1  1,9,6   11,3,2
r_0p                    1-2     9       8       4-5     3       1-2     7       4-5     10      6
v_0p                    48      24      25      43      47      48      27      43      27      41
b_0p                    25.5    1       4       16.5    21      25.5    7.5     16.5    5.5     12

For the object Op, a proximity indicator l_0p is given below the ranking R_0A^gr (6.3); a lexicographic place r_0p is given below the ranking R_0L^gr (6.4); a weighted sum v_0p of partial value functions v_0p^0, v_0p^1, v_0p^2 is given below the ranking R_0Σ^gr (6.5). The Borda score b_0p, averaged for objects with the same places, is indicated below the ranking R_0^gr (6.6). Closed objects are enclosed in round brackets, distant groups of objects are enclosed in square brackets.
The rankings R_0A^gr, R_0Σ^gr, R_0^gr are quite similar; the ranking R_0L^gr is somewhat different from them. Note that the head parts of all the rankings R_0A^gr, R_0L^gr, R_0Σ^gr, R_0^gr completely coincide. So, according to the zero aggregation scheme, the pupils O1, O6, O5 have the high marks, pupils O4, O8, O10 have the middle marks, and pupils O7, O9, O3, O2 have the low marks.
According to the first aggregation scheme (Fig. 5.5a), all attributes K1, ..., K8 with scales Q_i = {q_i^0, q_i^1, q_i^2}, i = 1, ..., 8 are combined into the final indicators L1 = (K1, K2) Physical-Mathematical disciplines, L2 = (K3, K4) Chemical-Biological disciplines, L3 = (K5, K6) Social disciplines, L4 = (K7, K8) Philological disciplines. The composite indicators L1, ..., L4 have rating scales Y_j = {y_j^0, y_j^1, y_j^2}, j = 1, 2, 3, 4 with verbal grades y_j^{o_j} = (q_a^{e_a}, q_c^{e_c}), o_j, e_a, e_c = 0, 1, 2. Here a = 1, c = 2 for j = 1; a = 3, c = 4 for j = 2; a = 5, c = 6 for j = 3; a = 7, c = 8 for j = 4.
In the reduced space Y_1 × Y_2 × Y_3 × Y_4, versions Op^(s), s = 1, 2 of the object Op are given by the vectors/tuples y_p^(s) = (y_p1^(s), y_p2^(s), y_p3^(s), y_p4^(s)) (Table 5.3). But it is impossible to specify the object Op with a single vector/tuple y_p = (y_p1, y_p2, y_p3, y_p4). In the reduced space Y_1 ∪ Y_2 ∪ Y_3 ∪ Y_4, represent the object Op by a multiset (5.20)
C_p = {k_Cp(y_1^0) ◦ y_1^0, k_Cp(y_1^1) ◦ y_1^1, k_Cp(y_1^2) ◦ y_1^2; ...; k_Cp(y_4^0) ◦ y_4^0, k_Cp(y_4^1) ◦ y_4^1, k_Cp(y_4^2) ◦ y_4^2}
over the set Y_1 ∪ ... ∪ Y_4 of estimate grades of the indicators L1, ..., L4. Each multiset is given by a sum: C_p = C_p^(1) + C_p^(2). The multiplicities k_Cp(y_j^{o_j}), j = 1, 2, 3, 4 are rows of the matrix H1 (Table 6.3) and are determined by the rule


Table 6.3 Calculated values for objects represented by multisets (first aggregation scheme)

H1     y1^0 y1^1 y1^2   y2^0 y2^1 y2^2   y3^0 y3^1 y3^2   y4^0 y4^1 y4^2   l_1p    v_1p^0,v_1p^1,v_1p^2   r_1p   v_1p   b_1p
C1      2    0    0      2    0    0      2    0    0      2    0    0     0.000   8,0,0                  1-2    24     25.5
C2      0    1    1      0    0    2      0    1    1      0    0    2     0.800   0,2,6                  10     10     0
C3      0    0    2      0    0    2      0    2    0      0    1    1     0.727   0,3,5                  7-9    11     6
C4      1    1    0      0    2    0      2    0    0      1    1    0     0.333   4,4,0                  4-5    20     16.5
C5      2    0    0      1    1    0      2    0    0      2    0    0     0.111   7,1,0                  3      23     21
C6      2    0    0      2    0    0      2    0    0      2    0    0     0.000   8,0,0                  1-2    24     25.5
C7      0    1    1      0    1    1      0    1    1      0    0    2     0.727   0,3,5                  7-9    11     6
C8      2    0    0      0    2    0      1    1    0      1    1    0     0.333   4,4,0                  4-5    20     16.5
C9      0    1    1      0    0    2      0    1    1      0    1    1     0.727   0,3,5                  7-9    11     6
C10     1    1    0      1    1    0      0    2    0      1    1    0     0.385   3,5,0                  6      19     12

k_Cp(y_j^{o_j}) = k_Cp^(1)(y_j^{o_j}) + k_Cp^(2)(y_j^{o_j}).
Here the grade y_j^{o_j} consists of the estimate grades q_a^{e_a}, q_c^{e_c} of the initial indicators K_a, K_c, which form the composite indicator L_j. The best O+ and worst O- objects are described by the multisets of estimates
C+ = {2 ◦ y_1^0, 0 ◦ y_1^1, 0 ◦ y_1^2; 2 ◦ y_2^0, 0 ◦ y_2^1, 0 ◦ y_2^2; 2 ◦ y_3^0, 0 ◦ y_3^1, 0 ◦ y_3^2; 2 ◦ y_4^0, 0 ◦ y_4^1, 0 ◦ y_4^2},
C- = {0 ◦ y_1^0, 0 ◦ y_1^1, 2 ◦ y_1^2; 0 ◦ y_2^0, 0 ◦ y_2^1, 2 ◦ y_2^2; 0 ◦ y_3^0, 0 ◦ y_3^1, 2 ◦ y_3^2; 0 ◦ y_4^0, 0 ◦ y_4^1, 2 ◦ y_4^2}.

Calculated values for the objects O1, ..., O10 are shown in Table 6.3. Collective rankings of objects, obtained by the ARAMIS method R_1A^gr, lexicographic ordering R_1L^gr, and weighted sums of estimates R_1Σ^gr, are as follows:

R_1A^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [O3, O7, O9 ≻ O2],
l_1p · 10^-3:  0  111  333  385  727  800.     (6.7)

R_1L^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [O3, O7, O9 ≻ O2],
r_1p:  1-2  3  4-5  6  7-9  10.     (6.8)

R_1Σ^gr ⇔ [(O1, O6 ≻ O5)] ≻ [(O4, O8 ≻ O10)] ≻ [(O3, O7, O9 ≻ O2)],
v_1p:  24  23  20  19  11  10.     (6.9)

The generalized group ranking R_1^gr of objects, which combines the collective rankings R_1A^gr, R_1L^gr, R_1Σ^gr, has the form:

R_1^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [O3, O7, O9 ≻ O2],
b_1p:  25.5  21  16.5  12  6  0.     (6.10)

For the object Op, a proximity indicator l_1p is given below the ranking R_1A^gr (6.7); a lexicographic place r_1p is given below the ranking R_1L^gr (6.8); a weighted sum v_1p of partial value functions v_1p^0, v_1p^1, v_1p^2 is given below the ranking R_1Σ^gr (6.9). The Borda score b_1p, averaged for objects with the same places, is indicated below the generalized ranking R_1^gr (6.10). Closed objects are enclosed in round brackets, distant groups of objects are enclosed in square brackets.
The rankings R_1A^gr, R_1L^gr, R_1Σ^gr, R_1^gr completely coincide. So, according to the first aggregation scheme, the pupils O1, O6, O5 have the high marks, pupils O4, O8, O10 have the middle marks, and pupils O3, O7, O9, O2 have the low marks. These results are largely consistent with the results obtained at the zero scheme for indicator aggregation.
According to the second aggregation scheme (Fig. 5.5b), firstly all attributes K1, ..., K8 are combined into the indicators L1, ..., L4, which at the next stage form the final indicators M1 = (L1, L2) Natural disciplines and M2 = (L3, L4) Humanitarian disciplines. The composite indicators M1, M2 have rating scales U_r = {u_r^0, u_r^1, u_r^2}, r = 1, 2 with verbal grades u_r^{o_r} = (y_b^{e_b}, y_d^{e_d}), o_r, e_b, e_d = 0, 1, 2. Here b = 1, d = 2 for r = 1; b = 3, d = 4 for r = 2.
In the reduced space U_1 × U_2, versions Op^(s), s = 1, 2 of the object Op are given by the vectors/tuples u_p^(s) = (u_p1^(s), u_p2^(s)) (Table 5.3). But it is impossible to specify the object Op with a single vector/tuple u_p = (u_p1, u_p2). In the reduced space U_1 ∪ U_2, represent the object Op by a multiset (5.21)
D_p = {k_Dp(u_1^0) ◦ u_1^0, k_Dp(u_1^1) ◦ u_1^1, k_Dp(u_1^2) ◦ u_1^2; k_Dp(u_2^0) ◦ u_2^0, k_Dp(u_2^1) ◦ u_2^1, k_Dp(u_2^2) ◦ u_2^2}
over the set U_1 ∪ U_2 of estimate grades of the indicators M1, M2. Each multiset is given by a sum: D_p = D_p^(1) + D_p^(2). The multiplicities k_Dp(u_r^{o_r}), r = 1, 2 are rows of the matrix H2 (Table 6.4) and are determined by the rule
k_Dp(u_r^{o_r}) = k_Dp^(1)(u_r^{o_r}) + k_Dp^(2)(u_r^{o_r}).
Here the grade u_r^{o_r} consists of the estimate grades y_b^{e_b}, y_d^{e_d} of the indicators L_b, L_d, which form the composite indicator M_r. The best O+ and worst O- objects are described by the multisets of estimates
D+ = {2 ◦ u_1^0, 0 ◦ u_1^1, 0 ◦ u_1^2; 2 ◦ u_2^0, 0 ◦ u_2^1, 0 ◦ u_2^2},
D- = {0 ◦ u_1^0, 0 ◦ u_1^1, 2 ◦ u_1^2; 0 ◦ u_2^0, 0 ◦ u_2^1, 2 ◦ u_2^2}.
Calculated values for the objects O1, ..., O10 are shown in Table 6.4.


Table 6.4 Calculated values for objects represented by multisets (second aggregation scheme)

H2     u1^0 u1^1 u1^2   u2^0 u2^1 u2^2   l_2p    v_2p^0,v_2p^1,v_2p^2   r_2p    v_2p   b_2p
D1      2    0    0      2    0    0     0.000   4,0,0                  1-2     12     25.5
D2      0    0    2      0    0    2     1.000   0,0,4                  8-10    4      3
D3      0    0    2      0    1    1     0.800   0,1,3                  7       5      9
D4      0    2    0      1    1    0     0.429   1,3,0                  4-6     9      15
D5      1    1    0      2    0    0     0.200   3,1,0                  3       11     21
D6      2    0    0      2    0    0     0.000   4,0,0                  1-2     12     25.5
D7      0    0    2      0    0    2     1.000   0,0,4                  8-10    4      3
D8      0    2    0      1    1    0     0.429   1,3,0                  4-6     9      15
D9      0    0    2      0    0    2     1.000   0,0,4                  8-10    4      3
D10     1    1    0      0    2    0     0.429   1,3,0                  4-6     9      15

Collective rankings of objects, obtained by the ARAMIS method R_2A^gr, lexicographic ordering R_2L^gr, and weighted sums of estimates R_2Σ^gr, are as follows:

R_2A^gr ⇔ [O1, O6 ≻ O5] ≻ [(O4, O8, O10)] ≻ [O3 ≻ O2, O7, O9],
l_2p · 10^-3:  0  200  429  800  1000.     (6.11)

R_2L^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8, O10] ≻ [O3 ≻ O2, O7, O9],
r_2p:  1-2  3  4-6  7  8-10.     (6.12)

R_2Σ^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8, O10] ≻ [O3 ≻ O2, O7, O9],
v_2p:  12  11  9  5  4.     (6.13)

The generalized group ranking R_2^gr of objects, which combines the collective rankings R_2A^gr, R_2L^gr, R_2Σ^gr, has the form:

R_2^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8, O10] ≻ [O3 ≻ O2, O7, O9],
b_2p:  25.5  21  15  9  3.     (6.14)

For the object Op, a proximity indicator l_2p is given below the ranking R_2A^gr (6.11); a lexicographic place r_2p is given below the ranking R_2L^gr (6.12); a weighted sum v_2p of partial value functions v_2p^0, v_2p^1, v_2p^2 is given below the ranking R_2Σ^gr (6.13). The Borda score b_2p, averaged for objects with the same places, is indicated below the generalized ranking R_2^gr (6.14). Closed objects are enclosed in round brackets, distant groups of objects are enclosed in square brackets.
The rankings R_2A^gr, R_2L^gr, R_2Σ^gr, R_2^gr completely coincide. So, according to the second aggregation scheme, the pupils O1, O6, O5 have the high marks, pupils O4, O8, O10 have the middle marks, and pupils O3, O7, O9, O2 have the low marks. These results are practically the same as the results obtained at the first scheme for indicator aggregation and are largely consistent with the results obtained at the zero scheme. However, in the generalized ranking R_2^gr (6.14), the order of the objects included in the rating groups 'middle' and 'low' is somewhat different from the order in the rankings R_0^gr (6.6) and R_1^gr (6.10).
According to the third aggregation scheme (Fig. 5.5c), firstly all attributes K1, ..., K8 are combined into the indicators L1, ..., L4, which then are combined into the indicators M1, M2. At the last stage, these indicators form the final integral indicator N1 = (M1, M2) Academic score that has a scale Z_1 = {z_1^0, z_1^1, z_1^2} with verbal grades z_1^{o_1} = (u_1^{e_1}, u_2^{e_2}), o_1, e_1, e_2 = 0, 1, 2. A version Op^(s), s = 1, 2 of the object Op is characterized by the estimate z_p1^(s) upon the single indicator N1 (Table 5.3). Represent the object Op by a multiset (5.22)
E_p = {k_Ep(z_1^0) ◦ z_1^0, k_Ep(z_1^1) ◦ z_1^1, k_Ep(z_1^2) ◦ z_1^2}
over the set Z_1 of estimate grades of the indicator N1, which is given by a sum: E_p = E_p^(1) + E_p^(2). The multiplicities k_Ep(z_1^{o_1}) are rows of the matrix H3 (Table 6.5) and are determined by the rule
k_Ep(z_1^{o_1}) = k_Ep^(1)(z_1^{o_1}) + k_Ep^(2)(z_1^{o_1}).
Here the grade z_1^{o_1} consists of the estimate grades u_1^{e_1}, u_2^{e_2} of the indicators M1, M2, which form the integral indicator N1. The best O+ and worst O- objects are described by the multisets of estimates
E+ = {2 ◦ z_1^0, 0 ◦ z_1^1, 0 ◦ z_1^2},
E- = {0 ◦ z_1^0, 0 ◦ z_1^1, 2 ◦ z_1^2}.

Table 6.5 Calculated values for objects represented by multisets (third aggregation scheme)

H3     z1^0 z1^1 z1^2   l_3p    r_3p    v_3p   b_3p
E1      2    0    0     0.000   1-2     6      25.5
E2      0    0    2     1.000   7-10    2      4.5
E3      0    0    2     1.000   7-10    2      4.5
E4      0    2    0     0.500   4-6     4      15
E5      1    1    0     0.333   3       5      21
E6      2    0    0     0.000   1-2     6      25.5
E7      0    0    2     1.000   7-10    2      4.5
E8      0    2    0     0.500   4-6     4      15
E9      0    0    2     1.000   7-10    2      4.5
E10     0    2    0     0.500   4-6     4      15

Calculated values for the objects O1, ..., O10 are shown in Table 6.5. Collective rankings of objects, obtained by the ARAMIS method R_3A^gr, lexicographic ordering R_3L^gr, and weighted sums of estimates R_3Σ^gr, are as follows:

R_3A^gr ⇔ [O1, O6] ≻ [O5] ≻ [O4, O8, O10] ≻ [O3, O7, O9, O2],
l_3p · 10^-3:  0  333  500  1000.     (6.15)

R_3L^gr ⇔ [O1, O6] ≻ [O5] ≻ [O4, O8, O10] ≻ [O3, O7, O9, O2],
r_3p:  1-2  3  4-6  7-10.     (6.16)

R_3Σ^gr ⇔ [O1, O6] ≻ [O5] ≻ [O4, O8, O10] ≻ [O3, O7, O9, O2],
v_3p:  6  5  4  2.     (6.17)

The generalized group ranking R_3^gr of objects, which combines the collective rankings R_3A^gr, R_3L^gr, R_3Σ^gr, has the form:

R_3^gr ⇔ [O1, O6] ≻ [O5] ≻ [O4, O8, O10] ≻ [O3, O7, O9, O2],
b_3p:  25.5  21  15  4.5.     (6.18)

For the object Op, a proximity indicator l_3p is given below the ranking R_3A^gr (6.15); a lexicographic place r_3p is given below the ranking R_3L^gr (6.16); a weighted sum v_3p of partial value functions v_3p^0, v_3p^1, v_3p^2 is given below the ranking R_3Σ^gr (6.17). The Borda score b_3p, averaged for objects with the same places, is indicated below the generalized ranking R_3^gr (6.18). Distant groups of objects are enclosed in square brackets. The rankings R_3A^gr, R_3L^gr, R_3Σ^gr, R_3^gr completely coincide. So, according to the third aggregation scheme, the pupils O1, O6, O5 have the high marks, pupils O4, O8, O10 have the middle marks, and pupils O3, O7, O9, O2 have the low marks. These results are practically the same as the results obtained at the first and second schemes for indicator aggregation and are largely consistent with the results obtained at the zero scheme. However, in the generalized ranking R_3^gr (6.18), in contrast to the rankings R_0^gr (6.6), R_1^gr (6.10), R_2^gr (6.14), the objects included in the rating groups 'middle' and 'low' occupy the same places in each group.
According to the fourth aggregation scheme (Fig. 5.5d), firstly all attributes K1, ..., K8 are combined into the indicators L1, ..., L4, which then form the final integral indicator N2 = (L1, L2, L3, L4) Academic score that has a scale Z_2 = {z_2^0, z_2^1, z_2^2} with verbal grades z_2^{o_2} = (y_1^{e_1}, y_2^{e_2}, y_3^{e_3}, y_4^{e_4}), o_2, e_1, e_2, e_3, e_4 = 0, 1, 2. A version Op^(s), s = 1, 2 of the object Op is characterized by an estimate z_p2^(s) upon the single indicator N2 (Table 5.3). Represent the object Op by a multiset (5.23)
F_p = {k_Fp(z_2^0) ◦ z_2^0, k_Fp(z_2^1) ◦ z_2^1, k_Fp(z_2^2) ◦ z_2^2}
over the set Z_2 of estimate grades of the indicator N2, which is given by a sum: F_p = F_p^(1) + F_p^(2). The multiplicities k_Fp(z_2^{o_2}) are rows of the matrix H4 (Table 6.6) and are determined by the rule
k_Fp(z_2^{o_2}) = k_Fp^(1)(z_2^{o_2}) + k_Fp^(2)(z_2^{o_2}).

Table 6.6 Calculated values for objects represented by multisets (fourth aggregation scheme)

H4     z2^0 z2^1 z2^2   l_4p    r_4p    v_4p   b_4p
F1      2    0    0     0.000   1-3     6      24
F2      0    0    2     1.000   7-10    2      4.5
F3      0    0    2     1.000   7-10    2      4.5
F4      0    2    0     0.500   6       4      12
F5      2    0    0     0.000   1-3     6      24
F6      2    0    0     0.000   1-3     6      24
F7      0    0    2     1.000   7-10    2      4.5
F8      1    1    0     0.333   4-5     5      16.5
F9      0    0    2     1.000   7-10    2      4.5
F10     1    1    0     0.333   4-5     5      16.5

Here the grade z_2^{o_2} consists of the estimate grades y_1^{e_1}, y_2^{e_2}, y_3^{e_3}, y_4^{e_4} of the indicators L1, L2, L3, L4, which form the integral indicator N2. The best O+ and worst O- objects are described by the multisets of estimates
F+ = {2 ◦ z_2^0, 0 ◦ z_2^1, 0 ◦ z_2^2},
F- = {0 ◦ z_2^0, 0 ◦ z_2^1, 2 ◦ z_2^2}.
Calculated values for the objects O1, ..., O10 are shown in Table 6.6. Collective rankings of objects, obtained by the ARAMIS method R_4A^gr, lexicographic ordering R_4L^gr, and weighted sums of estimates R_4Σ^gr, are as follows:

R_4A^gr ⇔ [O1, O5, O6] ≻ [O8, O10 ≻ O4] ≻ [O3, O7, O9, O2],
l_4p · 10^-3:  0  333  500  1000.     (6.19)

R_4L^gr ⇔ [O1, O5, O6] ≻ [O8, O10 ≻ O4] ≻ [O3, O7, O9, O2],
r_4p:  1-3  4-5  6  7-10.     (6.20)

R_4Σ^gr ⇔ [O1, O5, O6] ≻ [O8, O10 ≻ O4] ≻ [O3, O7, O9, O2],
v_4p:  6  5  4  2.     (6.21)

The generalized group ranking R_4^gr of objects, which combines the collective rankings R_4A^gr, R_4L^gr, R_4Σ^gr, has the form:

R_4^gr ⇔ [O1, O5, O6] ≻ [O8, O10 ≻ O4] ≻ [O3, O7, O9, O2],
b_4p:  24  16.5  12  4.5.     (6.22)

For the object Op, a proximity indicator l_4p is given below the ranking R_4A^gr (6.19); a lexicographic place r_4p is given below the ranking R_4L^gr (6.20); a weighted sum v_4p of partial value functions v_4p^0, v_4p^1, v_4p^2 is given below the ranking R_4Σ^gr (6.21). The Borda score b_4p, averaged for objects with the same places, is indicated below the generalized ranking R_4^gr (6.22). Distant groups of objects are enclosed in square brackets.
The rankings R_4A^gr, R_4L^gr, R_4Σ^gr, R_4^gr completely coincide. So, according to the fourth aggregation scheme, the pupils O1, O6, O5 have the high marks, pupils O8, O10, O4 have the middle marks, and pupils O3, O7, O9, O2 have the low marks. These results are practically the same as the results obtained at the first, second and third schemes for indicator aggregation and are largely consistent with the results obtained at the zero scheme. However, in the generalized ranking R_4^gr (6.22), in contrast to the rankings R_0^gr (6.6), R_1^gr (6.10), R_2^gr (6.14), R_3^gr (6.18), the objects included in the rating groups 'high' and 'low' occupy the same places in each group.
Let us further combine the generalized group rankings R_0^gr, R_1^gr, R_2^gr, R_3^gr, R_4^gr, obtained with the different indicator aggregation schemes, into the single aggregated ranking of objects R_Σ^agg using the Goodman-Markowitz procedure. For convenience, let us write out the generalized rankings R_0^gr, R_1^gr, R_2^gr, R_3^gr, R_4^gr representing the different schemes for aggregating indicators:

R_0^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [O7 ≻ O9 ≻ O3 ≻ O2],
n_0p:  1.5  3  4.5  6  7  8  9  10.     (6.23)

R_1^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8 ≻ O10] ≻ [O3, O7, O9 ≻ O2],
n_1p:  1.5  3  4.5  6  8  10.     (6.24)

R_2^gr ⇔ [O1, O6 ≻ O5] ≻ [O4, O8, O10] ≻ [O3 ≻ O2, O7, O9],
n_2p:  1.5  3  5  7  9.     (6.25)

R_3^gr ⇔ [O1, O6] ≻ [O5] ≻ [O4, O8, O10] ≻ [O3, O7, O9, O2],
n_3p:  1.5  3  5  8.5.     (6.26)

R_4^gr ⇔ [O1, O5, O6] ≻ [O8, O10 ≻ O4] ≻ [O3, O7, O9, O2],
n_4p:  2  4.5  6  8.5.     (6.27)

The aggregated ranking R_Σ^agg of objects, which combines the generalized rankings R_0^gr, R_1^gr, R_2^gr, R_3^gr, R_4^gr, has the form:

R_Σ^agg ⇔ [O1, O6 ≻ O5] ≻ [O8 ≻ O4 ≻ O10] ≻ [(O3, O7 ≻ O9) ≻ O2],
n_Σp:  8  14  23.5  25  26.5  41  42  46.     (6.28)
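The aggregation behind (6.28) can be reproduced in a few lines of code. The sketch below (Python; not from the book) sums the tie-averaged places n_gp listed under (6.23)-(6.27) and orders the objects by the resulting sums n_Σp, which is how the Goodman-Markowitz rule is applied here.

```python
# Tie-averaged places of pupils O_1..O_10 in the five generalized rankings
# (6.23)-(6.27); dictionary keys are the object numbers 1..10.
places = {
    #                      O1   O2   O3   O4  O5  O6   O7   O8   O9   O10
    0: dict(zip(range(1, 11), (1.5, 10,  9,  4.5, 3, 1.5,  7,  4.5,  8,  6))),
    1: dict(zip(range(1, 11), (1.5, 10,  8,  4.5, 3, 1.5,  8,  4.5,  8,  6))),
    2: dict(zip(range(1, 11), (1.5,  9,  7,  5,   3, 1.5,  9,  5,    9,  5))),
    3: dict(zip(range(1, 11), (1.5, 8.5, 8.5, 5,  3, 1.5, 8.5, 5,  8.5,  5))),
    4: dict(zip(range(1, 11), (2,   8.5, 8.5, 6,  2, 2,   8.5, 4.5, 8.5, 4.5))),
}

# Goodman-Markowitz aggregation: sum the places over all rankings and
# order the objects by increasing sums (a smaller sum is better).
sums = {p: sum(places[g][p] for g in places) for p in range(1, 11)}
aggregated = sorted(sums, key=sums.get)
print(sums)        # O_1, O_6 -> 8.0, O_5 -> 14.0, ..., O_2 -> 46.0
print(aggregated)  # reproduces the aggregated ranking (6.28)
```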

6.4 Demonstrative Example: Technology PAKS-M

163

agg

aggregated ranking R . (6.28). Closed objects are enclosed in round brackets, and distant groups of objects are enclosed in square brackets. Let us now discuss the results obtained. With the sequential aggregation of indicators, a dimensionality of the transformed attribute spaces decreases from 40 to 24, 12, 6, 3. The total number of pupil marks in all subjects, expressed by the cardinality of multisets Ap (5.7), Bp (5.19), C p (5.20), Dp (5.21), Ep (5.22), Fp (5.23), also decreases from 16 to 8, 4, 2. Collective preferences of many various groups of experts (methods and ways for aggregating indicators and their scales), which are represented by different orderings of objects, almost completely coincide, with the exception of insignificant differences in the locations of objects included in some rankings. In all collective rankings, there are coinciding groups of objects. These are the objects O1 , O6 , O5 with the high score, objects O4 , O8 , O10 with the middle score, objects O3 , O7 , O9 , O2 with the low score. According to the aggregated estimates of all experts upon all indicators, the objects O1 , O6 , taking the first places in all rankings, are the best. The object O2 , taking the last place in all rankings, is the worst. There are clearly expressed gaps between the groups of objects with high, middle low scores. Therefore, the collective orderings R 0 gr , R 1 gr , R 2 gr , R 3 gr , R 4 gr of objects can also be considered as collective ordinal classifications of objects, where the classes of objects and positions of objects within the classes correspond to their places in the rankings. For comparison, let us present two collective rankings of objects obtained with the RAMPA method of paired comparisons R P gr (3.13) and with the ARAMIS method R A+ gr (3.18), where objects Op are given by multisets Ap (5.8) of estimates by the initial attributes K 1 ,…,K 8 :

R gr p ⇔ (O1 . O6 ) . (O5 . O4 ) . (O8 . O10 ) . (O9 . O3 , O7 ) . O2 , 242 234 210 202 194 183 87 86 74. bP p (6.29) R Agr+ ⇔ O1 . O6 . (O4 , O8 , O10 . O5 ) . O9 . (O7 . O2 ) . O3 . lA p · 10−3 304 360 407 429 515 552 571 652.

(6.30)

For the object Op , a value of row sum bPp of elements of the resulting matrix B of gr pairwise comparisons is indicated below the ranking R P (6.29); a value of proximity gr indicator lAp is indicated below the ranking R A+ (6.30). Closed objects are enclosed gr gr in round brackets. The rankings R P and R A+ , obtained using the RAMPA, ARAMIS methods, and the ranking R.agg (6.28), obtained using the PAKS-M technology, coincide in general, although there are minor differences between them. So, the PAKS-M technology is a tool for solving multicriteria choice problems in high-dimensional attribute spaces. For this, we build several hierarchical schemes of indicators, apply different methods for constructing scales of composite indicators

164

6 Multicriteria Choice in Attribute Space of High Dimensionality

and an integral indicator, use several different methods of decision making. In fact, we carry out a multiple collective multicriteria choice. Efficiency of the PAKS-M technology for solving multicriteria choice tasks of high dimensionality is evaluated similarly to efficiency of the PAKS technology. An important feature of the PAKS-M technology is ability to combine initial numerical and/or verbal attributes into different final indicators with various levels of aggregation up to a single integral indicator, to solve the given choice task in variety of ways, to give a clear explanation and justification for the most preferable option. During solving the choice task, a decision maker/expert may face inconsistencies and contradictions of the results obtained. Such situations are caused by various reasons, in particular, unsuccessful combination of indicators or unfortunate formation of scale grades of composite and/or integral indicators. Specification of semantic links between the initial and composite indicators plays an important role in the construction of aggregation trees. New technologies for solving multicriteria choice tasks in high-dimensional spaces have some important features that increase validity of the results. Several hierarchical schemes with different options for aggregating indicators are formed. In procedures for reducing dimensionality of attribute space, various methods and/or their combinations are used. The initial choice task can be solved by many decision making methods. Reasonable explanations of the results obtained, which are understandable for a decision maker/expert, can be given. This allows a person to find the most suitable decision, or jointly apply several different ways for the problem solution.

References 1. Petrovsky, A.B.: Gruppovoy verbal’niy analiz resheniy (Group Verbal Decision Analysis). Nauka, Moscow (2019).(in Russian) 2. Petrovsky, A.B., Roizenzon, G.V.: Mnogokriterial’niy vybor s umen’sheniem razmernosti prostranstva priznakov: mnogoetapnaya tekhnologiya PAKS (Multiple criteria choice with reducing dimension of attribute space: multi-stage technology PAKS). Iskusstvenniy intellekt i prinyatie resheniy (Artificial Intelligence and Decision Making) 4, 88–103 (2012). (in Russian) 3. Petrovsky, A.B., Royzenson, G.V.: Multi-stage technique ‘PAKS’ for multiple criteria decision aiding. Int. J. Inf. Technol. Decis. Mak. 12(5), 1055–1071 (2013) 4. Petrovsky, A.B.: Multi-method technology for multi-attribute expert evaluation. In: Proceedings of the First International Conference Intelligent Information Technologies for Industry. Advances in Intelligent Systems and Computing, vol. 451, 2, pp. 199–208. Springer International Publishing, Switzerland (2016) 5. Petrovsky, A.B., Lobanov, V.N.: Mnogokriterial’niy vybor slozhnoy tekhnicheskoy sistemy po agregirovannym pokazatelyam (Multicriteria choice of a complex technical system based on aggregated indicators). Vestnik Rostovskogo gosudarstvennogo universiteta putey soobshcheniya (Bulletin of the Rostov State Transport University) 3, 79–85 (2013) (in Russian) 6. Petrovsky, A.B., Lobanov, V.N.: Selection of complex system in the reduced multiple criteria space. World Appl. Sci. J. 29(10), 1315–1319 (2014) 7. Petrovsky, A.B., Lobanov, V.N.: Multi-criteria choice in the attribute space of large dimension: multi-method technology PAKS-M. Sci. Tech. Inf. Process. 42(5), 76–86 (2015)

References

165

8. Petrovsky, A.B.: Hierarchical aggregation of object attributes in multiple criteria decision making. In: Artificial Intelligence: Proceedings of the 16th Russian conference. Communications in Computer and Information Science, vol. 934, pp. 125–137. Springer Nature, Switzerland AG (2018).

Chapter 7

Practical Applications of Choice Methods

Developed original methods of verbal decision analysis, allowing to operate with heterogeneous characteristics of objects, were successfully applied in many practical problems of multicriteria choice in various areas. This chapter includes examples of such tasks. These are analysis of science policy options, evaluation of topicality and priority of scientific directions and problems, formation of a scientific and technical program, competition of projects in a scientific foundation. The information required for solving such problems is contained in various documents, data and knowledge bases, can be obtained from experts and mainly in qualitative form.

7.1 Analysis of Science Policy Options Multi-aspect analysis of the current state, trends and prospects for development of scientific research is a necessary component of formation and implementation of science policy. In 1987, the expert assessment of about 300 scientific problems in the area of physics and astronomy was performed. Almost 180 leading Russian scientists, members of scientific councils of the Branch for general physics and astronomy, the USSR Academy of Sciences, who were responsible for relevant scientific problems, took part in the interview [1, 2]. For each problem, two or three experts evaluated the fundamental and applied aspects of research, their resource provision; indicated the scientific directions that are developing most fruitfully in the USSR. For each direction, the most significant achievements in the area of physics and astronomy were predicted, which can be expected in our country and abroad until 2010. The list of criteria was approved by the Branch executives. The scientific problem was evaluated upon the following criteria: B1 Fundamental importance of research, B2 Prospectivity of research, B3 Comparative level of theoretical studies, B4 Trends in the theoretical area, B5 Comparative level of experimental studies, B6 Trends in the experimental area, B7 Dependence on © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 A. B. Petrovsky, Group Verbal Decision Analysis, Studies in Systems, Decision and Control 451, https://doi.org/10.1007/978-3-031-16941-0_7

167

168

7 Practical Applications of Choice Methods

theoretical results from other areas, B8 Dependence on instrumentation from other areas; C 1 Applied importance of research, C 2 Potential level of research results’ use, C 3 Possible terms of research results’ implementation; D1 Available scientific groundwork, D2 Research provision by specialiststheorists, D3 Research provision by specialists-experimenters, D4 Efficiency in the use of material resources, D5 Possibilities for the material and technical support of research, D6 Possibilities for the information support of research. Options of answers-estimates were formulated for all criteria. For example, a rating scale for the criterion B2 Prospectivity of research (possibility of discoveries that change the understanding of this research area, possibility of qualitative leaps in understanding the studied objects) looked like this: b21 —within this problem, with a high degree of reliability (probability), sharp positive qualitative changes can occur, new approaches, principles and research methods can be developed, fundamentally new theories can be created, fundamentally new information can be obtained; b22 —within this problem, there are grounds (a sufficient number of “mature”, correctly formulated theoretical problems, extensive experimental material), allowing to foresee significant positive changes that can lead to creation of more general theories, improvement of principles, approaches, and methods used; b23 —within this problem, sustainable growth, accumulation and generalization of the results obtained are assumed; b24 —within this problem, no qualitative changes are expected, traditional approaches and methods will prevail. Trends in the experimental area were evaluated upon the criterion B6 as follows: b61 —the existing position will be stable; b62 —the existing advantage of the USSR will be reduced; b63 —the existing lag from the foreign level will decrease; b64 —there are tendencies for the USSR leading position; b65 —the existing lag from the foreign level will increase. The scale of the criterion D1 Available scientific groundwork was as follows: d11 —within this problem, there are a large scientific groundwork and promising theories, works have been going for a long time, experimental material has been accumulated, and so on; d12 —within this problem, there is a certain scientific groundwork, currently the main scientific theories, hypotheses, methodological approaches are being developed; d13 —within this problem, there is an insignificant scientific groundwork, currently exploratory studies are being conducted. When evaluating the problems, experts selected one of the estimates for each of the above criteria. An individual expert assessment of the scientific problem Oi given by the expert s can be represented as a multiset into the form (3.2). . ( 1) 1 ( 4) 4 Ai = k b1 ◦ b1 , . . . , k b1 ◦ b1 ; . . . ; Ai Ai

7.1 Analysis of Science Policy Options

( 1) 1 ( 3) 3 b8 ◦ b8 , . . . , k b8 ◦ b8 ; . . . k Ai Ai ( ) 1 ( ) 3 1 3 k Ai c1 ◦ c1 , . . . , k Ai c1 ◦ c1 ; . . . ; ( 1) ( 4) k c3 ◦ C31 , . . . , k c3 ◦ C34 ; . . . Ai Ai ( ) ( ) k d11 ◦ d11 , . . . , k d13 ◦ d13 ; . . . ; Ai Ai . ( ) ( ) 1 1 3 3 k Ai d6 ◦ d6 , . . . , k Ai d6 ◦ d6

169

(7.1)

over the set A = B1 ∪ … ∪ B8 ∪ C1 ∪ … ∪ C3 ∪ D1 ∪ … ∪ D6 of estimate grades on the criteria scales. The multiplicity k_{A_i^s}(x_l^{e_l}) = 1 if the expert s gave the estimate x_l^{e_l} = b_l^{e_l}, c_l^{e_l}, d_l^{e_l}, e_l = 1, …, h_l to the problem Oi, respectively, upon the criterion B1–B8, C1–C3, D1–D6, and k_{A_i^s}(x_l^{e_l}) = 0 otherwise.

Collective assessment of the scientific problem Oi given by all experts is described by a sum of the multisets (7.1):

A_i = Σ_s A_i^s = {k_{A_i}(b_1^1)◦b_1^1, …, k_{A_i}(b_1^4)◦b_1^4; …; k_{A_i}(b_8^1)◦b_8^1, …, k_{A_i}(b_8^3)◦b_8^3;
                   k_{A_i}(c_1^1)◦c_1^1, …, k_{A_i}(c_1^3)◦c_1^3; …; k_{A_i}(c_3^1)◦c_3^1, …, k_{A_i}(c_3^4)◦c_3^4;
                   k_{A_i}(d_1^1)◦d_1^1, …, k_{A_i}(d_1^3)◦d_1^3; …; k_{A_i}(d_6^1)◦d_6^1, …, k_{A_i}(d_6^3)◦d_6^3}      (7.2)

which represent the individual expert assessments. The multiplicity function is calculated in accordance with the rule k_{A_i}(x_l^{e_l}) = Σ_s k_{A_i^s}(x_l^{e_l}), assuming that all experts are equally competent. The multiplicity k_{A_i}(x_l^{e_l}) shows the number of experts who gave the estimate x_l^{e_l} to the problem Oi. Collective opinion of experts, aggregated over all problems in the area of physics and astronomy, is expressed by a multiset

A = Σ_i A_i = {k_A(b_1^1)◦b_1^1, …, k_A(b_1^4)◦b_1^4; …; k_A(b_8^1)◦b_8^1, …, k_A(b_8^3)◦b_8^3;
               k_A(c_1^1)◦c_1^1, …, k_A(c_1^3)◦c_1^3; …; k_A(c_3^1)◦c_3^1, …, k_A(c_3^4)◦c_3^4;
               k_A(d_1^1)◦d_1^1, …, k_A(d_1^3)◦d_1^3; …; k_A(d_6^1)◦d_6^1, …, k_A(d_6^3)◦d_6^3}      (7.3)
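Since the multisets in (7.1)–(7.3) are combined by element-wise addition of their multiplicity functions, the aggregation can be prototyped with an ordinary counting dictionary. The following Python sketch is purely illustrative: the grade labels and expert answers are hypothetical, not data from the survey.

from collections import Counter

# Each individual assessment A_i^s is a multiset over the grade set A:
# every grade chosen by the expert enters with multiplicity 1.
def individual_assessment(chosen_grades):
    """chosen_grades: iterable of grade labels such as 'b2_1', 'd1_2'."""
    return Counter(chosen_grades)

# Collective assessment A_i of one problem: sum of the experts' multisets (7.2).
def collective_assessment(expert_multisets):
    total = Counter()
    for m in expert_multisets:
        total += m          # k_Ai(x) = sum over experts of k_{A_i^s}(x)
    return total

# Aggregated opinion A over all problems: sum of the collective multisets (7.3).
def aggregated_opinion(problem_multisets):
    total = Counter()
    for m in problem_multisets:
        total += m
    return total

if __name__ == "__main__":
    # Two hypothetical experts assessing one problem on criteria B2 and D1.
    s1 = individual_assessment(["b2_1", "d1_1"])
    s2 = individual_assessment(["b2_2", "d1_1"])
    A_i = collective_assessment([s1, s2])
    print(A_i)   # Counter({'d1_1': 2, 'b2_1': 1, 'b2_2': 1})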


Table 7.1 Total expert assessment of problems in the area of physics and astronomy

Grades:  b1^1 b1^2 b1^3 b1^4 | b2^1 b2^2 b2^3 b2^4 | b3^1 b3^2 b3^3 b3^4 | b4^1 b4^2 b4^3 b4^4 b4^5 | b5^1 b5^2 b5^3 b5^4
k_A:       28  110   30    6 |   50   84   42    2 |   34  120   22    2 |  120   28   14    8    8 |    6   76   70   26

Grades:  b6^1 b6^2 b6^3 b6^4 b6^5 | b7^1 b7^2 b7^3 | b8^1 b8^2 b8^3 | c1^1 c1^2 c1^3 | c2^1 c2^2 c2^3 c2^4 | c3^1 c3^2 c3^3 c3^4
k_A:       84   22   26    2   44 |   30  138   10 |  104   72    2 |  158   14    6 |   66   84   16   12 |  122   40    8    8

Grades:  d1^1 d1^2 d1^3 | d2^1 d2^2 d2^3 d2^4 | d3^1 d3^2 d3^3 d3^4 | d4^1 d4^2 d4^3 d4^4 | d5^1 d5^2 d5^3 d5^4 | d6^1 d6^2 d6^3
k_A:      122   54    2 |   56   40   66   14 |   14   44   84   36 |   12   76   60   30 |    4   62   96   16 |   90   70   18

The total expert assessment of problems is shown in Table 7.1 and corresponds to the multiplicities of the multiset A (7.3), equal to the sum of the multisets Ai (7.2) with the multiplicity function k_A(x_l^{e_l}) = Σ_i k_{A_i}(x_l^{e_l}).

The interview results had the following features. For almost all criteria, with the exception of the criteria B6 and D2, distributions of expert assessments were characterized by obviously expressed unimodality. Upon the criteria B4, B6, B8, C1, C2, C3, D1, D6, the maxima of distributions are shifted towards high scores, and upon the criteria B1, B2, B3, B5, B7, D2, D4, D5, the maxima were in middle scores.

In accordance with the opinions of most experts, on all scientific problems in general, the performed studies are undoubtedly important and necessary for solving the largest problems in physics, astronomy and/or other areas of science (score b1^2). Within a sufficient number of problems, there are grounds allowing to foresee significant positive changes that can lead to the creation of more general theories and improvement of the principles, approaches and methods used (score b2^2), and there is also a large or certain scientific groundwork (score d1^1 or d1^2). The overwhelming majority of experts believed that the USSR occupies a leading position in theoretical studies on many problems, that is, the most important theoretical results in recent years were obtained for the first time in our country (score b3^1), and that the existing position in the field of theory will be stable (score b4^1). Experts were less unanimous in their evaluation of experimental studies, where on many problems the USSR is either at the same level with foreign countries or lags behind the foreign level (scores b5^2 and b5^3). At the same time, slightly more than half of the experts believed that the existing situation will be stable (score b6^1), while the opinions of other experts were rather contradictory. By expert opinions, the results of theoretical studies obtained in other areas do not greatly affect the successful progress of many problems in physics and astronomy (score b7^2); however, new methods, devices, materials, etc., developed in other areas, are especially important and necessary (score b8^1).

The overwhelming majority of experts, characterizing the research results, indicated that the results on many problems of physics and astronomy make an important contribution to the solution of applied problems (score c1^1) and can be directly and fully, or to a large extent, used to create new or improve existing technologies (score c2^1 or c2^2).


The use of the results in the national economy is possible both in the near future, until 1990, and in the long term, until 2000 (scores c3^1 and c3^2).

When evaluating the resource provision of research, experts noted that for some problems there is a sufficient number of highly qualified specialists-theorists (score d2^1), while for other problems the number of theorists is insufficient for effective studies, although their qualification is quite high (score d2^3). Availability of specialists-experimenters for most problems is characterized as insufficient, although their qualification provides a high level of studies (score d3^3). Experts believed that the efficiency of using the available material resources is limited by their strict assignment to the user (score d4^2) and by a rather low level of maintenance (score d4^3). Existing possibilities for the material and technical support of research are clearly insufficient, since the necessary equipment, instruments, materials and the like are either available only in the form of samples (score d5^2) or under development (score d5^3). Experts also disagreed in evaluating possibilities for the information support of research. According to some opinions, the existing system of information support is effective: special journals are published, conferences are held, experience is exchanged, and so on (score d6^1). In other opinions, the available information support is not effective enough: the flow of information from abroad is limited, there are very few specialized journals, and the like (score d6^2).

A governing body responsible for the formation and implementation of science policy, along with the estimate and forecast information received from experts, must have factual data on the resource provision of research and, first of all, information on the amount of funding and the human potential of the Branch institutes. Formation of science policy, performed by a governing body or decision maker, is a search for an acceptable collection of scientific problems in the multicriteria space of their estimates, which have detailed verbal formulations. A science policy option can be formulated, for example, as follows: "In the coming years, the main attention should be paid to scientific problems which make the principal contribution to the further progress of physics, astronomy and/or other areas of science; within these problems, with a high degree of reliability (probability), sharp positive qualitative changes can occur; theoretical and experimental studies on these problems are on the same level in the USSR and abroad".

This option of science policy is specified by the combination of the estimates b1^1, b2^1, b3^2, b5^2 upon the criteria B1, B2, B3, B5. Each option of science policy is associated with a certain subset of problems corresponding to the given set of estimates. From the methodological point of view, such estimates are the most adequate tool for expressing science policy. Comparison of different science policy options is an iterative procedure, at each step of which the original option of policy is corrected and the consequences of this adjustment are analyzed. A decision maker can change the estimates upon criteria, for example, strengthen or weaken the previously specified estimates, exclude or add estimates upon other criteria, or select the most critical criteria.
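A science policy option of this kind can be treated programmatically as a filter over the collective assessments. The sketch below assumes a simple matching rule—every required grade must have been chosen by more than half of the experts—which is only one possible way to relate an option to "its" subset of problems; the data are hypothetical.

from collections import Counter

def matches_option(problem, option, n_experts):
    """problem: Counter of grade multiplicities (collective assessment A_i).
    option: dict criterion -> required grade, e.g. {'B1': 'b1_1', 'B3': 'b3_2'}.
    A problem is taken to match when every required grade was chosen by more
    than half of the experts (an assumed rule; the book leaves the exact
    correspondence to the decision maker)."""
    return all(problem[grade] * 2 > n_experts for grade in option.values())

def select_problems(problems, option, n_experts):
    """problems: dict problem_id -> Counter. Returns ids matching the option."""
    return [pid for pid, m in problems.items() if matches_option(m, option, n_experts)]

if __name__ == "__main__":
    # Hypothetical collective assessments of two problems by three experts.
    problems = {
        "O1": Counter({"b1_1": 3, "b2_1": 2, "b2_2": 1, "b3_2": 2, "b5_2": 3}),
        "O2": Counter({"b1_2": 3, "b2_1": 1, "b2_3": 2, "b3_1": 3, "b5_3": 3}),
    }
    option = {"B1": "b1_1", "B2": "b2_1", "B3": "b3_2", "B5": "b5_2"}
    print(select_problems(problems, option, n_experts=3))   # ['O1']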
Studying how changes in criteria requirements affect the final result, a decision maker can identify and analyze support factors and constraints, use additional information about the


expected achievements on problems, possible time of their obtaining in our country and abroad, evaluate the available financial, personnel and resource provision of research in institutes, determine the need for additional resources. The described methodological approach to analysis of science policy options was used for formation of the Program of basic research “Solid state physics” in the USSR Academy of Sciences [2], for development of forecasts on basic research progress and expected results until 2010 [3].

7.2 Evaluation of Topicality and Priority of Scientific Directions and Problems

The increasing role of science in the life of society, the significant resources allocated for research, and natural resource constraints require concentrating forces and means on the most important scientific directions and problems based on the needs of society. In these conditions, definition of priorities for scientific progress, which is a key point in the development of forecasts, plans, and research and technical programs, is very important. Selection of the most topical problems and areas of research can be viewed as a task of ordering them in accordance with criteria reflecting the science policy.

In 1980–1982, in order to evaluate the topicality and priority of scientific directions and problems in the areas of medicine, interviews of about 1000 members of scientific councils and problem commissions of the USSR Academy of Medical Sciences (AMS) were conducted. Directions and problems were evaluated upon many criteria, the list and content of which were agreed with the Presidium and Branches of the USSR AMS [4–7]. Criteria had ordinal scales with detailed verbal formulations of estimate grades. The expertise results were processed with the RAMPA method for collective ordering of objects by multicriteria pairwise comparisons and with the sorting method.

Initially, the most topical directions out of 40 directions of medical science, which are supervised by scientific councils of the USSR AMS, were identified with the help of experts in the relevant areas. When determining a direction's topicality, the goals of the country's socio-economic progress and the goals of science progress were taken into account. Two experts evaluated a scientific direction upon four particular criteria: F1 Fundamental importance of direction (FID)—contribution to solving the most important medico-biological problems; F2 Applied importance of direction (AID)—contribution to solving the most important problems of health care; F3 Prospectivity of direction (PRD)—possibility to achieve fundamentally new scientific results in the nearest future; F4 Possibility of rapid implementation of scientific results (RIR) in health care. For example, the ordinal scale of the criterion F1 Fundamental importance of direction had the following verbal grades of estimates:


f1^1 — studies within this direction make a main contribution to solving basic problems of medical science—the cognition of physiological, biochemical, immunological and genetic mechanisms of vital functions of a human organism in norm and pathology;
f1^2 — studies within this direction make a significant contribution to solving basic problems of medical science;
f1^3 — studies within this direction indirectly and limitedly affect the solution of basic problems of medical science.

Experts evaluated scientific directions, selecting one of the grades for each particular criterion. Individual and collective expert estimates of the scientific direction Oi, given by the expert s and by all experts upon the criteria F1–F4, are represented by multisets of the type A_i^s (7.1) and A_i (7.2) over the set F = {f1^1, f1^2, f1^3; f2^1, f2^2, f2^3; f3^1, f3^2, f3^3, f3^4; f4^1, f4^2, f4^3, f4^4} that combines the grades of the criteria scales. A research direction was considered topical if it was assessed "above average" by a qualified majority of experts, that is, the share of experts who evaluated the direction with the first fl^1 or second fl^2 estimate upon each particular criterion Fl, l = 1,…,4, exceeded the threshold value of 2/3. Experts also directly ranked the selected most topical directions upon the criterion F0 Priority of direction (PD) by setting each direction in the appropriate place. The 16 topical directions, which were included in the priority directions of medical science in the USSR, are presented in Table 7.2.

Table 7.2 Priority directions of medical science in the USSR

Direction                                                                                    r_PD   r_FD
Malignant formations                                                                            1    1.5
Cardiovascular diseases                                                                         2    1.5
Physiological, biochemical and immunological bases of vital functions of a human organism      3    3.5
Viral diseases                                                                                  4    3.5
Molecular biology and genetics                                                                  5    5
Maternal and child health care                                                                  6    6
Endocrine diseases                                                                              7    8
Nervous and mental diseases                                                                     8    9
Traumatology and orthopedics, ambulance                                                         9   13
Hygienic aspects of environmental protection                                                   10    7
Physiologically active substances                                                              11   10
Balanced nutrition                                                                             12   12
Reconstructive surgery                                                                         13   14
Transplantology, creation of artificial organs                                                 14   15.5
Occupational diseases, improvement of working conditions                                       15   11
Gerontology and geriatrics                                                                     16   15.5
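The qualified-majority test of topicality described above can be sketched as follows; the grade labels and vote counts are hypothetical, and the 2/3 threshold is applied to the share of the first two grades on each criterion, as in the text.

from collections import Counter

def is_topical(direction, criteria_grades, n_experts, threshold=2/3):
    """direction: Counter of collective grade multiplicities over F1-F4.
    criteria_grades: dict criterion -> (first_grade, second_grade).
    The direction is topical when, for every criterion, the share of experts
    choosing the first or second grade exceeds the threshold."""
    for first, second in criteria_grades.values():
        if (direction[first] + direction[second]) / n_experts <= threshold:
            return False
    return True

if __name__ == "__main__":
    grades = {
        "F1": ("f1_1", "f1_2"), "F2": ("f2_1", "f2_2"),
        "F3": ("f3_1", "f3_2"), "F4": ("f4_1", "f4_2"),
    }
    # Hypothetical collective assessment of one direction by two experts.
    d = Counter({"f1_1": 2, "f2_1": 1, "f2_2": 1, "f3_2": 2, "f4_1": 1, "f4_3": 1})
    print(is_topical(d, grades, n_experts=2))   # False: F4 reaches only 1/2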


Considering all experts to be equally competent and the importance of criteria to be equal for all experts, and using the RAMPA method, group rankings R_Fl^gr of the 16 priority directions for all particular criteria F1–F4 and the general collective ranking R_FD^gr, which combines the particular rankings, were built. Individual expert rankings R_PD of directions by priority were aggregated into the collective ranking R_PD^agg with the Borda procedure. The final results of expert evaluation of the scientific direction priority are shown in Table 7.2, where the ranks r_PD and r_FD of directions in the rankings R_PD^agg and R_FD^gr are indicated.

Analysis of the expert interview results revealed some features of expert preferences. Thus, rankings of scientific directions built upon different criteria were heterogeneous. Rankings upon the criteria PD, FID, AID have obvious "head" and "tail" directions. In the ranking upon the criterion RIR, these groups are quite close to each other. In the ranking upon the criterion PRD, it was not possible to clearly distinguish the "head" and "tail" groups. Collective rankings of scientific directions obtained by different methods are practically the same (the Spearman coefficient of rank correlation ρ = 0.93). The first six directions, which form the "heads" of the rankings R_PD^agg and R_FD^gr, can be considered as the highest priority in medical science. Differences in the order of the other directions are minor.

There is a relatively small, but statistically significant agreement of expert estimates for all criteria, except for the criterion PRD (the Kendall concordance coefficient ω = 0.145–0.151 < 1). The most interesting fact seems to be that the consistency of preferences upon the criterion PD was much better (by the ω value) than upon the particular, more "specific" criteria. When several experts, whose individual rankings deviated noticeably from the general opinion, were excluded, the final rankings of directions upon most of the particular criteria did not change significantly. At the same time, upon the criterion PRD, statistically significant agreement (at the significance level α = 0.05) was not observed even after excluding a third of the experts.

The study of correlations between rankings upon various criteria made it possible to single out two particular criteria, FID and AID, which most significantly characterize expert preferences. The corresponding rankings practically do not correlate (coefficient ρ = 0.03). Also, the criteria FID and PRD are relatively weakly related (ρ = 0.19). The criterion AID strongly correlates with the criteria RIR (ρ = 0.85) and PRD (ρ = 0.73). Analysis of the links between the criterion PD and the particular criteria is of special interest. Only the correlation between PD and FID was significant (ρ = 0.87); other rank correlation coefficients were insignificant even at the significance level α = 0.05. This result is unexpected if we consider medical science as a predominantly applied science.

Each direction of medical science covers studies performed on individual scientific problems, the total number of which exceeds 200. In expert interviews to evaluate the priority of scientific problems related to a certain scientific direction, the above approaches were modified.
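The Borda aggregation of individual priority rankings mentioned above can be illustrated by a short sketch; averaging the ranks given by the experts and sorting by the average is one standard form of the procedure, and the rankings used here are hypothetical.

def borda_aggregate(rankings):
    """rankings: list of dicts item -> rank (1 = best) given by individual experts.
    Returns items ordered by average rank, as used here to merge the individual
    priority rankings into the collective ranking."""
    items = rankings[0].keys()
    avg_rank = {i: sum(r[i] for r in rankings) / len(rankings) for i in items}
    return sorted(items, key=lambda i: avg_rank[i])

if __name__ == "__main__":
    # Three hypothetical expert rankings of four directions.
    r1 = {"A": 1, "B": 2, "C": 3, "D": 4}
    r2 = {"A": 2, "B": 1, "C": 4, "D": 3}
    r3 = {"A": 1, "B": 3, "C": 2, "D": 4}
    print(borda_aggregate([r1, r2, r3]))   # ['A', 'B', 'C', 'D']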


A scientific problem was evaluated upon one general and seven particular criteria. For each criterion, scales with detailed verbal formulations of estimate gradations were proposed. The general criterion G0 Priority of problem had the following scale:
g0^1 — a problem is the highest priority;
g0^2 — a problem is among the highest priority;
g0^3 — a problem has a significant priority, although it is not among the highest priorities;
g0^4 — a problem priority is insignificant;
g0^5 — a problem is the lowest priority.

The particular criteria for evaluating a scientific problem were: G1 Problem contribution to solving problems of public health protection (CHP); G2 Fundamental importance of problem (FIP); G3 Applied importance of problem (AIP); G4 Prospectivity of problem (PRP); G5 Complexity—influence of the results obtained in solving this problem on development of other problems (COP); G6 Level of progress of theoretical studies on a problem in comparison with a foreign level (LTP); G7 Level of progress of experimental studies on a problem in comparison with a foreign level (LEP).

So, the scale of the criterion G1 Problem contribution to solving problems of public health protection was as follows:
g1^1 — studies within this problem make a main contribution to solving the most important problems of public health protection aimed at reducing general and occupational morbidity, disability and mortality;
g1^2 — studies within this problem make a significant contribution to solving some of the most important problems of public health protection;
g1^3 — studies within this problem do not make a significant contribution to solving the most important problems of public health protection.

The results of individual and collective estimates of the problem Oi upon the general criterion G0 and the particular criteria G1–G7 were represented by multisets of the type A_i^s (7.1) and A_i (7.2) over the sets G0 = {g0^1, g0^2, g0^3, g0^4, g0^5} and G = {g1^1, g1^2, g1^3; g2^1, g2^2, g2^3; g3^1, g3^2, g3^3; g4^1, g4^2, g4^3, g4^4; g5^1, g5^2, g5^3; g6^1, g6^2, g6^3, g6^4; g7^1, g7^2, g7^3, g7^4}, which combine the grades of the criteria scales.

In addition to evaluating problems upon the general and particular criteria, experts also determined their relative priority. In the matrices of paired comparisons of problems, preference, equivalence and incomparability of problems were allowed. The interview results were processed using the RAMPA and sorting methods. Distributions of expert estimates on problems were checked for possible inconsistency of expert opinions. A presence of cyclical triads of estimates in individual matrices of paired comparisons was revealed. A large number of triads was interpreted as evidence of inconsistency in the expert judgments. Such experts were either questioned repeatedly, or their information was completely deleted from further consideration. When comparing problems by relative priority, in about 8% of cases the null hypothesis on a uniform distribution of individual rankings could not be rejected at the significance level α = 0.05.

Table 7.3 Priority scientific problems in the field of ophthalmology

Problem                          r_PP   Criteria estimates                  r_GP
Myopia                             1    p1^1, p2^2, p3^2, p4^2, p5^2          1
Glaucoma                           2    p1^2, p2^2, p3^2, p4^2, p5^2          2.5
Eye vascular diseases              3    p1^2, p2^2, p3^2, p4^2, p5^2          2.5
Diseases of eye optical media      4    p1^2, p2^3, p3^2, p4^2, p5^2          4
Damages to vision organ            5    p1^2, p2^3, p3^2, p4^3, p5^2          5

Incomparable combinations of estimates were analyzed by the chairmen of the scientific councils of the USSR AMS supervising the evaluated problems, who acted as super-decision makers. Further, the consistency of expert opinions was checked, and the problems were ranked. With a satisfactory consistency of individual estimates, considering all experts to be equally competent and the importance of criteria to be equal for all experts, we built collective orderings R_Gl^gr, l = 0,…,5, of problems upon the general criterion G0 and the particular criteria G1–G5; the collective ordering R_GP^gr of problems that combines the rankings R_Gl^gr; and the collective ordering R_PP^agg that aggregates individual expert rankings R_PP of problems by relative priority. The results of expert evaluation of scientific problem priority in the field of ophthalmology are shown as an illustration in Table 7.3, where the ranks r_PP and r_GP of problems in the rankings R_PP^agg and R_GP^gr and the estimates upon the criteria G1–G5 are indicated.

For a significant number of scientific directions, rankings of problems obtained by the methods of sorting and pairwise comparisons were quite close. This made it possible to distinguish groups of leaders among the problems, which are the priority problems. The agreement of the expert estimates was satisfactory: the Spearman coefficient ρ of rank correlation between the corresponding rankings fluctuated between 0.73 and 1.0, averaging 0.88. Subgroups of the most consistent expert opinions were also identified, and the stability of the final orderings in relation to this procedure was checked.

Judging by the quantity and quality of errors that experts made when filling out expertise tables, the sorting method is more convenient to use than the pairwise comparison method. This may be due both to the greater simplicity of filling out tables and to the fact that it is easier to select estimates upon particular criteria than to compare all problems in pairs by their relative priority. On the other hand, the resolution ability of the sorting method is clearly lower than that of the pairwise comparison method. This is due to the fact that the number of possible estimates is small (from 3 to 5), and the number of estimates actually chosen by experts is even smaller.

Multicriteria expert assessments make it possible to identify problems that meet certain requirements for individual criteria or combinations of estimates, and to perform additional analysis of these problems. In particular, the comparative level of development of medical research in the USSR and abroad in theoretical and experimental aspects can be characterized by estimates upon the criteria LTP and LEP.
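The rank correlations quoted in this section can be recomputed with the usual Spearman formula; the sketch below uses the simple no-ties version and hypothetical ranks, so it only illustrates the calculation, not the published values.

def spearman_rho(rank_x, rank_y):
    """Spearman rank correlation between two rankings of the same items,
    given as dicts item -> rank; simple formula, assumes no tied ranks."""
    items = list(rank_x)
    n = len(items)
    d2 = sum((rank_x[i] - rank_y[i]) ** 2 for i in items)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

if __name__ == "__main__":
    # Hypothetical ranks of five problems in two collective orderings.
    r_pp = {"P1": 1, "P2": 2, "P3": 3, "P4": 4, "P5": 5}
    r_gp = {"P1": 1, "P2": 3, "P3": 2, "P4": 4, "P5": 5}
    print(round(spearman_rho(r_pp, r_gp), 2))   # 0.9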


The results of priority analysis of scientific directions and problems were used in the USSR Academy of Medical Sciences when developing long-term research plans, forecasts of medical science progress within the framework of the Complex Program of Scientific and Technological Progress of the USSR for 1986–2005 [4].

7.3 Formation of Scientific and Technological Program

Selection and evaluation of proposals are important elements of decision-making processes, for example, in the formation and implementation of the state scientific and technical policy, or in the choice of a progress strategy for a large industrial company that develops and manufactures high-tech products and advanced technologies. It is necessary not only to compare the proposals received, but also, taking into account economic, production, market conditions and other factors, to determine and form the most promising areas of activity, ensuring the growth of business activity and high efficiency of results.

Competitive selection of research projects is usually performed by special organizational structures. For example, in the USSR State Committee for Science and Technology, the USSR Academy of Sciences, and the Russian Academy of Sciences, this function was performed by scientific councils and competition commissions, which included leading scientists of the country representing various areas of knowledge. The selected applications must meet certain requirements for their qualities, which are specified by the competition organizer, express its policy and reflect the competition peculiarities. Competition commissions select projects based on recommendations of experts and specialists in the relevant areas of science. Members of the competition commission can have a fairly large impact on the final results, although in some cases the opinion of the head of the commission is decisive. Scientists, both participating in the competition and conducting examination of applications, may have various interests, including opposite ones, which must be taken into account in the competitive selection of proposals.

To increase the reliability of information received from experts, it is advisable to evaluate competitive applications in a professional language familiar to the experts. For this, qualitative criteria with ordinal or nominal scales, which have detailed formulations of grades, are most suitable. A collection of such verbal estimates characterizes different "qualities" of the project, expressed by attributes. Opinions and recommendations of experts are individual rules for sorting projects by their properties. To accept or reject an application, it is necessary to construct an aggregated decision rule that generalizes the sorting rules of individual experts and is described in the natural language of verbal gradations of criteria scales.

In 1987–1988, the State scientific and technical program on high-temperature superconductivity was formed on the basis of competitive selection of proposals under the guidance of the Inter-agency scientific council headed by the President of the USSR Academy of Sciences [8–12]. In order to divide applications for


research and developments into groups corresponding to the Program goals, the Council formed competition commissions for all sections of the Program. Competition commissions examined and preliminary selected projects based on expert estimates upon criteria approved by the Council, and expert recommendations. The Council made a final decision on the project inclusion in the Program and its funding. Three experts evaluated each application for participation in the Program upon the following substantive criteria with verbal scales: Q1 Project importance for the Program; Q2 Prospectivity of project; Q3 Novelty of the approach to solving tasks; Q4 Qualification of the project team; Q5 Resource provision of works; Q6 Possibility of rapid realization of results in practice. For example, a scale of the criterion Q1 Project importance for the Program had the following gradations: q11 —a project ensures achieving one of the main Program goals; q12 —a project contributes to achieving one of the Program goals; q13 —a project is indirectly related to the Program goals. A scale of the criterion Q6 Possibility of rapid realization of results in practice looked as follows: q61 —results will have a sufficiently high degree of manufacturability, ensuring their rapid use in practice; q62 —additional research and developments will be required to use the planned results in practice; q63 —a work is mostly theoretical. Each expert, together with an application evaluation for all substantive criteria, gave one of the following recommendations: r 1 —include a project in the Program; r 2 —reject a project; r 3 —send a project for revision and consider later. Experts examined applications independently of each other without coordinating their opinions. Experts’ recommendations were essentially individual rules for the preliminary classification (sorting) of applications under consideration, which may differ. Based on expert conclusions, a competition commission, after a joint discussion of applications, made recommendations on the project inclusion in a relevant section of the Program and an amount of funds required for its implementation. Each competition commission made such decision independently, guided by its own considerations about the need to accept or reject an application. When several experts evaluate objects upon many qualitative criteria with verbal scales, aggregation of multicriteria estimates and individual decision rules presents significant methodological difficulties. These difficulties are due to a non-numerical nature of the data describing objects. Therefore, in practice, when processing the results of expertise, various heuristic procedures were used, followed by analysis of the solutions obtained.


During the preliminary selection of projects for their inclusion in the Program, competition commissions first of all excluded projects that had the worst marks upon substantive criteria. For example, the share of high importance projects (score q1 1 ) was increased from 22 to 26%, the share of less significant projects (score q1 2 ) decreased from 72 to 71%, and the share of projects that are indirectly related to the Program goals (score q1 3 ) decreased from 6 to 3% (twice!). After the preliminary selection of projects by competition commissions, a database of expert assessments and recommendations was formed, with which the deputy chairman of the Inter-agency scientific council on the Program worked. The choice was made using a fairly simple, flexible and easily interpreted method for constructing cutting hyperplanes in a multidimensional criterion space. The leader formed different decision rules (options of science policy), specifying certain restrictions on values of criteria estimates. Receiving lists of projects that satisfy the given restrictions, the leader relatively quickly found solutions that, in his opinion, most coincide with the recommendations of competition commissions. As a result, a decision rule was formulated, which reflected rather well the recommendations of all competition commissions [8, 13]. This rule was as follows: “A project included in the Program must either be of high importance to achieving one of the main Program goals, or contribute to achieving one of the Program goals; the project team must be one of the best research teams or their experience and qualifications must be at the level sufficient to perform works; the project team must have at least the main part of material and technical resources” (scores: q1 1 or q1 2 ; AND q4 1 or q4 2 ; AND q5 2 ). More than 170 projects out of almost 260 ones submitted for the competition were recommended for inclusion in the Program. Among the pre-selected projects, only four did not fully satisfy the above requirements. At the same time, projects were identified, which were rejected by commissions but satisfying the mentioned decisive rule. Discrepancies were discussed by members of competition commissions and the Inter-agency scientific council that approved the final decision on the Program composition and amounts of funding for projects. Later, using the MASKA method, a generalized rule of the form “IF …, THEN …” for collective classification of objects present in several versions, which aggregates individual decision rules, was constructed. This method was tested on an array of expert assessments obtained previously during formation of the State scientific and technical program on high-temperature superconductivity. So, there are 259 applications evaluated by three experts upon six criteria Q1 – Q6 with verbal grading scales. All experts are considered equally competent, an importance of criteria for all experts is equal, and an expert recommends either to accept a project (include in the class Da ), or to reject a project or to consider a project later (include in the class Db ). It is required, taking into account individual recommendations of experts, to construct collective rules for assigning applications to one of the following classes: Da \Dac of unconditionally supported (preferable) projects, Db \Dbc of unconditionally rejected (non-preferable) projects, Dc = Dac ∪ Dbc of contradictory classified projects.


Let us present the individual and collective estimates of the project Oi upon the criteria Q1–Q6 and the recommendations of experts as multisets A_i^s (4.10) and A_i (4.11) over the set Q' = Q1 ∪ … ∪ Q6 ∪ R that combines the criteria Q1–Q6 and the sorting attribute R = {ra, rb}, where ra is to accept a project and rb is to reject or put aside a project. Recall that in these multisets the multiplicity function k_{A_i^s}(q_l^{e_l}) = 1 if the expert s has estimated the project Oi with a score q_l^{e_l}, e_l = 1, …, h_l upon the criterion Ql, l = 1, …, 6, and k_{A_i^s}(q_l^{e_l}) = 0 otherwise. The multiplicities k_{A_i}(q_l^{e_l}) and k_{A_i}(r_j) show how many experts estimated the project Oi with a score q_l^{e_l} and gave a recommendation ra or rb. The multisets A_i^s and A_i also express the individual and collective decision rules for sorting projects into the classes Da and Db, depending on the scores upon the criteria Q1–Q6.

Below are the data illustrating the array of expert estimates of projects. Table 7.4 presents a part of the decision table H' = ‖k_ij‖_{259×(20+2)} that characterizes the projects O1–O259 submitted for the competition. Rows of Table 7.4 are multisets A_i (7.2), which are sums of the multisets A_i^s (7.1). Elements k_ij in the columns q_l^{e_l}, l = 1, …, 6 are the numbers of experts who evaluated the project upon the meaningful attributes; elements k_ij in the columns ra, rb are the numbers of experts who assigned the project to one of the classes Da, Db. Projects O1–O175 are included in the class Da of supported projects, projects O176–O259 are included in the class Db of rejected projects. Note that the projects O175 and O176 have the same scores upon all criteria, but are included in different classes due to the difference of individual sorting rules. Table 7.5 presents the aggregated decision table L = ‖k'_ij‖_{2×(20+2)} that

characterizes the classes Da and Db constructed according to the following rule of votes' majority: "A project Oi is included in the class Da of supported projects if k_ij(ra) ≥ k_ij(rb); otherwise, the project is included in the class Db of rejected projects". Rows of Table 7.5 are the multisets Ca, Cb, which are sums of the multisets A_i describing the projects included in the classes Da, Db. Elements k'_ij are sums of the corresponding elements k_ij from the columns of Table 7.4.
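The votes'-majority rule just quoted, together with the summation that produces the class multisets Ca and Cb, can be sketched as follows; the project data are hypothetical, and only the aggregation logic follows the text.

from collections import Counter

def split_by_majority(projects):
    """projects: dict id -> Counter of grade multiplicities plus vote counts
    'ra' (accept) and 'rb' (reject), as in the rows of Table 7.4.
    Returns (class_a_ids, class_b_ids, C_a, C_b), where C_a and C_b are the
    aggregated multisets of the two classes built by the rule k(ra) >= k(rb)."""
    ids_a, ids_b = [], []
    c_a, c_b = Counter(), Counter()
    for pid, counts in projects.items():
        if counts["ra"] >= counts["rb"]:
            ids_a.append(pid)
            c_a.update(counts)
        else:
            ids_b.append(pid)
            c_b.update(counts)
    return ids_a, ids_b, c_a, c_b

if __name__ == "__main__":
    # Two hypothetical projects assessed by three experts (partial grade sets).
    projects = {
        "O1": Counter({"q1_1": 1, "q1_2": 2, "q4_1": 2, "q4_2": 1, "ra": 3, "rb": 0}),
        "O2": Counter({"q1_2": 2, "q1_3": 1, "q4_3": 2, "q4_4": 1, "ra": 1, "rb": 2}),
    }
    ids_a, ids_b, c_a, c_b = split_by_majority(projects)
    print(ids_a, ids_b)          # ['O1'] ['O2']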

Table 7.4 Decision table (expert estimates of competitive applications)

O\Q'   q1^1 q1^2 q1^3   q2^1 q2^2 q2^3   q3^1 q3^2 q3^3   q4^1 q4^2 q4^3 q4^4   q5^1 q5^2 q5^3 q5^4   q6^1 q6^2 q6^3   ra  rb
A1        1    2    0      2    1    0      3    0    0      2    1    0    0      0    2    1    0      2    1    0    3   0
…
A175      1    1    1      0    2    1      1    2    0      0    2    1    0      0    1    2    0      0    0    3    2   1
A176      1    1    1      0    2    1      1    2    0      0    2    1    0      0    1    2    0      0    0    3    1   2
…
A259      0    2    1      0    1    2      0    3    0      0    1    1    1      0    0    2    1      0    3    0    0   3

Table 7.5 Aggregated decision table (expert estimates of application classes)

D\Q'   q1^1 q1^2 q1^3   q2^1 q2^2 q2^3   q3^1 q3^2 q3^3   q4^1 q4^2 q4^3 q4^4   q5^1 q5^2 q5^3 q5^4   q6^1 q6^2 q6^3    ra   rb
Ca      144  360   21     81  324  120     99  336   90    219  297    9    0     72  435   18    0    126  300   99   510   15
Cb       45  156   51     36  111  105     27   93  132     51  132   63    6     60  147   30   15     45  135   72    78  174


Table 7.6 Distances between meaningful and categorical multisets, degrees of approximation of classifying attributes

         Q1      Q2      Q3      Q4      Q5      Q6      R
d       333     297     303     393     327     273     591
Vl    0.563   0.503   0.517   0.665   0.553   0.462   1.000

Table 7.6 gives the maximum distances d(Q*_la, Q*_lb) and d(Ra, Rb) between the meaningful multisets Q*_la, Q*_lb and the categorical multisets Ra, Rb in the metric space (B, d) of multisets. Subsets included in the meaningful and categorical multisets form rows of the inverted matrix L^(−1) = ‖k_ji‖_{(20+2)×2}, columns of which are the multisets Ca and Cb from Table 7.5. The multisets Q*_la, Q*_lb, found from the solution of the l optimization problems (4.16), consist of classifying attributes specifying the best decomposition of projects into the classes Da, Db. Values of the approximation degree Vl = d(Q*_la, Q*_lb)/d(Ra, Rb), which determines the significance of each l-th group of classifying attributes, are also indicated. Ranking of the classifying attributes by significance shows that the criterion Q4, which characterizes the experience and qualification of the project team, is the most important criterion for project selection; the criterion Q1, which estimates the project importance for achieving the Program goals, and the criterion Q5, which reflects the resource provision of works, are the next in importance.

Let us build the generalized group decision rules for sorting projects. The classifying attributes Q*_a = {q4^1, q4^2; q1^1, q1^2; q5^1, q5^2; q3^1, q3^2; q2^1, q2^2; q6^1, q6^2}, which are ordered by significance, are characteristic for the class Da, and the classifying attributes Q*_b = {q4^3, q4^4; q1^3; q5^3, q5^4; q3^3; q2^3; q6^3} are characteristic for the class Db. Choosing some desired level of the approximation degree, we obtain the following generalized group decision rules for the selection of projects in natural language.

"If the project team is among the best research teams or has experience and qualification sufficient to perform the works, then it is recommended to support the project" (scores: q4^1 or q4^2; approximation degree Vl ≥ 0.66).

"If the project ensures achieving one of the main Program goals or contributes to achieving one of the Program goals; the project team is among the best research teams or has experience, qualification, material and technical resources sufficient to perform the works, then it is recommended to support the project" (scores: q4^1 or q4^2; AND q1^1 or q1^2; AND q5^1 or q5^2; approximation degree Vl ≥ 0.55).

It is remarkable that the last rule almost completely coincides with the decision rule for project inclusion in the Program, previously found empirically by the deputy chairman of the Inter-agency scientific council on the Program. Similarly, the generalized group decision rules for rejecting projects are found. One can also find the rules applied by some of the competition commissions and compare them with the generalized group decision rules for the Program as a whole.

Let us improve the generalized group sorting rules to identify consistently and inconsistently classified projects. For this, we consider sequentially all classifying attributes by one, two, three, and so on, and determine such combinations of attributes that provide the maximum difference between the numbers of correctly and incorrectly classified projects.


These attributes form the improved group rules for sorting projects into the classes: Da\Dac of unconditionally supported projects, Db\Dbc of unconditionally rejected projects, and Dc = Dac ∪ Dbc of contradictorily classified projects [14].

Table 7.7 shows combinations of the classifying attributes included in the improved group sorting rules that allow finding the class Da\Dac of unconditionally supported projects. Here Na is the number of correctly classified objects and Nac is the number of incorrectly classified objects. The improved group rule to determine unconditionally supported projects is written in natural language as follows: "If the project ensures achieving one of the main Program goals or contributes to achieving one of the Program goals; has high or sufficient prospectivity; offers completely new or modernized approaches to solving the assigned tasks; the project team is among the best research teams or has experience and qualification sufficient to perform the works, then it is recommended to unconditionally support the project" (scores: q1^1 or q1^2; AND q2^1 or q2^2; AND q3^1 or q3^2; AND q4^1 or q4^2).

Table 7.7 Attribute combinations for including a project into the class Da\Dac of unconditionally supported projects

Attributes                                                                      Na    Nac   Na − Nac
q1^1, q1^2                                                                      173    55     118
q2^1, q2^2                                                                      172    24     148
q3^1, q3^2                                                                      172    27     145
q4^1, q4^2                                                                      174    57     117
q5^1, q5^2                                                                      174    62     112
q6^1, q6^2                                                                      172    64     108
q2^1, q2^2 and q1^1, q1^2                                                       171    20     151
q2^1, q2^2 and q3^1, q3^2                                                       170     8     162
q2^1, q2^2 and q4^1, q4^2                                                       172    21     151
q2^1, q2^2 and q5^1, q5^2                                                       172    23     149
q2^1, q2^2 and q6^1, q6^2                                                       171    38     133
q2^1, q2^2 and q3^1, q3^2 and q1^1, q1^2                                        170     8     162
q2^1, q2^2 and q3^1, q3^2 and q5^1, q5^2                                        170     7     163
q2^1, q2^2 and q3^1, q3^2 and q4^1, q4^2                                        171    13     158
q2^1, q2^2 and q3^1, q3^2 and q6^1, q6^2                                        169     3     166
q2^1, q2^2 and q3^1, q3^2 and q1^1, q1^2 and q5^1, q5^2                         169     4     165
q2^1, q2^2 and q3^1, q3^2 and q1^1, q1^2 and q4^1, q4^2                         170     9     161
q2^1, q2^2 and q3^1, q3^2 and q1^1, q1^2 and q6^1, q6^2                         169     4     165
q2^1, q2^2 and q3^1, q3^2 and q1^1, q1^2 and q4^1, q4^2 and q6^1, q6^2          170     6     164
q2^1, q2^2 and q3^1, q3^2 and q1^1, q1^2 and q4^1, q4^2 and q5^1, q5^2          169     4     165
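The search for attribute combinations behind Table 7.7 can be sketched as an exhaustive scan over combinations of criteria. The matching rule used below—more than half of the expert votes on the "positive" grades of every criterion in the combination—is an assumption for illustration; the actual MASKA computation is defined in Chap. 4, and the example data are hypothetical.

from collections import Counter
from itertools import combinations

# Grades of each criterion taken as "positive" for the supported class; the
# groupings follow Table 7.7, while the matching rule is a simplification.
POSITIVE = {
    "Q1": ("q1_1", "q1_2"), "Q2": ("q2_1", "q2_2"), "Q3": ("q3_1", "q3_2"),
    "Q4": ("q4_1", "q4_2"), "Q5": ("q5_1", "q5_2"), "Q6": ("q6_1", "q6_2"),
}

def satisfies(project, combo, n_experts=3):
    """A project is taken to satisfy a combination of criteria when, for each
    criterion, more than half of the expert votes fall on its positive grades."""
    return all(
        sum(project[g] for g in POSITIVE[c]) * 2 > n_experts for c in combo
    )

def best_combinations(class_a, class_b, size):
    """Return attribute combinations of the given size ranked by Na - Nac."""
    scores = []
    for combo in combinations(POSITIVE, size):
        na = sum(satisfies(p, combo) for p in class_a)    # correctly classified
        nac = sum(satisfies(p, combo) for p in class_b)   # incorrectly classified
        scores.append((na - nac, combo, na, nac))
    return sorted(scores, reverse=True)

if __name__ == "__main__":
    # Tiny hypothetical classes of supported (a) and rejected (b) projects.
    a = [Counter({"q1_1": 2, "q2_1": 3, "q3_2": 2}),
         Counter({"q1_2": 3, "q2_2": 2, "q3_1": 2})]
    b = [Counter({"q1_2": 2, "q2_3": 3, "q3_3": 2})]
    print(best_combinations(a, b, size=2)[0])   # best pair with its Na and Nac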


Similarly, the improved group rule that aggregates the individual sorting rules is built to determine unconditionally rejected projects. The improved classes Da\Dac of unconditionally supported projects, Db\Dbc of unconditionally rejected projects, and Dac, Dbc of contradictorily classified projects, which were formed upon the improved group rules, include the following projects:
class Da\Dac: projects O1–O15, O17–O58, O60–O174;
class Dac: projects O16, O59, O175;
class Db\Dbc: projects O176–O184, O186–O244, O246–O259;
class Dbc: projects O185, O245.

The obtained distributions of projects by classes almost completely coincided with the recommendations of experts and competition commissions. The methodological principles for the selection of multicriteria alternatives on a competitive basis, which were proposed for the formation of the State scientific and technical program on high-temperature superconductivity, were also used in the formation of the State scientific and technical program "Perspective information technologies" and of the programs of basic research of the USSR Academy of Sciences [9, 15–17]. The collection of meaningful criteria and sorting attributes for different programs may differ. Thus, in the programs of the USSR Academy of Sciences, the criteria Clarity of the scientific problem formulation and Scientific groundwork available to the project team were used to evaluate projects instead of the criterion Possibility of practical use of results.

This approach has also demonstrated its reliability and efficiency in the evaluation of borrower creditworthiness, which is known as the problem of credit scoring. Every year banks and credit organizations have significant losses due to the non-return of funds. In existing databases, huge amounts of various personal information (gender, age, marital status, residence place, education, occupation, etc.) and financial indicators (income and expenditure, delivery and recovery of credits, payments for purchases and services, account balances, cash withdrawals, and the like) have been accumulated. Analysis of such data makes it possible to evaluate the possible solvency of borrowers. To solve the problem of credit scoring, linear and logistic regression, linear programming, decision trees, neural networks, and other methods are used [18]. The main disadvantage of these methods and software systems is that both the primary data characterizing borrowers and the issued recommendations for lending to borrowers are described by numerical indicators. This leads to mistakes when deciding whether to extend credit to a person or refuse to provide a loan. Representation of objects with multisets of qualitative attributes and the use of the MASKA method help to overcome the noted drawbacks [19, 20]. Group decision rules allow to explain the resulting classifications of objects, to identify discrepancies in individual sorting rules, and to argue earnestly for the decisions made.


7.4 Project Competition in Scientific Foundation

Foundations that finance research are important elements of the organization of science in any country with high scientific potential. Scientific foundations play the role of coordinating centers of national scientific communities, through which scientists have the opportunity to receive support for their work. The key principles of foundation activity are: open competitions on the announced topics; independent expertise of works as the main way to evaluate projects; publication of the results obtained in prestigious scientific journals. The main tool for the expertise of competitive applications and obtained results is the reviewing of works. This mechanism is widely used in public and private organizations that provide grants for scientific research, in particular, in the Russian Foundation for Basic Research (RFBR), the Russian Science Foundation (RSF), the USA National Science Foundation (NSF), and other foundations [21, 22]. Each fund has its own procedures for holding competitions and organizing peer review, and its own rules and criteria for the selection and evaluation of projects.

For instance, in the RFBR, the expertise of projects consists of several stages and combines individual assessments of experts in the relevant area of knowledge and collective discussion of expert conclusions at the Expert Board of the Foundation. Experts evaluate the content of an application (at the initial stage of project selection), the obtained results (at the intermediate stage of project implementation), and the final results (at the final stage of project completion). In different kinds of competitions, different collections of criteria for evaluating applications and reports are used, but all criteria have scales with detailed verbal formulations of quality gradations. This approach makes it possible to operate with estimates that are unified, to a certain extent, for representatives of different knowledge areas, and to obtain more reliable information from experts. Initially, every project is independently reviewed by several experts, usually two or three. Each expert gives a reasoned multicriteria assessment of research quality, as well as a recommendation to support the work, choosing for every criterion only one of the available gradations. The Expert Board in each knowledge area considers the recommendations of experts and their assessments of applications and reports, and gives judgments on the support of new projects, the continuation or termination of previously approved projects, and the amounts of project funding. On the basis of the conclusions of the Expert Boards in all areas of knowledge, the RFBR Council made the final decision on the support of projects and the distribution of funds. To increase the validity of decisions on acceptance or rejection of projects, it is very useful to analyze the review results, which present in a generalized and concentrated form the opinions of many experts, including contradictory ones.

In the RFBR competition for goal-oriented basic research performed in the interests of the Russian Federal agencies and departments, experts evaluated applications upon 11 qualitative criteria [21, 23, 24]. The group 'Scientific characteristics of project' included 9 criteria: P1 Fundamental level of project, P2 Orientation of results, P3 Goals of project, P4 Methods for achieving the project goal, P5 Character of research,


P6 Scientific significance of project, P7 Novelty of the proposed solutions, P8 Potential of the project team, P9 Technical equipment. The group 'Assessment of possibilities for the practical implementation of project' consisted of two criteria: P10 Final stage of basic research proposed in project, P11 Scopes of study results' applicability. Each criterion, excluding the criterion P2, had an ordinal or nominal rating scale with detailed verbal formulations of quality gradations. For instance, the scale of the criterion P7 Novelty of the proposed solutions was as follows:
p7^1 — solutions are formulated originally and significantly exceed the level of existing ones;
p7^2 — solutions are at the level of existing ones;
p7^3 — solutions are inferior to some existing ones.

Additional analysis showed that the scale of the criterion P2 combines, in fact, two criteria that characterize the results' orientation on the progress of new technologies and their implementation in various Federal agencies (industries). For this reason, the criterion P2 was excluded from further consideration. Each expert evaluated a competition application according to the above criteria and also gave his/her conclusion on the feasibility of the project support, setting one of the following marks: r1 — unconditional support (score '5'), r2 — advisable support (score '4'), r3 — possible support (score '3'), r4 — no support (score '2'). Note that the given numerical estimates are only symbols, not numbers with which arithmetic operations are performed. The multicriteria assessments of a project and the expert recommendation together represent his/her individual rule for accepting or rejecting the project.

The proposed approach to analyzing the results of the expertise of goal-oriented basic research in the RFBR 2006 competition was tested in the following knowledge areas: Physics and astronomy—in total, 127 projects were submitted for the competition, of which 39 were supported and 88 were rejected; Biology and medical science—in total, 252 projects were submitted for the competition, of which 68 were supported and 184 were rejected.

For each of the knowledge areas, using the MASKA method, group decision rules for application selection were built, based on the project evaluations and expert recommendations. Projects were specified as multisets A_i (4.10) over the set P' = P1 ∪ P3 ∪ … ∪ P11 ∪ R of estimate grades upon the scales of criteria and conclusion. All experts were considered to be equally competent, and the importance of criteria was equal for all experts. The following rules of votes' majority were accepted to combine the individual sorting rules: "A project is one of the unconditionally supported ones when, in conclusion, all experts marked the score '5' (k_{A_i}(r1) = 1, k_{A_i}(r2) = k_{A_i}(r3) = k_{A_i}(r4) = 0), or the number of scores '5' was not less than the number of the other scores (k_{A_i}(r1) ≥ k_{A_i}(r2) + k_{A_i}(r3) + k_{A_i}(r4)). A project is categorized as unconditionally rejected when none of the experts marked the score '5' (k_{A_i}(r1) = 0)".
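A minimal sketch of this majority rule is given below; treating all remaining cases as the contradictory class Dc is an assumption consistent with the classes introduced earlier, and the vote counts are hypothetical.

from collections import Counter

def classify_application(marks):
    """marks: Counter of expert conclusions, e.g. Counter({'r1': 2, 'r2': 1}).
    Implements the majority rule quoted above: unconditional support when the
    number of scores '5' is not less than the number of all other scores,
    unconditional rejection when nobody gave the score '5'."""
    if marks["r1"] >= marks["r2"] + marks["r3"] + marks["r4"]:
        return "Da"            # unconditionally supported
    if marks["r1"] == 0:
        return "Db"            # unconditionally rejected
    return "Dc"                # contradictory, needs additional consideration

if __name__ == "__main__":
    print(classify_application(Counter({"r1": 2, "r2": 1})))   # Da
    print(classify_application(Counter({"r2": 2, "r3": 1})))   # Db
    print(classify_application(Counter({"r1": 1, "r2": 2})))   # Dc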


Collections of the classifying attributes, which determine the belonging of a project to the classes Da of supported projects and Db of rejected projects, were the same in both knowledge areas (in decreasing order of their relative significance): P*_a = {p6^1, p6^2; p11^1, p11^2; p10^1, p10^2; p5^3}, P*_b = {p6^3; p11^3; p10^3, p10^4; p5^1, p5^2}. The maximum difference between the numbers of correctly and incorrectly classified projects for assigning a project to the improved class Da\Dac was provided by the following rules: for the criteria Pl, l = 6, 11 — k_{A_i}(p_l^1) + k_{A_i}(p_l^2) > k_{A_i}(p_l^3); for the criterion P10 — k_{A_i}(p10^1) + k_{A_i}(p10^2) > k_{A_i}(p10^3) + k_{A_i}(p10^4); for the criterion P5 — k_{A_i}(p5^1) + k_{A_i}(p5^2) < k_{A_i}(p5^3). The rules for assigning projects to the improved class Db\Dbc were obtained by replacing the inequality signs with the opposite ones.

The most significant for the unconditional support of projects were estimates upon the following criteria: P6, P11 for Physics and astronomy; P6, P11, P10 for Biology and medical science. In fact, combinations of expert assessments only upon these criteria determined the projects which were included in the appropriate decision class or required additional consideration due to a discrepancy between individual expert conclusions. Taking these factors into account provided a significant reduction in the time required to find classifying attributes and sort projects. The improved group rule for project acceptance, written in natural language, looked like this: "If a project has exceptionally high or great scientific significance (scores p6^1 or p6^2); mass or interdisciplinary scopes of study results' applicability (scores p11^1 or p11^2); the research proposed in the project is completed with a laboratory sample or key elements of development (scores p10^1 or p10^2), then this project should be unconditionally supported".

The constructed rules for project classification made it possible to determine the most significant criteria that have a decisive influence on project selection, to evaluate the quality and consistency of expert assessments, and to identify existing discrepancies in expert opinions. Analysis of the expertise results showed that not all experts are sufficiently attentive and accurate when reviewing projects. Thus, the level of consistency of expert assessments upon many criteria and individual conclusions on project support was low. Quite a lot of projects were identified that require additional analysis due to discrepancies between content evaluations and expert conclusions: in Physics and astronomy—33 projects, or 26% of the total number of competition applications in the decision table, including 15 projects at the preliminary stage of the competition and 18 projects based on the results of the algorithms' work; in Biology and medical science—84 projects, or 33% of the total number of competition applications in the decision table, including 44 projects at the preliminary stage of the competition and 40 projects based on the results of the algorithms' work.

In the competition of the Russian Humanitarian Science Foundation (RHSF) for initiative basic research, experts evaluated applications upon 20 qualitative criteria. These criteria characterized the scientific level, the implementation potential, the scientific qualification of the team, and the project funding [21, 25].
The first group ‘Assessment of a project scientific level’ included 7 criteria: S 11 Fundamentality of study, S 12 Scientific significance of expected study results, S 13


Topicality of scientific research problem, S14 Complexity of study, S15 Scientific novelty of study, S16 State of the art on the project problem—the main research directions in world science, S17 Correspondence of project title to scientific research problem. The second group 'Assessment of a project implementation potential' consisted of 7 criteria: S21 Adequacy of research methods and tools used, S22 Novelty of study methodological tools, S23 Adequacy of information and other resources to study goals, S24 General plan of work, S25 Clarity of presentation and logical interconnection of goals, tasks, research methods, general plan of work and expected results, S26 Potential possibilities of using research results in solving applied problems, S27 Presentation form of project results. The third group 'Assessment of a project team scientific qualification' included 5 criteria: S31 Qualification of the project leader, S32 Qualification of the project team, S33 Scientific groundwork for project, S34 Participation of foreign researchers, S35 Ages of researchers. The fourth group 'Assessment of project financing' consisted of one criterion: S41 Reasonableness of the declared amount of funding.

Each criterion had an ordinal or nominal scale of verbal estimates with two or three grades of quality, excluding the criterion S27, which had a scale with seven grades. For example, the scale of the criterion S11 Fundamentality of study looked like this:
s11^1 — study is aimed at searching for general regularities (nature, structure and mechanisms) of phenomena, processes or objects, and determining relationships between them;
s11^2 — study is mainly descriptive, without determining relationships between phenomena;
s11^3 — study is not basic.

The criterion S12 Scientific significance of expected study results had the following scale:
s12^1 — results can qualitatively change the modern understanding of the nature, structure and consistent patterns of phenomena (objects) studied in the given field of science;
s12^2 — results can contribute to enhancing the existing knowledge about phenomena (objects) studied in the given field of science, and their interconnections.

The scale of the criterion S32 Qualification of the project team is:
s32^1 — the qualification of the project team is sufficient to accomplish fully the stated research tasks: there are scientific grants, papers in leading peer-reviewed scientific journals included in the citation systems (SSCI, AHCI, RSCI, etc.), and so on;
s32^2 — the provided information does not allow to evaluate the qualification of the project team and its sufficiency for implementation of the stated research tasks, or there are no data.

Each expert also recommended to support a project (r1—yes) or not to support a project (r2—no). The results of the RHSF 2013 competition were analyzed using several methods of group decision making. Additional analysis showed that the criteria S27, S34, S35 and S41 are rather informational in nature and reflect individual features of projects that are needed only when sorting projects. For this reason, these criteria were excluded from further consideration.


In the scientific direction on General psychology, history and methods of psychology, 39 projects were submitted, of which 10 were supported and 29 were rejected. Multicriteria estimates of applications, given by three experts, were written as multisets Ai (7.2) over the set S = S11 ∪ . . . ∪ S17 ∪ S21 ∪ . . . ∪ S26 ∪ S31 ∪ . . . ∪ S33 of grades of criteria scales. Projects were ordered in four different ways. The results of processing expert assessments are presented in Table 7.8. The first ranking of projects is built using the ARAMIS method for collective ordering of multi-attribute objects. All projects are ordered by the value of the indicator l(Oi) of proximity of the project Oi to the hypothetically best project O+, calculated simultaneously for all 16 criteria S11–S17, S21–S26, S31–S33. The second ranking of projects is built using the Borda procedure. This ranking combines 16 separate rankings of projects that are built for every criterion with the ARAMIS method. All projects are ordered by the value of the Borda rank fB(Oi), averaged for each project over the ranks in the individual rankings. The third ranking of projects is built using the method of lexicographic ordering of multi-attribute objects, which is based on their sequential comparison according to the numbers of separate estimate grades. Firstly, objects are ordered by the number of high scores (HS) or first places, then by the number of middle scores (MS) or second places, then by the number of low scores (LS) or third places. The fourth result is a partition of the project collection into two ordered groups Da (the best projects) and Db (the worst projects), which are constructed with the CLAVA-HI method for collective hierarchical clustering of multi-attribute objects by their proximity in a multiset metric space. As follows from Table 7.8, the head parts of the two rankings built with the ARAMIS and lexicographic ordering methods completely coincide. These parts include eight projects O30, O02, O08, O38, O13, O14, O34, O11 out of the ten recommended by experts for funding and one rejected project O18. The same part of the ranking constructed with the Borda procedure differs very little from the two rankings above. Only two pairs of projects, O38, O13 and O14, O34, have a rearrangement of places, which practically does not change the general order of projects. The middle and tail parts of all three rankings, which include the rest of the rejected projects and two projects O05, O31 recommended for support, also coincide quite well. The partition of applications into two clusters is somewhat different from the rankings. The first, more preferable cluster Da of the best projects consists of 14 projects, including 7 supported and 7 rejected. The second, less preferable cluster Db of the worst projects contains 25 projects, of which 3 are supported and 22 rejected. All projects in any cluster are equivalent, so their order does not matter. The cluster Da includes the same supported projects as the three rankings above, with the exception of the project O34, which is included in the cluster Db together with the supported projects O05 and O31. A slight difference between the clustering and the orderings of projects can be explained by the fact that projects are combined into groups according to another principle, namely, according to the formal proximity of the multisets representing these projects. The above results almost completely coincided with the conclusion of the RHSF Expert Council on project support.
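The lexicographic ranking can be reproduced mechanically from the counts of high, middle and low scores. The sketch below illustrates this ordering rule (it is not the book's implementation); the counts are taken from Table 7.8.

```python
# Sketch of the lexicographic ordering: projects are compared first by the
# number of high scores, then by middle scores, then by low scores.
projects = {  # (HS, MS, LS) counts from Table 7.8
    "O30": (46, 5, 0),
    "O02": (45, 6, 0),
    "O18": (43, 8, 0),
    "O03": (1, 19, 31),
}

ranking = sorted(projects, key=projects.get, reverse=True)
print(ranking)  # ['O30', 'O02', 'O18', 'O03']
```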


Table 7.8 Rankings of RHSF competition projects by different methods

ARAMIS (Oi, Rec, l(Oi)) | Borda (Oi, Rec, fB(Oi)) | Lexicography (Oi, Rec, HS, MS, LS) | Clustering (Oi, Rec, Dg)
O30 yes 0.09 | O30 yes 1.00  | O30 yes 46  5  0 | O30 yes a
O02 yes 0.11 | O02 yes 1.30  | O02 yes 45  6  0 | O02 yes a
O18 no  0.14 | O18 no  1.60  | O18 no  43  8  0 | O18 no  a
O08 yes 0.15 | O08 yes 1.95  | O08 yes 42  9  0 | O08 yes a
O38 yes 0.16 | O13 yes 2.10  | O38 yes 41 10  0 | O38 yes a
O13 yes 0.18 | O38 yes 2.10  | O13 yes 40 11  0 | O13 yes a
O14 yes 0.18 | O34 yes 2.30  | O14 yes 40 10  1 | O14 yes a
O34 yes 0.18 | O14 yes 2.40  | O34 yes 40 10  1 | O11 yes a
O11 yes 0.19 | O11 yes 2.45  | O11 yes 39 12  0 | O16 no  a
O16 no  0.24 | O16 no  3.15  | O16 no  35 16  0 | O21 no  a
O28 no  0.25 | O28 no  3.60  | O21 no  34 15  2 | O24 no  a
O21 no  0.26 | O04 no  3.80  | O28 no  34 17  0 | O28 no  a
O04 no  0.28 | O21 no  3.93  | O04 no  32 18  1 | O36 no  a
O24 no  0.28 | O24 no  4.00  | O05 yes 32 14  5 | O35 no  a
O36 no  0.28 | O36 no  4.15  | O24 no  32 17  2 | O01 no  b
O05 yes 0.29 | O35 no  4.60  | O36 no  31 20  0 | O03 no  b
O10 no  0.30 | O10 no  4.65  | O10 no  29 22  0 | O04 no  b
O26 no  0.32 | O05 yes 4.77  | O25 no  28 19  4 | O05 yes b
O35 no  0.32 | O17 no  4.85  | O26 no  28 20  3 | O06 no  b
O17 no  0.33 | O26 no  4.85  | O17 no  27 22  2 | O07 no  b
O25 no  0.33 | O25 no  5.05  | O31 yes 27 18  6 | O09 no  b
O07 no  0.34 | O31 yes 5.29  | O35 no  27 24  0 | O10 no  b
O31 yes 0.35 | O07 no  5.42  | O07 no  26 22  3 | O12 no  b
O15 no  0.36 | O15 no  5.48  | O15 no  25 21  5 | O15 no  b
O20 no  0.37 | O20 no  5.79  | O06 no  24 21  6 | O17 no  b
O06 no  0.38 | O32 no  5.95  | O20 no  24 22  5 | O19 no  b
O32 no  0.38 | O06 no  5.98  | O39 no  23 23  5 | O20 no  b
O39 no  0.38 | O39 no  6.07  | O32 no  22 25  4 | O22 no  b
O23 no  0.40 | O23 no  6.40  | O23 no  21 24  6 | O23 no  b
O19 no  0.42 | O19 no  6.70  | O19 no  19 25  7 | O25 no  b
O22 no  0.42 | O37 no  6.75  | O22 no  19 26  6 | O26 no  b
O37 no  0.42 | O22 no  7.12  | O37 no  15 34  2 | O27 no  b
O01 no  0.50 | O12 no  8.30  | O01 no  12 27 12 | O29 no  b
O12 no  0.50 | O01 no  8.38  | O27 no  12 24 15 | O31 yes b
O27 no  0.52 | O27 no  8.89  | O12 no  10 31 10 | O32 no  b
O33 no  0.52 | O33 no  8.92  | O09 no   7 26 18 | O33 no  b
O09 no  0.57 | O09 no  9.88  | O33 no   7 34 10 | O34 yes b
O29 no  0.65 | O29 no  11.92 | O29 no   2 24 25 | O37 no  b
O03 no  0.71 | O03 no  13.35 | O03 no   1 19 31 | O39 no  b

This indicates a fairly high reliability and adequacy of the methods used for processing expert assessments. All four methods gave very similar results. The stable presence of the rejected project O18 among the supported ones (the third place in all rankings and a place in the best cluster Da), as well as the stable presence of the supported projects O05, O31 among the rejected ones (in the very middle places), indicates the urgent need for additional argumentation of the decisions made on these projects by the Expert Council. Studying the results of competitive application selection using the methods of verbal decision analysis made it possible to identify some "bottlenecks" in the expertise rules established in the RFBR and RHSF [22]. It is desirable that the Expert Council of the Foundation, when making decisions on supporting or rejecting projects, would formulate its policy in the form of certain requirements for application quality, which can be given as restrictions on criteria estimates. Based on the assessments and conclusions of experts, the Council would also take into account a project's compliance with this policy. For a reliable classification of projects, it is also important how many experts examine each project. If only two experts assess an application, a stalemate may arise when one expert supports a project and the other rejects it. Greater reliability of the collective recommendation is ensured when there are at least three experts. Construction of decision rules for classification is largely simplified if the same number of experts evaluates each project.

References 1. Petrovsky, A.B.: Decision support system for R&D management problems. In: Methodology and Software for Interactive Decision Support, pp. 265–270. Springer-Verlag, Berlin (1989) 2. Petrovsky, A.B., Sternin, M.Yu., Shepelev, G.I., et al.: Razrabotka sistemy analiza sostoyaniya i perspektiv razvitiya nauchnykh issledovaniy dlya otdeleniy AN SSSR estestvenno-nauchnogo profilya (Development of a system for analyzing the state and prospects of scientific research for the natural sciences branches of the USSR Academy of Sciences). Research report. VNIISI (All-Union Scientific Institute for System Research), Moscow (1991) (in Russian) 3. Larichev, O.I., Minin, V.A., Petrovsky, A.B., Shepelev, G.I.: Rossiyskaya fundamental’naya nauka v tret’em tysyacheletii (Russian basic science in the third millennium). Vestnik Rossiyskoy akademii nauk (Bulletin of the Russian Academy of Sciences) 71(1), 13–18 (2001). (in Russian)


4. Petrovsky, A.B., Pogodaev, G.V., Filippov, O.V.: Prognozirovanie i kompleksnoe planirovanie meditsinskoy nauki v SSSR (Forecasting and complex planning of medical science in the USSR). Ed. by Chernukh. A.M. Meditsina, Moscow (1984) (in Russian) 5. Petrovsky, A.B., Raushenbach, G.V., Pogodaev, G.V.: Mnogokriterial’nie ekspertnie otsenki pri formirovanii nauchnoy politiki (Multicriteria expert assessments in the formation of scientific policy). Problemy i metody prinyatiya resheniy v organizatsionnykh sistemakh upravleniya (Problems and methods of decision making in organizational management systems). Abstracts of the All-Union Conference, pp. 84–85. VNIISI (All-Union Scientific Institute for System Research), Moscow, Zvenigorod (1981) (in Russian) 6. Petrovsky, A.B., Sternin, M.Yu., Filippov, O.V., Morgoev, V.K., et al.: Razrabotka sistemy analiza, kompleksnogo planirovaniya i koordinatsii meditsinskikh nauchnykh issledovaniy po problemam soyuznogo znacheniya (Development of a system for analysis, integrated planning and coordination of medical research on the problems of All-Union significance). Research report. VONTS AMN USSR (All-Union Oncology Research Center), Moscow (1979) (in Russian) 7. Petrovsky, A.B., Sternin, M.Yu., Morgoev, V.K., Pogodaev, G.V.: Avtomatizirovannaya sistema obrabotki informatsii dlya Prezidiuma AMN SSSR (Automated system of processing information for the Presidium of the USSR Academy of Medical Sciences). Primenenie matematicheskikh metodov, vychislitel’noy tekhniki i avtomatizirovannykh sistem upravleniya v zdravookhranenii. Tezisy dokladov Vsesoyuznoy konferentsii (Application of mathematical methods, computer technology and automated control systems in health care. Abstracts of the All-Union Conference), pp. 37–40. The USSR Ministry of Health, Moscow (1981) (in Russian) 8. Larichev, O.I., Prokhorov, A.S., Petrovsky, A.B., Sternin, M.Yu., Shepelev, G.I.: Opyt planirovaniya fundamental’nykh issledovaniy na konkursnoy osnove (Experience of planning basic research on a competitive basis). Vestnik Akademii nauk SSSR (Bulletin of the Russian Academy of Sciences) 7, 51–61 (1989) (in Russian) 9. Petrovsky, A.B., Rumyantsev, V.V., Shepelev, G.I.: Sistema podderzhki poiska resheniya dlya konkursnogo otbora (Support system for searching a solution for competitive selection). Nauchno-tekhnicheskaya informatsiya. Seriya 2 (Scientific and Technical Information. Series 2) 3, 46–51 (1998) (in Russian) 10. Petrovsky, A.B., Shepelev, G.I.: Sistema podderzhki prinyatiya resheniy dlya konkursnogo otbora nauchnykh proektov (Decision support system for competitive selection of scientific projects). Problemy i metody prinyatiya unikal’nykh i povtoryayushchikhsya resheniy (Problems and methods of making unique and repeated decisions). VNIISI (All-Union Scientific Institute for System Research), Moscow, pp. 25–31 (1990) (in Russian) 11. Petrovsky, A.B., Shepelev, G.I.: Competitive selection of R&D projects by a decision support system. In: User-oriented Methodology and Techniques of Decision Analysis and Support, pp. 288–293. Springer-Verlag, Berlin (1993) 12. Petrovsky, A.B., Shepelev, G.I., Prokhorov, A.S., et al.: Razrabotka sistemy informatsionnoanaliticheskogo soprovozhdeniya gosudarstvennoy nauchno-tekhnicheskoy programmy po vysokotemperaturnoy sverkhprovodimosti (Development of a system for information and analytical support of the State scientific and technical program on high-temperature superconductivity). Research report. 
VNIISI (All-Union Scientific Institute for System Research), Moscow (1990) (in Russian) 13. Petrovsky, A.B.: Gruppovoy verbal’niy analiz resheniy (Group Verbal Decision Analysis). Nauka, Moscow (2019).(in Russian) 14. Komarova, N.A., Petrovsky, A.B.: Metod soglasovannoy gruppovoy klassifikatsii mnogopriznakovykh ob”yektov (Method of Consistent Group Classification of Multi-attribute Objects). In: Podderzhka prinyatiya resheniy. Trudy Instituta sistemnogo analiza RAN. (Decision support. Proceedings of the Institute for System Analysis of the Russian Academy of Sciences) vol. 35, pp. 19–32. LKI Publishing House, Moscow (2008) (in Russian) 15. Petrovsky, A.B.: Method for approximation of diverse individual sorting rules. Informatica 12(1), 109–118 (2001)


16. Petrovsky, A.B.: Multi-attribute sorting of qualitative objects in multiset spaces. In: Multiple Criteria Decision Making in the New Millenium. Lecture Notes in Economics and Mathematical systems, vol. 507, pp. 124–131. Springer-Verlag, Berlin (2001) 17. Petrovsky, A.B.: Multiple criteria project selection based on contradictory sorting rules. In: Information Systems Technology and its Applications. Lecture Notes in Informatics, vol. 2, pp. 199–206. Gesellshaft für Informatik, Bonn (2001) 18. Shi, Y., Wise, M., Luo, M., Lin, Y.: Data mining in credit card portfolio management: a multiple criteria decision making approach. In: Multiple Criteria Decision Making in the New Millennium. Lecture Notes in Economics and Mathematical Systems, vol. 507, pp. 427–436. Springer-Verlag, Berlin (2001) 19. Petrovsky, A.B.: Model’ otsenki kreditosposobnosti vladel’tsev kreditnykh kart po protivorechivym dannym (Model for assessing the creditability of credit card holders based on contradictory data). Iskusstvenniy intellect (Artif. Intell.) 2, 155–161 (2004). (in Russian) 20. Petrovsky, A.B.: Multi-attribute classification of credit cardholders: multiset approach. Int. J. Manag. Decis. Mak. 7(2/3), 166–179 (2006) 21. Boychenko, V.S., Petrovsky, A.B., Pronichkin, S.V., Sternin, M.Yu., Shepelev, G.I.: Granty v nauke: nakoplenniy potentsial i perspektivy razvitiya (Grants in science: accumulated potential and development prospects) Ed. by Petrovsky A.B. Poly Print Service, Moscow (2014) (in Russian) 22. Petrovsky, A.B.: Ekspertiza nauchnykh proyektov kak kollektivniy mnogokriterial’niy vybor (Expertise of research projects as collective multicriteria choice). Vestnik RFFI (Bulletin of RFBR) 3(99), 34–43 (2018). (in Russian) 23. Petrovsky, A.B., Tikhonov, I.P.: Fundamental’nye issledovaniya, oriyentirovannye na prakticheskiy rezul’tat: podkhody k otsenke effektivnosti (Basic research focused on practical results: approaches to evaluate efficiency). Vestnik Rossiyskoy akademii nauk (Bulletin of the Russian Academy of Sciences) 79(11), 1006–1011 (2009). (in Russian) 24. Petrovsky, A.B., Tikhonov, I.P., Balyshev, A.V., Komarova, N.A.: Postroenie soglasovannykh gruppovykh pravil dlya konkursnogo otbora nauchnykh proektov (Construction of consistent group rules for the competitive selection of scientific projects). In: Sistemniy analiz i informatsionnye tekhnologii: Trudy tret’ey mezhdunarodnoy konferentsii (System Analysis and Information Technologies: Proceedings of the Third International Conference), pp. 337–348. Poly Print Service, Moscow (2009) (in Russian) 25. Petrovsky, A., Boychenko, V., Zaboleeva-Zotova, A., Shitova. T.: Mnogokriterial’nie metody konkursnogo otbora proyektov v nauchnom fonde (Multicriteria methods of competitive selection of projects in a scientific foundation). Int. J. Inf. Technol. Knowl. 9(1), 59–71 (2015) (in Russian)

Chapter 8

Practical Applications of Choice Technologies

New technologies of multicriteria choice with reduction of attribute space dimensionality are applicable to solving any type of task of individual and collective choice, and especially when an integral indicator of quality is needed that aggregates primary heterogeneous characteristics of objects. This chapter includes examples of such practical tasks. These are the assessment of research results, the selection of a prospective computing complex, and the evaluation of the effectiveness of organization activities.

8.1 Assessment of Research Results

Evaluation of the possibilities for practical application of research results is one of the important functions of science management bodies. For this, it is necessary, first of all, to define the concept of "scientific resultativeness" itself. For example, we can interpret "research resultativeness" as an indicator of the direct use of scientific results in the economy sectors and other areas of activity, or identify it with a degree of implementation of target programs. Annually in the Russian Foundation for Basic Research (RFBR), many experts review reports on the supported projects. In order to assess the results of goal-oriented basic research performed in the interests of the Russian Federal agencies and departments, we proposed to build a generalized integral indicator of project results with a verbal ordinal scale using the multistage PAKS technology [1–3]. Criteria adopted in the RFBR were taken as initial indicators. Peer review of any annual report includes an assessment of the results obtained and expected at the final stage of the project upon 8 qualitative criteria. These criteria are as follows: K1 Implementation degree of the declared tasks, K2 Scientific level of obtained results, K3 Patentability of obtained results, K4 Prospects for using obtained results, K5 Results expected at the final stage of project, K6 Solution of the


declared tasks at the final stage of project, K7 Difficulties of project performance, K8 Interaction with potential users of project results. The ordinal or nominal scale Xi of each criterion Ki, i = 1, . . . , 8, has two or three grades of quality with detailed verbal formulations. For example, the implementation degree of the declared tasks is estimated upon the criterion K1 as x1^0 —tasks are solved completely, x1^1 —tasks are solved partially, x1^2 —tasks are not solved. Solution of the declared tasks at the final stage of the project is estimated upon the criterion K6 as x6^0 —is real or x6^1 —is not real. The total dimensionality of the original attribute space X1 × … × X8 is equal to 1296. It is obvious that the direct classification of such a number of eight-dimensional estimate tuples requires significant efforts of the decision maker. To facilitate multicriteria selection and interpretation of project assessment, the eight initial criteria have been combined into a single top-level indicator R Project resultativeness with a scale Z that has five ordered verbal gradations: z^0 is superior, z^1 is high, z^2 is middle, z^3 is low, z^4 is unsatisfactory. We considered the construction of a generalized integral indicator of scientific project resultativeness as a task of multicriteria classification of objects in a high-dimensional attribute space, which was reduced with the HISCRA method for hierarchical aggregation of indicators. Combinations of project estimates upon criteria acted as multi-attribute objects, and aggregated indicators played the role of decision classes. We built two schemes for aggregating estimates upon the initial criteria into estimates upon intermediate composite indicators, which were combined into an integrated indicator of the top hierarchical level. The decision maker had the opportunity to compare the integral indicators of resultativeness built in different ways. The first hierarchical system for aggregating indicators consists of three composite indicators: L1 Level of obtained results, L2 Level of expected results at the final stage of a project, L3 Possibilities of using results in programs of the Russian Federal agencies and departments, and a single integral indicator R1 Project resultativeness (Fig. 8.1). The composite indicator L1 combines scores upon the criteria K1, K2, K3; the composite indicator L2 combines scores upon the criteria K5, K6, K7; the composite indicator L3 combines scores upon the criteria K4, K8. The composite indicators Lj, j = 1, 2, 3, have the same scale Yj = {yj^0, yj^1, yj^2} with three grades: yj^0 is high, yj^1 is middle, yj^2 is low, determined by the content of the appropriate indicators. Classified objects of the first level are collections of tuples (x1^e1, x2^e2, x3^e3), (x5^e5, x6^e6, x7^e7), (x4^e4, x8^e8) in the space X1 × . . . × X8 of scales of criteria K1–K8. Collections of estimate grades (y1^f1, y2^f2, y3^f3) for the composite indicators L1, L2, L3 in the space Y1 × Y2 × Y3 act as classified objects of the next hierarchical level. Then these objects were combined into grades of the scale Z1 = {z1^0, z1^1, z1^2, z1^3, z1^4} of the single integral indicator R1 Project resultativeness, where z1^0 is superior, z1^1 is high, z1^2 is middle, z1^3 is low, z1^4 is unsatisfactory. We formed the scales of the composite indicators using two methods: ORCLASS and tuple stratification.
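A small sketch (an illustration under the notation above) shows why the aggregation helps: it computes the size of the original attribute space X1 × … × X8 and of the much smaller tuple spaces that have to be classified in the first scheme of Fig. 8.1.

```python
from math import prod
from itertools import product

# Numbers of scale grades of the initial criteria K1..K8 (see the text above).
grades = {"K1": 3, "K2": 3, "K3": 2, "K4": 3, "K5": 2, "K6": 2, "K7": 2, "K8": 3}
print(prod(grades.values()))  # 1296 eight-dimensional estimate tuples

# First aggregation scheme: only small tuple spaces have to be classified.
L1 = list(product(range(3), range(3), range(2)))  # K1 x K2 x K3 -> 18 tuples
L2 = list(product(range(2), repeat=3))            # K5 x K6 x K7 -> 8 tuples
L3 = list(product(range(3), range(3)))            # K4 x K8 -> 9 tuples
R1 = list(product(range(3), repeat=3))            # Y1 x Y2 x Y3 -> 27 tuples
print(len(L1), len(L2), len(L3), len(R1))         # 18 8 9 27
```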

Fig. 8.1 The first scheme for aggregating indicators and forming scales (K1, K2, K3 → L1; K5, K6, K7 → L2; K4, K8 → L3; L1, L2, L3 → R1)

Scale grades for the indicators L1, L2, L3 were constructed with the ORCLASS method. Combinations of estimate grades for the composite indicators L1, L2, L3 were aggregated into estimate grades for the integral indicator R1 Project resultativeness with the method of tuple stratification. Gradation y1^0 of the composite indicator L1 (high level of obtained results) comprises the estimate tuples (x1^0, x2^0, x3^0), (x1^0, x2^0, x3^1), (x1^0, x2^1, x3^0), (x1^1, x2^0, x3^0); gradation y1^1 (middle level of obtained results) comprises the estimate tuples (x1^0, x2^1, x3^1), (x1^0, x2^2, x3^1), (x1^1, x2^0, x3^1), (x1^1, x2^1, x3^1), (x1^2, x2^0, x3^1), (x1^1, x2^1, x3^0), (x1^2, x2^0, x3^0), (x1^0, x2^2, x3^0), (x1^2, x2^1, x3^0), (x1^1, x2^2, x3^0); gradation y1^2 (low level of obtained results) comprises the estimate tuples (x1^1, x2^2, x3^1), (x1^2, x2^1, x3^1), (x1^2, x2^2, x3^0), (x1^2, x2^2, x3^1). Gradation y2^0 of the composite indicator L2 (high level of expected results) is the tuple of the best estimates (x5^0, x6^0, x7^0); gradation y2^1 (middle level of expected results) comprises the estimate tuples (x5^0, x6^0, x7^1), (x5^0, x6^1, x7^0), (x5^1, x6^0, x7^0), (x5^0, x6^1, x7^1), (x5^1, x6^0, x7^1), (x5^1, x6^1, x7^0); gradation y2^2 (low level of expected results) is the tuple of the worst estimates (x5^1, x6^1, x7^1). Gradation y3^0 of the composite indicator L3 (big possibilities for using results) is the tuple of the best estimates (x4^0, x8^0); gradation y3^1 (middle possibilities for using results) comprises the estimate tuples (x4^0, x8^1), (x4^1, x8^0), (x4^0, x8^2), (x4^2, x8^0), (x4^1, x8^1); gradation y3^2 (small possibilities for using results) comprises the estimate tuples (x4^1, x8^2), (x4^2, x8^1), (x4^2, x8^2). Gradation z1^0 (superior resultativeness of a project, class D5) is the tuple of the best estimates (y1^0, y2^0, y3^0); gradation z1^1 (high resultativeness, D4) comprises the estimate tuples (y1^0, y2^0, y3^1), (y1^0, y2^1, y3^0), (y1^1, y2^0, y3^0), (y1^0, y2^1, y3^1), (y1^1, y2^0, y3^1), (y1^1, y2^1, y3^0), (y1^2, y2^0, y3^0), (y1^0, y2^2, y3^0), (y1^0, y2^0, y3^2); gradation z1^2 (middle resultativeness, D3) comprises the estimate tuples (y1^1, y2^1, y3^1), (y1^1, y2^0, y3^2), (y1^1, y2^2, y3^0), (y1^0, y2^2, y3^1), (y1^2, y2^0, y3^1), (y1^2, y2^1, y3^0), (y1^0, y2^1, y3^2); gradation z1^3 (low resultativeness, D2) comprises the estimate tuples (y1^2, y2^1, y3^1), (y1^1, y2^2, y3^1), (y1^1, y2^1, y3^2), (y1^2, y2^2, y3^0), (y1^2, y2^0, y3^2), (y1^0, y2^2, y3^2), (y1^2, y2^2, y3^1), (y1^2, y2^1, y3^2), (y1^1, y2^2, y3^2); gradation z1^4 (unsatisfactory resultativeness, D1) is the tuple of the worst estimates (y1^2, y2^2, y3^2).


Fig. 8.2 The second scheme for aggregating indicators and forming scales (K1, K2, K3, K4 → L4; K5, K6, K7, K8 → L5; L4, L5 → R2)

The second hierarchical system for aggregating indicators corresponded to the questionnaire for a report review and consisted of two parts. The composite indicator L4 Assessment of obtained results included estimates upon the criteria K1, K2, K3, K4. The composite indicator L5 Assessment of expected results at the final stage of project included estimates upon the criteria K5, K6, K7, K8 (Fig. 8.2). The scales Yj = {yj^0, yj^1, yj^2, yj^3}, j = 4, 5, of the indicators L4, L5 had four grades:

yj^0 is high, yj^1 is middle, yj^2 is low, yj^3 is very low, which were the classes of the first-level decisions. Tuples of estimate grades for the composite indicators L4 and L5 were considered further as classified objects of the next level, and the grades of the scale Z2 = {z2^0, z2^1, z2^2, z2^3, z2^4} of the integral indicator R2 Project resultativeness, similar to the grades of the scale Z1, acted as the decision classes D5, D4, D3, D2, D1. Thus, during classification, real projects assessed upon the initial criteria were directly assigned to the five specified classes D5, D4, D3, D2, D1. In the first way of constructing the integral resultativeness indicator, a decision maker, forming the scales of the composite indicators L1, L2, L3, answered 16, 6 and 7 questions, respectively; forming the scale of the indicator R1 required answers to 22 questions. In the second way of constructing the integral resultativeness indicator, forming the scales of the composite indicators L4 and L5 required answers to 43 and 17 questions, respectively, and forming the scale of the indicator R2 required answers to 12 questions. The indicated numbers of questions asked of a DM are significantly smaller than when using other methods of multicriteria ordinal classification. The developed approach was applied to analyze the resultativeness of goal-oriented basic research completed in 2007 in the knowledge areas Mathematics, informatics and mechanics (48 projects), Chemistry (54 projects), and Information and telecommunication resources (21 projects) [1–5]. Each final report was evaluated by two experts upon the eight original criteria K1–K8. The integral indicator R Project resultativeness was constructed in the two ways described above.
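The sketch below illustrates tuple stratification in the spirit described above: tuples of composite grades are grouped by the sum of their grade indices. Grouping the sums {0}, {1, 2}, {3}, {4, 5}, {6} into the five classes D5–D1 is an assumption chosen to match the gradations of the scale Z1 enumerated earlier, not the book's exact procedure.

```python
from itertools import product

# Illustrative tuple stratification for the integral indicator R1: a tuple
# (f1, f2, f3) of grades of L1, L2, L3 (0 = best, 2 = worst) is assigned to a
# class by the sum of its grade indices (assumed grouping, see the note above).
stratum = {0: "D5", 1: "D4", 2: "D4", 3: "D3", 4: "D2", 5: "D2", 6: "D1"}

classes = {}
for t in product(range(3), repeat=3):
    classes.setdefault(stratum[sum(t)], []).append(t)

print({c: len(ts) for c, ts in sorted(classes.items())})
# {'D1': 1, 'D2': 9, 'D3': 7, 'D4': 9, 'D5': 1}
```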


Fig. 8.3 Multicriteria expert estimates of project resultativeness (two panels, 'Project efficiency' for Expert 1 and Expert 2: classes 1–5 of Projects 1–8 under the method combinations OC, ST, ST+OC and OC+ST)

Scale grades of the integral indicator (classes D5, D4, D3, D2, D1 of project resultativeness) were built with four different combinations of verbal decision analysis methods. Namely, we used M1 —the ORCLASS method at all levels of the hierarchy (OC); M2 —the method of tuple stratification at all levels of the hierarchy (ST); M3 —tuple stratification at the lower level of the hierarchy and ORCLASS at the upper level (ST + OC); M4 —ORCLASS at the lower level of the hierarchy and tuple stratification at the upper level (OC + ST). Figure 8.3 presents multicriteria estimates of the resultativeness of some projects given by two experts. Analysis of project resultativeness showed the following. For instance, in the area of Chemistry, 6 projects had the superior resultativeness with the first way of constructing the integral indicator, and 16 projects with the second way; 40 and 75 projects, respectively, had the high resultativeness; 59 and 13 projects had the middle resultativeness; 1 and 2 projects had the low resultativeness; 2 and 2 projects had the unsatisfactory resultativeness. Thus, the second way of aggregating the initial indicators gave a higher value of the resultativeness indicator than the first way. Totally, according to the estimates of the first and second experts respectively, the values of the integral resultativeness indicator coincided in 74 and 48% of cases (for the area of Mathematics, informatics and mechanics), in 72 and 24% of cases (for the area of Chemistry), and in 76 and 62% of cases (for the area of Information and telecommunication resources). In other cases, the values of the integral resultativeness indicator differed by no more than one gradation, which can be considered as evidence of a sufficiently high stability of the final results of assessing project resultativeness with respect to the initial data and the suggested ways to construct composite indicators at all levels of the hierarchy. The most resultative projects were determined using the ARAMIS method for group ordering of multi-attribute objects, assuming that all experts are equally competent and the importance of criteria is equal for all experts. We consider the classes D5, D4, D3, D2, D1 of project resultativeness, obtained by the combination of methods


M1, M2, M3, M4, as new attributes Nj = {nj^5, nj^4, nj^3, nj^2, nj^1}, j = 1, . . . , 4, characterizing a project. The values nj^5, nj^4, nj^3, nj^2, nj^1 of the attribute Nj correspond to the gradations z^0, z^1, z^2, z^3, z^4 of the rating scale Z of the resultativeness indicator. Then each project Oi is represented by a multiset

Ai = {kAi(n1^5) ◦ n1^5, . . . , kAi(n1^1) ◦ n1^1; . . . ; kAi(n4^5) ◦ n4^5, . . . , kAi(n4^1) ◦ n4^1}

over the set N = N1 ∪ N2 ∪ N3 ∪ N4 of attributes. Here, the multiplicity kAi(nj^gj), gj = 5, . . . , 1, of the element nj^gj in the multiset Ai shows how many times the method Mj was used to form the grade z^gj of the corresponding resultativeness class by the estimates of all experts. For instance, the projects O1, O2 (Fig. 8.3) are presented by the multisets

A1 = {1 ◦ n1^5, 0 ◦ n1^4, 1 ◦ n1^3, 0 ◦ n1^2, 0 ◦ n1^1; 1 ◦ n2^5, 1 ◦ n2^4, 0 ◦ n2^3, 0 ◦ n2^2, 0 ◦ n2^1; 1 ◦ n3^5, 0 ◦ n3^4, 1 ◦ n3^3, 0 ◦ n3^2, 0 ◦ n3^1; 1 ◦ n4^5, 1 ◦ n4^4, 0 ◦ n4^3, 0 ◦ n4^2, 0 ◦ n4^1}

A2 = {0 ◦ n1^5, 1 ◦ n1^4, 1 ◦ n1^3, 0 ◦ n1^2, 0 ◦ n1^1; 0 ◦ n2^5, 2 ◦ n2^4, 0 ◦ n2^3, 0 ◦ n2^2, 0 ◦ n2^1; 0 ◦ n3^5, 1 ◦ n3^4, 1 ◦ n3^3, 0 ◦ n3^2, 0 ◦ n3^1; 0 ◦ n4^5, 2 ◦ n4^4, 0 ◦ n4^3, 0 ◦ n4^2, 0 ◦ n4^1}

over the set N of attributes of the methods M1, M2, M3, M4 used by the two experts in assessing project resultativeness. The best O+ and worst O− projects (possibly hypothetical), with the highest and lowest estimates of resultativeness for all methods, are specified as the multisets

A+ = {2 ◦ n1^5, 0 ◦ n1^4, 0 ◦ n1^3, 0 ◦ n1^2, 0 ◦ n1^1; 2 ◦ n2^5, 0 ◦ n2^4, 0 ◦ n2^3, 0 ◦ n2^2, 0 ◦ n2^1; 2 ◦ n3^5, 0 ◦ n3^4, 0 ◦ n3^3, 0 ◦ n3^2, 0 ◦ n3^1; 2 ◦ n4^5, 0 ◦ n4^4, 0 ◦ n4^3, 0 ◦ n4^2, 0 ◦ n4^1}

A− = {0 ◦ n1^5, 0 ◦ n1^4, 0 ◦ n1^3, 0 ◦ n1^2, 2 ◦ n1^1; 0 ◦ n2^5, 0 ◦ n2^4, 0 ◦ n2^3, 0 ◦ n2^2, 2 ◦ n2^1; 0 ◦ n3^5, 0 ◦ n3^4, 0 ◦ n3^3, 0 ◦ n3^2, 2 ◦ n3^1; 0 ◦ n4^5, 0 ◦ n4^4, 0 ◦ n4^3, 0 ◦ n4^2, 2 ◦ n4^1}

Projects were considered as points of a multiset metric space and were ordered by the value of the proximity indicator li of a project Oi to the best project O+. The final ranking of projects by resultativeness, for example, in the knowledge area Mathematics, informatics and mechanics, looked like this: 23 projects formed the head part of the ranking (li = 0.333), 1 project formed the middle part (li = 0.429), and 24 projects formed the tail part (li = 0.500). Approbation of the proposed approach to evaluating the resultativeness of scientific works confirmed its effectiveness. Highly resultative projects of goal-oriented basic research were identified, which made it possible to use the obtained results more intensively in practice.
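The sketch below shows one plausible way to compute such a proximity indicator from the grade multiplicities. The Hamming-type multiset distance, normalized through the distances to O+ and O−, is an illustrative assumption; the book's exact ARAMIS formula relies on the multiset metrics presented in Chap. 9.

```python
# Sketch: proximity of a project to the best project O+, computed from the
# multiplicities of resultativeness grades assigned by the methods M1..M4.
# The metric below (a Hamming-type distance between multisets, normalized
# through the distances to O+ and O-) is an illustrative choice only.

def distance(a, b):
    keys = set(a) | set(b)
    return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in keys)

def proximity(a, best, worst):
    return distance(a, best) / (distance(a, best) + distance(a, worst))

methods = ("M1", "M2", "M3", "M4")
# Grade counts of project O1 as keys (method, grade), grade 5 = superior .. 1 = unsatisfactory.
A1 = {("M1", 5): 1, ("M1", 3): 1, ("M2", 5): 1, ("M2", 4): 1,
      ("M3", 5): 1, ("M3", 3): 1, ("M4", 5): 1, ("M4", 4): 1}
A_best = {(m, 5): 2 for m in methods}
A_worst = {(m, 1): 2 for m in methods}

print(round(proximity(A1, A_best, A_worst), 3))  # 0.333
```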


8.2 Selection of Prospective Computing Complex

At present, personal computing complexes are increasingly used in solving various fundamental and applied problems and are considered as an alternative to expensive high-performance computing clusters and supercomputers. Such complexes are quite compact, their performance is several hundred gigaflops or a few teraflops, power consumption usually does not exceed several kilowatts, and operation requirements are significantly lower than those of supercomputers. The cost of creating computing complexes is constantly decreasing due to the mass production of standard components and growing competition among manufacturers. All this makes personal complexes an effective tool in tasks that require processing big data. Modern and relatively inexpensive microprocessors, network technologies and peripheral devices allow a user to build computing complexes of various configurations, which meet his/her requirements on performance, cost, power consumption, dimensions, weight, and other parameters. To provide the necessary characteristics and compliance of a complex with the changing needs of a user, the complex configuration can be flexibly changed and extended by adding new computing modules. Therefore, a user solving an applied problem faces a difficult task of selecting the most suitable computing complex. Comparison and selection of complicated complexes and systems, in particular technical ones, is a rather difficult, ill-structured and poorly formalized task of strategic choice. This is caused by the fact that such complexes are characterized by a large number of indicators, and selection is made upon many quantitative and qualitative criteria. At the same time, as a rule, there are few considered samples of complexes, which are usually incomparable in their numerous parameters. In order to determine the most prospective computing complex, we used the multimethod PAKS-M technology for multicriteria choice in a high-dimensional attribute space [6–9]. Previously, a special analysis of the indicators characterizing computing complexes was carried out. 30 indicators were selected as the initial ones, which formed the following groups: U1 Complex productivity; U2 Cost of complex manufacturing; U3 Computational characteristics of complex: number of modules in complex; rate of exchange between modules; presence of uninterruptable power supply; presence of built-in input–output facilities; software characteristics of complex; possibility for upgrading complex hardware; possibility for upgrading complex software; U4 Structural characteristics of complex: complex dimensions (height, width, depth); complex mass; protection against interference; U5 Operational characteristics of complex: energy consumption; noise level; heat generation; operating conditions (temperature, humidity); time between failures; U6 Technical characteristics of module: processor core frequency; bitness of processor core; number of streams; number of processor cores; amount of RAM supported by processor; number of processors in module; amount of RAM


of module; presence of accelerator for universal calculations; disk memory of module; presence of optical data storage unit in module. For each initial indicator, we formed a verbal rating scale with three grades Ui = {ui^0, ui^1, ui^2}, i = 1, . . . , 30. For example, Complex productivity was graded as u1^0 —high (>2000 Gflops), u1^1 —middle (2000–500 Gflops), u1^2 —low (<500 Gflops). … O1 > O2 > O3. According to the method of lexicographic ordering, the complex O1 is preferable to the complex O3, and the complex O3 is preferable to the complex O2: O1 > O3 > O2. According to the method of weighted sums of ranks, the complexes O2 and O3 differ insignificantly in the sum of ranks and are approximately equivalent, and the complex O1 is preferable to both of them: O1 > O2 ≈ O3. The generalized ordering of complexes, built by the Borda procedure, has the form O1 > O2 ≈ O3. Thus, the complex O1 is more preferable than the complexes O2 and O3. This result is explained as follows. Comparing the complex estimates upon five criteria, we see that the complex O1 has two high (zero grade on a scale) scores upon the criteria SI and OI, two middle (first grade on a scale) scores upon the criteria CP and CM, as well as one low (second grade on a scale) score upon the criterion CI. The prevalence of high and middle estimates for most of the criteria puts the complex O1 in the first place by preference in comparison with the complexes O2 and O3. The complexes O2 and O3 are inferior to the complex O1 by estimates and approximately equivalent. Multicriteria estimates of the computing complexes O1, O2, O3, presented as element multiplicities of the multisets Ai (verbal estimates of three experts upon the three criteria CP, CM, GI), and the results of complex comparison by different methods according to the second scheme for aggregation of indicators are given in Table 8.4. As follows from Table 8.4, according to the ARAMIS method, the complex O1 is preferable to the complex O2, and the complex O2 is preferable to the complex O3: O1 > O2 > O3.

Table 8.3 Complex estimates presented as multisets and results of complex comparison (first scheme for aggregation of indicators)

Oi | u1^0 u1^1 u1^2 | u2^0 u2^1 u2^2 | v3^0 v3^1 v3^2 | v4^0 v4^1 v4^2 | v5^0 v5^1 v5^2 | li | HS MS LS | Weighted sum of ranks
O1 | 0 3 0 | 0 3 0 | 0 0 3 | 3 0 0 | 3 0 0 | 0.43 | 6 6 3 | 33
O2 | 0 0 3 | 3 0 0 | 1 2 0 | 0 0 3 | 0 3 0 | 0.55 | 4 5 6 | 28
O3 | 3 0 0 | 0 0 3 | 3 0 0 | 0 0 3 | 0 0 3 | 0.60 | 6 0 9 | 27
Table 8.4 Complex estimates presented as multisets and results of complex comparison (second scheme for aggregation of indicators)

Oi | u1^0 u1^1 u1^2 | u2^0 u2^1 u2^2 | w3^0 w3^1 w3^2 | li | HS MS LS | Weighted sum of ranks
O1 | 0 3 0 | 0 3 0 | 3 0 0 | 0.40 | 3 6 0 | 21
O2 | 0 0 3 | 3 0 0 | 0 1 2 | 0.60 | 3 1 5 | 16
O3 | 3 0 0 | 0 0 3 | 0 0 3 | 0.67 | 3 0 6 | 15


According to the method of lexicographic ordering, the complex O1 is preferable to the complexes O2 and O3, which are approximately equivalent: O1 > O2 ≈ O3. According to the method of weighted sums of ranks, the complex O1 is preferable to the complexes O2 and O3, which differ insignificantly in the sum of ranks and are approximately equivalent: O1 > O2 ≈ O3. The generalized ordering of complexes, built by the Borda procedure, has the form O1 > O2 > O3 and slightly differs from the ordering obtained with the first scheme for aggregation of indicators. Thus, the complex O1 is preferable to the complex O2, and the complex O2 is preferable to the complex O3. Comparison of complexes by the integral index R3 CC, according to the third scheme for aggregation of criteria, showed that O1 > O2 ≈ O3. This result agrees with the result of comparing complexes by five criteria and is explained as follows. According to the criterion GI, the complex O1 has the high score because it has the best structural and operational characteristics, despite low computational characteristics. The complexes O2 and O3 were noticeably inferior to the complex O1 in these parameters. Comparison of complexes by the integral index R4 CC, according to the fourth scheme for aggregation of criteria, showed that O1 > O2 > O3. This result, in general, agrees with the result of comparing complexes by three criteria, but does not give a clear, understandable explanation. Comparison of complexes by the integral index R5 CC, according to the fifth scheme for aggregation of criteria, showed that O1 > O2 ≈ O3, which is not quite consistent with the comparison of complexes by five criteria. This result is due to the fact that the intermediate criterion CF has equalized the complexes O2 and O3. In addition, the criteria CF and MC are more complicated for explanation of the obtained result. Comparison of complexes by the integral index R6 CC, according to the sixth scheme for aggregation of criteria, showed that O1 > O2 ≈ O3. This result agrees with the result of comparing complexes by five criteria and with the third scheme for aggregation of criteria. So, the results of complex choice by a single index, built according to the third and sixth schemes, agree with each other and with the complex estimates upon five and three criteria. On the contrary, the results obtained according to the fourth and fifth schemes differ both among themselves and from the results of evaluating complexes upon five and three criteria. In general, for all schemes for aggregation of indicators, we have qualitatively the same result, namely: the computing complex O1 is the most prospective. We obtain a similar result in another way, using the Borda procedure, if we construct a new generalized ranking of complexes, which combines the six final rankings according to the six aggregation schemes. The described PAKS-M technology is included in the technique for modeling, design, diagnostics and testing of high-performance computing systems and radio-electronic systems for various purposes. This technique is used at the Joint Stock Company "M.A. Kartsev Scientific Research Institute of Computing Complexes".
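As a check on the data used in the comparisons above, the sketch below (an illustration only) recomputes the numbers of high, middle and low scores of each complex from the grade multiplicities of Table 8.4 given by the three experts upon the criteria CP, CM and GI.

```python
# Recomputing the HS/MS/LS counts of Table 8.4 from the grade multiplicities.
# Each row lists (high, middle, low) counts for the criteria CP, CM, GI.
complexes = {
    "O1": [(0, 3, 0), (0, 3, 0), (3, 0, 0)],
    "O2": [(0, 0, 3), (3, 0, 0), (0, 1, 2)],
    "O3": [(3, 0, 0), (0, 0, 3), (0, 0, 3)],
}

for name, rows in complexes.items():
    hs, ms, ls = (sum(col) for col in zip(*rows))
    print(name, hs, ms, ls)
# O1 3 6 0, O2 3 1 5, O3 3 0 6 -- matching the HS, MS, LS columns of Table 8.4
```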


8.3 Evaluation of Organization Activity Effectiveness

The effectiveness of an organization is a poorly formalized concept, which is characterized by many quantitative and qualitative factors of various natures. Examination and evaluation of an organization, as a rule, includes an analysis of documents reflecting the production, economic and financial aspects of its activities. The required information usually consists of collections of many indicators, which are often combined into one or several numerical generalized indices of efficiency, calculated according to some formulas without any justification. However, the results of evaluating organization activities using numerical indicators require additional explanations, for which highly qualified specialists are involved in an expertise. For expert assessment and analysis of the effectiveness of scientific organizations that perform basic research, we proposed a methodological approach based on the methods and technologies of group verbal decision analysis [10]. For this, together with experts of the Expert and Analytical Center of the Ministry of Education and Science of the Russian Federation, we developed 11 quality criteria. Each criterion had an ordinal or nominal rating scale with detailed verbal formulations of gradations, which specify various aspects of the research, economic and financial activities of a scientific organization. The group 'Assessment of the scientific activity of organization' consists of five criteria characterizing the scientific results, the composition of scientific personnel, and the state of scientific equipment. T11 Level of scientific results: t11^1 —higher than foreign achievements; t11^2 —at the level of foreign achievements; t11^3 —lower than foreign achievements. T12 Recognition of scientific results: t12^1 —a scientific discovery or international prize was received; t12^2 —a state prize was received; the scientific advisor is marked with a state or departmental award; t12^3 —research results are not recognized by the scientific community. T13 Qualification of scientific personnel: t13^1 —more than 60% of researchers have academic degrees, while doctors of science are at least 30% of all researchers with academic degrees; t13^2 —40–60% of researchers have academic degrees, while doctors of science are at least 30% of all researchers with academic degrees; t13^3 —less than 40% of researchers have academic degrees, while doctors of science are less than 30% of all researchers with academic degrees. T14 Age of researchers: t14^1 —the average age of researchers is less than 35 years; t14^2 —the average age of researchers is from 35 to 45 years; t14^3 —the average age of researchers is more than 45 years. T15 Upgrade rate of scientific equipment: t15^1 —the average age of the main scientific equipment is less than 5 years;


t15^2 —the average age of the main scientific equipment is from 5 to 10 years; t15^3 —the average age of the main scientific equipment is more than 10 years. The group 'Assessment of the financial and economic activities of organization' includes six criteria reflecting the balance sheet, which is compiled according to the official methodology for calculating the effectiveness of financial and economic activities, and the plan of organization development. T21 Reliability of the organization balance sheet associated with property; balance sheet compliance with the statutory and constituent documents: t21^1 —all property of the organization (land plots, buildings, equipment) has the state registration (or a lease agreement, if leased), is not contested and is not being contested in court; there are no decisions of the executive and judicial authorities on a transfer or division of property; inventory, audits, and inspections by institutions of the state financial control and state property confirm the compliance of title documents with the statutory and constituent documents; t21^2 —all property of the organization at a cost (land plots, buildings, equipment) has the state registration (or a lease agreement, if leased), is not contested and is not being contested in court; there are no decisions of the executive and judicial authorities on a transfer or division of property; inventory, audits, and inspections by institutions of the state financial control and state property confirm the compliance of title documents with the statutory and constituent documents; all changes are indicated in the financial reports; t21^3 —all property of the organization or a significant part of the property does not have the state registration, is contested or is being contested in court; there are decisions of the executive or judicial authorities on a transfer or division of property; not all property belonging to the organization is included in the financial reports. T22 General indicators of balance sheet: t22^1 —there is a tendency to increasing organization assets and to decreasing receivables and payables; the coefficient of the cash assets share in proceeds is equal to 1; borrowed capital in the value of organization property sources is no more than 10% (only for commercial enterprises); t22^2 —indicators of the value of organization property, receivables and payables are stable; the coefficient of cash assets in proceeds is not less than 0.9; borrowed capital in the value of organization property sources is no more than 25% (only for commercial enterprises); t22^3 —there is a tendency to decreasing organization assets and to increasing receivables and payables; the coefficient of the cash assets share in proceeds is less than 0.9; borrowed capital in the value of organization property sources is more than 25% (only for commercial enterprises). T23 Solvency and financial stability: t23^1 —the organization is solvent and financially stable; t23^2 —the organization is insolvent and financially unstable. T24 Efficiency of using current capital (business activity), profitability and financial results (profitableness): t24^1 —current capital is used effectively; t24^2 —current capital is used ineffectively.


T25 Efficiency of using non-current capital, investment activity of organization: t25^1 —non-current capital is used effectively, the investment activity of the organization is high; t25^2 —non-current capital is used ineffectively, the investment activity of the organization is low. T26 Quality of the organization business plan (development plan): t26^1 —the business plan includes all confirmed documents justifying improvement of the financial condition of the organization; t26^2 —the business plan is not confirmed by documents justifying improvement of the financial condition of the organization; t26^3 —a business plan has not been developed. Using the HISCRA method for reducing the dimensionality of attribute space, we constructed integral indicators of effectiveness. These are E0 Complex indicator of effectiveness of organization activities, E1 Indicator of effectiveness of the research activity of organization, and E2 Indicator of effectiveness of the financial and economic activities of organization. Verbal scale gradations of these indicators are not shown here for reasons of information confidentiality. The integral indicators present the appropriate characteristics of the activities of a scientific organization in an aggregated form. In 2008, the proposed approach was used in the Expert and Analytical Center of the Ministry of Education and Science of the Russian Federation in the technique for monitoring the activities of federal state unitary enterprises that perform basic research. The above criteria were included in a special questionnaire of expert review to assess the research, economic and financial activities of an organization. Several experts evaluated each organization. All experts were considered equally competent, and the importance of criteria was equal for all experts. Expert estimates of organizations, given by multisets of the form (7.2) over the set X = T11 ∪ . . . ∪ T15 ∪ T21 ∪ . . . ∪ T26 of grades of criteria rating scales, were processed using the PAKS technology for multicriteria choice and the ARAMIS method for group ordering of objects. Organizations were ranked by the integral efficiency indicators E0, E1, E2, and by the indicator of proximity to the hypothetically most effective organization. Scientific organizations most effective both in general and in individual kinds of activity were identified. These data became a basis for preparing reasonable recommendations on the provision of a loan or another form of support to a state organization. The results of the technique approbation confirmed the working capacity of the proposed approach. Another demanded practical task is the formation of ratings of companies doing business in various fields of activity. In 2000, we conducted an expert survey to build a rating of Russian companies that operate in the information and communication technologies sector [11, 12]. Initially, after processing information obtained from the media, the Internet, reference materials and other resources, about 150 candidate companies were chosen, of which experts selected 50 companies for evaluation. It is known that it is not always possible to obtain exact and reliable reports on company activities. Therefore, the following special criteria for evaluating a company have been developed: H1 Level of business activity, H2 Volume of sales, H3 Volume


of profit from product sales, H4 Number of completed projects, H5 Personnel qualifications, H6 Number of company employees, and the like. Both quantitative and qualitative scales of criteria were used. For the convenience of evaluating and comparing companies, the numerical scales were transformed into verbal scales with a small number of gradations. So, for example, the scale of the criterion H4 Number of completed projects looked like this: h4^1 —very high (more than 100); h4^2 —high (from 50 to 100); h4^3 —middle (from 10 to 50); h4^4 —low (less than 10). Scales of the other criteria had similar estimate grades. In order to improve the reliability of survey results, a representative of each company acted as an expert who evaluated all the companies under consideration, including his/her own company. Estimates of different experts differed considerably from each other and were even contradictory. The obtained collective assessments of all companies were described by multisets of the form (7.2) over the set X = H1 ∪ . . . ∪ H6 of grades of criteria rating scales. Expert judgments were processed by the ARAMIS method, considering all experts to be equally competent and the criteria importance to be equal for all experts. As a result, 30 leading high-tech companies in the information and communication sector were identified, and ratings of the 10 leading software developers and the 10 most dynamically developing companies were compiled [11]. The expertise technique was approved by the reputable international consulting firm KPMG.
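A minimal sketch of such a transformation of a numerical indicator into verbal grades is given below for the criterion H4; the treatment of the boundary values 10, 50 and 100 is an assumption, since the verbal scale above leaves them ambiguous.

```python
# Sketch: mapping the number of completed projects to the verbal scale of H4.
def h4_grade(completed_projects: int) -> str:
    if completed_projects > 100:
        return "h4^1 (very high)"
    if completed_projects >= 50:
        return "h4^2 (high)"
    if completed_projects >= 10:
        return "h4^3 (middle)"
    return "h4^4 (low)"

print(h4_grade(72))  # h4^2 (high)
```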

References 1. Boychenko, V.S., Petrovsky, A.B., Pronichkin, S.V., Sternin, M.Y., Shepelev, G.I.: Granty v nauke: nakoplenniy potentsial i perspektivy razvitiya (Grants in science: accumulated potential and development prospects) Ed. by Petrovsky A.B. Moscow: Poly Print Service (2014) (in Russian) 2. Petrovsky, A.B., Roizenzon, G.V., Tikhonov, I.P., Balyshev, A.V.: Mnogokriterial’naya otsenka rezul’tativnosti nauchnykh proektov (Multicriteria assessment of the effectiveness of scientific projects). In: Sistemniy analiz i informatsionnye tekhnologii: Trudy tret’ey mezhdunarodnoy konferentsii (System analysis and information technologies: Proceedings of the third international conference), pp. 329–336. Poly Print Service, Moscow (2009) (in Russian) 3. Petrovsky, A.B., Tikhonov, I.P.: Fundamental’nye issledovaniya, oriyentirovannye na prakticheskiy rezul’tat: podkhody k otsenke effektivnosti (Fundamental research focused on practical results: approaches to evaluate efficiency). Vestnik Rossiyskoy akademii nauk (Bulletin of the Russian Academy of Sciences) 79(11), 1006–1011 (2009). (in Russian) 4. Petrovsky, A.B., Roizenzon, G.V.: Mnogokriterial’niy vybor s umen’sheniem razmernosti prostranstva priznakov: mnogoetapnaya tekhnologiya PAKS (Multiple criteria choice with reducing dimension of attribute space: multi-stage technology PAKS). Iskusstvenniy intellekt i prinyatie resheniy (Artificial Intelligence and Decision Making) 4, 88–103 (2012). (in Russian) 5. Petrovsky, A.B., Royzenson, G.V.: Multi-stage technique ‘PAKS’ for multiple criteria decision aiding. Int. J. Inf. Technol. Decis. Mak. 12(5), 1055–1071 (2013)


6. Lobanov, V.N., Petrovsky, A.B.: Vybor vychislitel’nogo klastera, osnovanniy na agregirovanii mnogikh kriteriev (Choice of computing cluster based on the aggregation of many criteria). Voprosy radioelektroniki, seriya “Elektronnaya vychislitel’naya tekhnika” (Issues of Radio Electronics, Series “Electronic computers”), 2, 39–54 (in Russian) (2013) 7. Petrovsky, A.B., Lobanov, V.N.: Mnogokriterial’niy vybor slozhnoy tekhnicheskoy sistemy po agregirovannym pokazatelyam (Multicriteria choice of a complex technical system based on aggregated indicators). Vestnik Rostovskogo gosudarstvennogo universiteta putey soobshcheniya (Bulletin of the Rostov State Transport University) 3, 79–85 (2013). (in Russian) 8. Petrovsky, A.B., Lobanov, V.N.: Selection of complex system in the reduced multiple criteria space. World Appl. Sci. J. 29(10), 1315–1319 (2014) 9. Petrovsky, A.B., Lobanov, V.N.: Multi-criteria choice in the attribute space of large dimension: Multi-method technology PAKS-M. Sci. Tech. Inf. Process. 42(5), 76–86 (2015) 10. Petrovsky, A.B., Roizenzon, G.V., Tikhonov, I.P., Balyshev, A.V., Yakovlev, E.N.: Mnogokriterial’naya ekspertnaya otsenka i analiz effektivnosti deyatel’nosti nauchnykh organizatsiy (Multicriteria expert assessment and analysis of the effectiveness of scientific organizations). In: Sistemniy analiz i informatsionnye tekhnologii: Trudy chetvertoy mezhdunarodnoy konferentsii (System analysis and information technologies: Proceedings of the fourth international conference), vol. 1, pp. 155–159. Publishing house of the Chelyabinsk State University, Chelyabinsk (in Russian) (2011) 11. Kto v Rossii samiy intellektual’niy? Reyting vedushchikh rossiyskikh razrabotchikov vysokikh tekhnologiy (Who is the most intelligent in Russia? Rating of the leading Russian developers of high technologies). Kompaniya (Company) 47(143), 38–39 (in Russian) (2000) 12. Petrovsky, A.B.: Multiple criteria ranking enterprises based on inconsistent estimations. Information systems technology and its applications. Lecture notes in Informatics, vol. 84, pp. 143–151. Gesellschaft für Informatik, Bonn (2006)

Chapter 9

Mathematical Tools

This chapter is intended for inquisitive readers who wish to study in more depth the mathematical tools underlying group verbal decision analysis. A convenient mathematical model for representing objects that are described by many numerical and verbal attributes is a multiset, or a set with repetitions. This chapter gives a brief exposition of multiset theory: we discuss the concept of a multiset, define operations on multisets, and introduce families of sets and multisets, a set measure and a multiset measure, and metric spaces of multisets.

9.1 Concept of Multiset

Let us present the basic concepts of multiset theory, following the works [1–7]. The concept of a set is considered the basis of modern mathematics. A set is a collection of certain entities called elements. The elements that constitute a set can be arbitrary in nature: points, numbers, symbols, objects, images, figures, words, terms, features and many others. In classical set theory, it is implicitly postulated that the elements of a set are distinct and differ from each other. We denote elements of sets by lowercase letters a, b, …; sets by capital letters A, B, …; families of sets, whose elements are sets, by capital block letters A, B, … Sets can be discrete and continuous, finite and infinite (countable or uncountable). A finite set is specified by some rule for its formation or simply by a list of its elements, which are written in arbitrary order and separated by commas. Examples of sets:

K = {a, b, c}, L = {b, d, e, g}, M = {a, c, e, g, d, b}.

If we give up the condition of distinguishability of set elements and admit the possibility of the multiple presence of elements (the existence of several elements of the same kind, or identical exemplars, copies, versions of an element), we get a multiset. The number of occurrences of an element in a multiset, called the multiplicity of the element, is an essential feature that makes a multiset a qualitatively new mathematical concept. We denote elements of multisets by lowercase letters a, b, …; multisets by bold capital letters A, B, …; families of multisets, whose elements are multisets, by bold capital block letters A, B, … The order of elements in a multiset, as in a set, is insignificant. Examples of multisets:

K = {a, b, a, a, c, c}, L = {b, d, b, d, b, d, b, e, g, e}, M = {a, a, a, c, c, g, e, e, d, d, d, b, b, b, b, b}.

The idea of the repeatability or multiplicity of mathematical objects goes back to the works of Marius Nizolius (1498–1576), who used the notion of 'multitudo' [8]; of K. Weierstrass (1815–1897), R. Dedekind (1831–1916) and G. Cantor (1845–1918) on the theory of real numbers [9, 10]; and of G. Boole (1815–1864) and C. S. Peirce (1839–1914) on the algebra of logic [11, 12]. Thus, algebraic equations can have several identical roots, and one and the same point in the range of a function can be the image of several points from its domain. Any real number written in decimal notation can be represented as a sum of terms, each of which consists of several repeating elements. For example, the number π = 3.14… is the sum of three elements 1 = 10^0, one element 1/10 = 10^(−1), four elements 1/100 = 10^(−2), and so on.

D. Knuth was, apparently, the first to point out that a multiset must be considered as an independent mathematical object, distinct from a set: "Although multisets appear frequently in mathematics, they often must be treated rather clumsily because there is currently no standard way to treat sets with repeated elements. Several mathematicians have voiced their belief that the lack of adequate terminology and notation for this common concept has been a definite handicap to the development of mathematics… Finally it became clear that such an important concept deserves a name of its own, and the word 'multiset' was coined by de Bruijn" [2, P. 694].

One of the key notions of set theory, the membership of an element x in a set A (denoted x ∈ A), is specified by the characteristic function of the set χ_A: U → {0, 1}, defined on some universal or ground set U and taking the value χ_A(x) = 1 if x ∈ A and χ_A(x) = 0 if x ∉ A. Unlike in a set, each element may occur in a multiset more than once. In multiset theory, the membership of x in a multiset A with multiplicity k (denoted x ∈_k A) is specified by the multiplicity or counting function of the multiset k_A: X → Z_+ = {0, 1, 2, 3, …}, defined on the set X and taking the value k_A(x) = k in the set Z_+ of nonnegative integers if there are k exemplars of the element x in A, and k_A(x) = 0 if there is no exemplar of the element x in A.

If all multisets of a family A consist of elements of the same set X, then the set X is called a generic set or domain for the family A. Any non-empty set, including the ground set U, can act as a generic set X. Regardless of the finiteness of the domain X, the multisets generated by X can be both finite and infinite due to the countability of Z_+. So, a multiset A generated by an ordinary (crisp) set X = {x_1, x_2, …}, all elements x_i of which are different, is defined as a collection of groups of identical elements and is written in the form

A = {k_A(x) ◦ x | x ∈ X, k_A(x) ∈ Z_+}.   (9.1)

Here the symbol ◦ marks the multiplicity of occurrences of the element x ∈ X in the multiset A. A multiset generalizes the concept of a set and becomes a set for k_A(x) = χ_A(x). The group k_A(x) ◦ x = x^{k_A(x)}, combining k identical elements x, will be called a component of the multiset A. Thus, a multiset can be considered as a set of different elements, each of which is repeated with a certain multiplicity k, and as a set of different components, each of which includes k indistinguishable exemplars of the same element. The above multisets can be written as follows:

K = {3 ◦ a, 1 ◦ b, 2 ◦ c}, L = {4 ◦ b, 3 ◦ d, 2 ◦ e, 1 ◦ g}, M = {3 ◦ a, 2 ◦ c, 1 ◦ g, 2 ◦ e, 3 ◦ d, 5 ◦ b};
K = {a^3, b^1, c^2}, L = {b^4, d^3, e^2, g^1}, M = {a^3, c^2, g^1, e^2, d^3, b^5}.

The single-element set {x} = {1 ◦ x} = {x^1} is called a simple set or singleton; the single-component multiset {k ◦ x} = {x^k} is called a simple multiset, multiple singleton or multiton. The null-element set {0 ◦ x} = {x^0} we shall call a zeron. The introduction of the concept of element multiplicity makes it possible to abandon the concept of the non-belonging of an element to a set (x ∉ A) or to a multiset (x ∉ A), which is based on the postulates of two-valued logic. Instead, it is convenient to assume that an element belongs to a set or multiset with null multiplicity and write x ∈_0 A, x^0 ∈ A. The notation x ∈ A, x ∈_1 A or x^1 ∈ A indicates that there is only one element x in a set A. The notation x ∈_k A or x^k ∈ A indicates that a multiset A contains k ≥ 0 copies of the element x. Thus, elements with different multiplicities are "equalized in rights". This circumstance allows us to say that any set and any multiset always contains all elements of the universal set U with various multiplicities, including null multiplicity. So, in many cases, it is advisable to use the following unified form of notation for a set and a multiset:

A = {χ_A(x) ◦ x | χ_A: U → {0, 1}, ∀x ∈ U},   (9.2)

A = {k_A(x) ◦ x | k_A: X → {0, 1, 2, 3, …}, ∀x ∈ X ⊆ U}.   (9.3)

Then the universal set U itself is a collection of elements in which every element has unit multiplicity: U = {k_U(x) ◦ x | k_U(x) ≡ 1, ∀x ∈ U} = {1 ◦ x | ∀x ∈ U}. The empty set or empty multiset ∅ is a collection of elements in which every element has null multiplicity: ∅ = {k_∅(x) ◦ x | k_∅(x) ≡ 0, ∀x ∈ U} = {0 ◦ x | ∀x ∈ U}.

The above sets and multisets over the domain U = {a, b, c, d, e, f, g} are written in the unified notation form as follows:

U = {1 ◦ a, 1 ◦ b, 1 ◦ c, 1 ◦ d, 1 ◦ e, 1 ◦ f, 1 ◦ g},
∅ = {0 ◦ a, 0 ◦ b, 0 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g},
{a} = {1 ◦ a, 0 ◦ b, 0 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g},
{2 ◦ b} = {0 ◦ a, 2 ◦ b, 0 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g},
K = {1 ◦ a, 1 ◦ b, 1 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g},
L = {0 ◦ a, 1 ◦ b, 0 ◦ c, 1 ◦ d, 1 ◦ e, 0 ◦ f, 1 ◦ g},
M = {1 ◦ a, 1 ◦ b, 1 ◦ c, 1 ◦ d, 1 ◦ e, 0 ◦ f, 1 ◦ g},
K = {3 ◦ a, 1 ◦ b, 2 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g},
L = {0 ◦ a, 4 ◦ b, 0 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g},
M = {3 ◦ a, 5 ◦ b, 2 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g}.

The unified notation forms (9.2) and (9.3), despite their seeming cumbersomeness, significantly simplify the comparison of sets and multisets and the operations on them, since the operations can be performed on all columns simultaneously, in parallel.

The cardinality of a multiset A over a finite set X = {x_1, …, x_n} is defined as the total number of all its elements, equal to the sum of their multiplicities:

card A = |A| = Σ_{x_i∈X} k_A(x_i) = Σ_{i=1}^{n} k_A(x_i) = Σ_{i∈I_A} k_A(x_i) = m.   (9.4)

The dimension of a multiset A over a finite set X = {x_1, …, x_n} is defined as the total number of all its distinct elements, equal to the sum of their unit multiplicities:

dim A = /A/ = Σ_{x_i∈X} χ_A(x_i) = Σ_{i=1}^{n} χ_A(x_i) = Σ_{i∈I_A} χ_A(x_i) = n.   (9.5)

Here I_A is the set of indices of the elements x_i ∈_k A. The dimension of a multiset does not exceed its cardinality: /A/ ≤ |A|.

The support set, carrier or root of a multiset A is the set consisting of single copies of all distinct elements of A: Supp A = {x ∈ A | χ_{Supp A}(x) = min[k_A(x), 1]}. The support set of a multiset A is a subset of the domain X (Supp A ⊆ X). The cardinality of the support set is equal to the dimension of the multiset (|Supp A| = /A/) and does not exceed the cardinality of the domain (|Supp A| ≤ |X|). Different multisets can have the same or different support sets.

The maximum value of the multiplicity function of a multiset A is called its height or peak value: alt A = max_{x∈X} k_A(x) = k_A(x*) = k*_A, and the element x*_A ∈_{k*} A, for which the multiplicity function is maximal, is called a peak or mode of the multiset A. A multiset may contain several peaks x_i*, x_j*, for which k_A(x_i*) = k_A(x_j*) = k*_A.

For instance, the values of the multiplicity functions of the multisets K, L, M over the set U = {a, b, c, d, e, f, g} are equal, respectively, to:

k_K(a) = 3, k_K(b) = 1, k_K(c) = 2, k_K(d) = 0, k_K(e) = 0, k_K(f) = 0, k_K(g) = 0;
k_L(a) = 0, k_L(b) = 4, k_L(c) = 0, k_L(d) = 3, k_L(e) = 2, k_L(f) = 0, k_L(g) = 1;
k_M(a) = 3, k_M(b) = 5, k_M(c) = 2, k_M(d) = 3, k_M(e) = 2, k_M(f) = 0, k_M(g) = 1.

The cardinalities, dimensions, support sets, heights and peaks of the multisets K, L, M are:

|K| = 6, /K/ = 3, Supp K = {a, b, c}, alt K = 3, x*_K = a;
|L| = 10, /L/ = 4, Supp L = {b, d, e, g}, alt L = 4, x*_L = b;
|M| = 16, /M/ = 6, Supp M = {a, b, c, d, e, g}, alt M = 5, x*_M = b.

The presence of several main characteristics of multisets gives rise to a greater variety of their types and properties than is the case for sets [4, 6, 7].

Multisets A and B are called equal (A = B) when k_A(x) = k_B(x), ∀x ∈ X. Multisets A and B are unequal (A ≠ B) when k_A(x) ≠ k_B(x) for at least one x ∈ X. For equal multisets, we have |A| = |B|, /A/ = /B/, Supp A = Supp B, alt A = alt B, x*_A = x*_B. Multisets A and B are called equicardinal if |A| = |B|; equidimensional if /A/ = /B/; equivalued if they are both equicardinal and equidimensional. Equal multisets are equivalued; the converse statement, generally speaking, is not true. The equality of multisets is an equivalence relation, since it is reflexive (A = A), symmetric (A = B ⇒ B = A) and transitive (A = B, B = C ⇒ A = C).

A multiset A is said to be included in a multiset B (A ⊆ B) when k_A(x) ≤ k_B(x), ∀x ∈ X. The multiset A is then named a submultiset or multisubset of the multiset B, and the multiset B is named an overmultiset or parent multiset of the multiset A. In this case |A| ≤ |B|, /A/ ≤ /B/, Supp A ⊆ Supp B, alt A ≤ alt B, and either x*_A = x*_B or x*_A ≠ x*_B. As in the case of sets, the simultaneous fulfillment of the conditions A ⊆ B and B ⊆ A implies the equality of the multisets, A = B. If A ⊆ B but B ⊄ A, then the multiset A is named a proper submultiset of the multiset B, denoted A ⊂ B. The inclusion of multisets is a preorder relation, since it is reflexive (A ⊆ A) and transitive (A ⊆ B, B ⊆ C ⇒ A ⊆ C).

Multisets A and B are said to be similarly-named equivalent or S-equivalent if their support sets coincide, Supp A = Supp B, and there exists a one-to-one mapping f between the multiplicities of the multiset elements of the same name: k_B(x) = f(k_A(x)), x ∈ X. Multisets A and B are said to be differently-named equivalent or D-equivalent if their support sets are equivalent, Supp A ~ Supp B, or equal, and there exists a one-to-one mapping f between the multiplicities of the multiset elements of different names: k_B(x_i) = f(k_A(x_j)), x_i, x_j ∈ X. Here f is an integer-valued function with range Z_+.

S- and D-equivalent multisets are equidimensional, /B/ = /A/, and their heights are connected by alt B = f(alt A). Special cases of S-equivalence of multisets are equal multisets; shifted multisets, for which k_B(x) = k_A(x) + s; and stretched or proportional multisets, for which k_B(x) = t·k_A(x), where s ≥ 0, t ≥ 1 are integers. A special case of D-equivalence of multisets is equicomposed multisets, with equal components of different names, k_A(x_i) = k_B(x_j), x_i, x_j ∈ X. Equal multisets are equicomposed, whereas the converse is not true. One of two S-equivalent multisets is always a submultiset of the other. An analogous statement does not hold for D-equivalent multisets.

A multiset Z will be called maximal if all multisets A of a multiset family A are submultisets of the multiset Z and k_A(x) ≤ k_Z(x); it is called constant, N_[h], if k_N[h](x) = h = const, h ∈ Z_+, ∀x ∈ X. Then the empty multiset ∅ is a constant multiset N_[0] of height 0, and an ordinary set A, in particular the support set Supp A of a multiset A, is a constant multiset N_[1] of height 1. Any constant multiset N_[h] is a multiset shifted by h − 1 units, or stretched h times, with respect to its support set Supp N_[h]. The maximal multiset Z can also be considered a constant multiset N_[k] of height k = k_Z(x) = max_{A∈A} k_A(x), ∀x ∈ X. At the same time, giving a formal definition of the multiplicity function k_Z(x) can be a completely non-trivial problem.

If the multiplicity function k_A takes only the two values 0 or 1, then a multiset A becomes a set A with the characteristic function χ_A. Then the conditions of equality, inequality and inclusion of multisets coincide with the same conditions for sets. S- and D-equivalent multisets simply become equivalent sets. The empty set and the empty multiset become the same, and the maximal multiset Z transforms into the universal set U. In sets, there are no analogs of the shifted, stretched and constant multisets.
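To make the characteristics above easy to experiment with, the following minimal Python sketch (our illustration, not part of the original exposition) stores the multisets K, L, M as dictionaries of multiplicities over the domain U, in the spirit of the unified notation (9.3), and computes the cardinality (9.4), dimension (9.5), support, height and peaks.

```python
# Illustrative sketch (not from the book): multisets as dicts element -> multiplicity.
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']

K = {'a': 3, 'b': 1, 'c': 2, 'd': 0, 'e': 0, 'f': 0, 'g': 0}
L = {'a': 0, 'b': 4, 'c': 0, 'd': 3, 'e': 2, 'f': 0, 'g': 1}
M = {'a': 3, 'b': 5, 'c': 2, 'd': 3, 'e': 2, 'f': 0, 'g': 1}

def cardinality(A):   # |A| = sum of multiplicities, Eq. (9.4)
    return sum(A.values())

def dimension(A):     # /A/ = number of distinct elements, Eq. (9.5)
    return sum(1 for k in A.values() if k > 0)

def support(A):       # Supp A = elements with nonzero multiplicity
    return {x for x, k in A.items() if k > 0}

def height(A):        # alt A = maximal multiplicity
    return max(A.values())

def peaks(A):         # elements on which the multiplicity is maximal
    h = height(A)
    return {x for x, k in A.items() if k == h}

for name, A in [('K', K), ('L', L), ('M', M)]:
    print(name, cardinality(A), dimension(A), sorted(support(A)), height(A), peaks(A))
# K -> 6, 3, ['a', 'b', 'c'], 3, {'a'}
# L -> 10, 4, ['b', 'd', 'e', 'g'], 4, {'b'}
# M -> 16, 6, ['a', 'b', 'c', 'd', 'e', 'g'], 5, {'b'}
```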

9.2 Operations on Multisets

Further, unless otherwise specified, we shall assume that all the multisets under consideration are submultisets of the same maximal multiset Z generated by the domain X. Define the following operations on multisets [2–7, 13, 14].

The union of multisets A and B is the multiset consisting of the elements of both multisets, where the multiplicity of each element is equal to its maximum multiplicity in the combined multisets:

A ∪ B = {k_{A∪B}(x) ◦ x | k_{A∪B}(x) = max[k_A(x), k_B(x)], ∀x ∈ X}.

The union of the multisets of a family A = {A_i}, i ∈ I, is the multiset

⋃_{i∈I} A_i = {k_{⋃A_i}(x) ◦ x | k_{⋃A_i}(x) = max_{i} k_{A_i}(x), ∀x ∈ X}.   (9.6)

Here and below, I is a set of indices. The carrier of the union of multisets is determined by the rule:

Supp(⋃_{i∈I} A_i) = ⋃_{i∈I} (Supp A_i).

The intersection of multisets A and B is the multiset consisting of the elements of both multisets, where the multiplicity of each element is equal to its minimum multiplicity in the intersected multisets:

A ∩ B = {k_{A∩B}(x) ◦ x | k_{A∩B}(x) = min[k_A(x), k_B(x)], ∀x ∈ X}.   (9.7)

The intersection of the multisets of a family A = {A_i}, i ∈ I, is the multiset

⋂_{i∈I} A_i = {k_{⋂A_i}(x) ◦ x | k_{⋂A_i}(x) = min_{i} k_{A_i}(x), ∀x ∈ X}.

The carrier of the intersection of multisets is determined by the rule:

Supp(⋂_{i∈I} A_i) = ⋂_{i∈I} (Supp A_i).

The arithmetic sum of multisets A and B is the multiset consisting of the elements of both multisets, where the multiplicity of each element is equal to the smaller of two numbers: the sum of its multiplicities in the added multisets, or its multiplicity in the maximal multiset Z:

A + B = {k_{A+B}(x) ◦ x | k_{A+B}(x) = min[k_A(x) + k_B(x), k_Z(x)], ∀x ∈ X}.   (9.8)

The arithmetic sum of the multisets of a family A = {A_i}, i ∈ I, is the multiset

Σ_{i∈I} A_i = {k_{ΣA_i}(x) ◦ x | k_{ΣA_i}(x) = min[Σ_{i∈I} k_{A_i}(x), k_Z(x)], ∀x ∈ X}.

The carrier of the arithmetic sum of multisets is determined by the rule:

Supp(Σ_{i∈I} A_i) = ⋃_{i∈I} (Supp A_i).

The arithmetic difference of multisets A and B is the multiset consisting of the elements of both multisets, where the multiplicity of each element is equal to the larger of two numbers: the difference of its multiplicities in the subtracted multisets, or its multiplicity in the empty multiset ∅:

A − B = {k_{A−B}(x) ◦ x | k_{A−B}(x) = max[k_A(x) − k_B(x), 0], ∀x ∈ X}.   (9.9)

As for sets, the arithmetic difference of multisets is defined for only two multisets. Moreover, if A − B = ∅, then A ⊆ B. The converse is also true: if A ⊆ B, then A − B = ∅. The multiplicity of an element of the multiset difference A − B can also be written as follows: k_{A−B}(x) = k_A(x) − k_{A∩B}(x). The carrier of the arithmetic difference of multisets and the difference of the carriers of the subtracted multisets are connected by the relation: (Supp A)\(Supp B) ⊆ Supp(A − B).

The symmetric difference of multisets A and B is the multiset consisting of the elements of both multisets, where the multiplicity of each element is equal to the modulus of the difference of its multiplicities in the subtracted multisets:

A Δ B = {k_{AΔB}(x) ◦ x | k_{AΔB}(x) = |k_A(x) − k_B(x)|, ∀x ∈ X}.   (9.10)

As for sets, the symmetric difference of multisets is defined for only two multisets. The multiplicity of an element of the multiset symmetric difference A Δ B can also be written as follows: k_{AΔB}(x) = max[k_A(x), k_B(x)] − min[k_A(x), k_B(x)]. The carrier of the symmetric difference of multisets and the symmetric difference of the carriers of the subtracted multisets are connected by the relation: (Supp A) Δ (Supp B) ⊆ Supp(A Δ B).

The complement of a multiset A to the maximal multiset Z is the multiset consisting of all elements of the domain X, where the multiplicity of each element is equal to the difference of its multiplicities in the maximal multiset Z and in the complemented multiset A:

Ā = Z − A = {k_Ā(x) ◦ x | k_Ā(x) = k_Z(x) − k_A(x), ∀x ∈ X}.   (9.11)

The carriers of a multiset and of its complement can be either the same or different sets. Besides, the carrier of the multiset complement usually does not coincide with the complement of the multiset carrier; the following relation is valid: X \ Supp A ⊆ Supp Ā.

The arithmetic product of multisets A and B is the multiset consisting of the elements of both multisets, where the multiplicity of each element is equal to the smaller of two numbers: the product of its multiplicities in the multiplied multisets, or its multiplicity in the maximal multiset Z:

A • B = {k_{A•B}(x) ◦ x | k_{A•B}(x) = min[k_A(x) · k_B(x), k_Z(x)], ∀x ∈ X}.   (9.12)

The arithmetic product of the multisets of a family A = {A_i}, i ∈ I, is the multiset

∏_{i∈I} A_i = {k_{∏A_i}(x) ◦ x | k_{∏A_i}(x) = min[∏_{i∈I} k_{A_i}(x), k_Z(x)], ∀x ∈ X}.

The arithmetic n-th power of a multiset A is the multiset that is the arithmetic product of n identical multisets A:

A^n = {k_{A^n}(x) ◦ x | k_{A^n}(x) = min[(k_A(x))^n, k_Z(x)], ∀x ∈ X}.   (9.13)

The carriers of the arithmetic product of multisets and of the arithmetic n-th power of a multiset are determined by the rules:

Supp(∏_{i∈I} A_i) = ⋂_{i∈I} (Supp A_i), Supp A^n = Supp A.

The reproduction of a multiset A, or the product of a multiset A and an integer b, is the multiset consisting of all elements of the multiset A, where the multiplicity of each element is increased b times:

b • A = {k_{b•A}(x) ◦ x | k_{b•A}(x) = b · k_A(x), b ∈ Z_+, ∀x ∈ X}.   (9.14)

The carriers of a multiset A and of its reproduction b•A coincide: Supp(b • A) = Supp A. The multiset reproduction combines features of the arithmetic addition and the arithmetic multiplication of multisets. Indeed, the reproduction of a multiset can be represented as a sum of b identical multisets A: b•A = A + … + A (b times), and as a product of a constant multiset N_[b] of height b and the multiset A: b•A = N_[b] • A.

The direct product of multisets A and B is the multiset consisting of ordered pairs of elements ⟨x_i, x_j⟩, where the element x_i is an element of the first factor A, the element x_j is an element of the second factor B, and the multiplicity of each pair ⟨x_i, x_j⟩ is equal to the smaller of two numbers: the product of the multiplicities of the elements x_i and x_j in the multiplied multisets, or the product of the multiplicities of the elements x_i and x_j in the maximal multiset Z:

A × B = {k_{A×B}(⟨x_i, x_j⟩) ◦ ⟨x_i, x_j⟩ | k_{A×B}(⟨x_i, x_j⟩) = min[k_A(x_i) · k_B(x_j), k_Z(x_i) · k_Z(x_j)], x_i ∈ A, x_j ∈ B}.   (9.15)

The direct product of the multisets of a family A = {A_i}, i ∈ I, is the multiset consisting of n-element tuples ⟨x_{p1}, …, x_{pn}⟩, where the i-th element x_{pi} is an element of the i-th factor A_i, and the multiplicity of each tuple is equal to the smaller of two numbers: the product of the multiplicities of the elements x_{p1}, …, x_{pn} in the multiplied multisets, or the product of their multiplicities in the maximal multiset Z:

A_1 × … × A_n = {k_{A_1×…×A_n}(⟨x_{p1}, …, x_{pn}⟩) ◦ ⟨x_{p1}, …, x_{pn}⟩ | k_{A_1×…×A_n}(⟨x_{p1}, …, x_{pn}⟩) = min[∏_{i=1}^{n} k_{A_i}(x_{pi}), ∏_{i=1}^{n} k_Z(x_{pi})], x_{pi} ∈ A_i, i = 1, …, n}.   (9.16)

The multiplicity function of the direct product of n multisets is an n-ary function of the arguments x_{p1}, …, x_{pn}, which defines the mapping k_{A_1×…×A_n}: X × … × X → Z_+.

The direct n-th power of a multiset A is the multiset that is the direct product of n identical multisets A:

(×A)^n = {k_{(×A)^n}(⟨x_1, …, x_n⟩) ◦ ⟨x_1, …, x_n⟩ | k_{(×A)^n}(⟨x_1, …, x_n⟩) = min[∏_{i=1}^{n} k_A(x_i), ∏_{i=1}^{n} k_Z(x_i)], x_i ∈ A}.

The carriers of the direct product of multisets and of the direct n-th power of a multiset are determined by the rules:

Supp(A_1 × … × A_n) = (Supp A_1) × … × (Supp A_n), Supp(×A)^n = (Supp A)^n.
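Before turning to the worked example below, the following Python sketch (ours, not the author's code) implements the operations of this section on multiplicity functions stored as dictionaries over a common domain; kZ is the multiplicity function of the maximal multiset Z assumed in the example.

```python
# Illustrative sketch of the multiset operations of Sect. 9.2 (our code, not from the book).
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
kZ = {x: 15 for x in U}                     # maximal multiset Z of the example below

K = {'a': 3, 'b': 1, 'c': 2, 'd': 0, 'e': 0, 'f': 0, 'g': 0}
L = {'a': 0, 'b': 4, 'c': 0, 'd': 3, 'e': 2, 'f': 0, 'g': 1}

def union(A, B):         return {x: max(A[x], B[x]) for x in U}               # (9.6)
def intersection(A, B):  return {x: min(A[x], B[x]) for x in U}               # (9.7)
def arith_sum(A, B):     return {x: min(A[x] + B[x], kZ[x]) for x in U}       # (9.8)
def difference(A, B):    return {x: max(A[x] - B[x], 0) for x in U}           # (9.9)
def sym_diff(A, B):      return {x: abs(A[x] - B[x]) for x in U}              # (9.10)
def complement(A):       return {x: kZ[x] - A[x] for x in U}                  # (9.11)
def arith_product(A, B): return {x: min(A[x] * B[x], kZ[x]) for x in U}       # (9.12)
def reproduce(b, A):     return {x: b * A[x] for x in U}                      # (9.14)
def direct_product(A, B):                                                     # (9.15)
    return {(x, y): min(A[x] * B[y], kZ[x] * kZ[y])
            for x in U for y in U if A[x] * B[y] > 0}

print(union(K, L))           # {'a': 3, 'b': 4, 'c': 2, 'd': 3, 'e': 2, 'f': 0, 'g': 1}
print(arith_sum(K, L))       # coincides with the multiset M of the running example
print(direct_product(K, L))  # 12 pairs with multiplicities 12, 9, 6, 3, 4, 3, 2, 1, 8, 6, 4, 2
```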

As an example, we present the results of operations on the multisets K = {3 ◦ a, 1 ◦ b, 2 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g} and L = {0 ◦ a, 4 ◦ b, 0 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g} with the maximal multiset Z = {15 ◦ a, 15 ◦ b, 15 ◦ c, 15 ◦ d, 15 ◦ e, 15 ◦ f, 15 ◦ g} generated by the domain U = {a, b, c, d, e, f, g}:

K ∪ L = {3 ◦ a, 4 ◦ b, 2 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g}, Supp(K ∪ L) = {a, b, c, d, e, g};
K ∩ L = {0 ◦ a, 1 ◦ b, 0 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g}, Supp(K ∩ L) = {b};
K + L = {3 ◦ a, 5 ◦ b, 2 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g}, Supp(K + L) = {a, b, c, d, e, g};
K − L = {3 ◦ a, 0 ◦ b, 2 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g}, Supp(K − L) = {a, c};
L − K = {0 ◦ a, 3 ◦ b, 0 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g}, Supp(L − K) = {b, d, e, g};
K Δ L = {3 ◦ a, 3 ◦ b, 2 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g}, Supp(K Δ L) = {a, b, c, d, e, g};
K̄ = Z − K = {12 ◦ a, 14 ◦ b, 13 ◦ c, 15 ◦ d, 15 ◦ e, 15 ◦ f, 15 ◦ g}, Supp K̄ = {a, b, c, d, e, f, g};
4 • K = {12 ◦ a, 4 ◦ b, 8 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g}, Supp(4 • K) = {a, b, c};
K • L = {0 ◦ a, 4 ◦ b, 0 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g}, Supp(K • L) = {b};
K × L = {12 ◦ ⟨a, b⟩, 9 ◦ ⟨a, d⟩, 6 ◦ ⟨a, e⟩, 3 ◦ ⟨a, g⟩, 4 ◦ ⟨b, b⟩, 3 ◦ ⟨b, d⟩, 2 ◦ ⟨b, e⟩, 1 ◦ ⟨b, g⟩, 8 ◦ ⟨c, b⟩, 6 ◦ ⟨c, d⟩, 4 ◦ ⟨c, e⟩, 2 ◦ ⟨c, g⟩},
Supp(K × L) = {⟨a, b⟩, ⟨a, d⟩, ⟨a, e⟩, ⟨a, g⟩, ⟨b, b⟩, ⟨b, d⟩, ⟨b, e⟩, ⟨b, g⟩, ⟨c, b⟩, ⟨c, d⟩, ⟨c, e⟩, ⟨c, g⟩}.

Analogs of the operations on multisets (arithmetic addition, arithmetic multiplication, multiplication by a scalar) are the componentwise addition and multiplication by a scalar of vectors, a + b = (a_1 + b_1, …, a_n + b_n), h·a = (h·a_1, …, h·a_n), and the elementwise addition, multiplication, and multiplication by a scalar of matrices, A + B = ||a_ij + b_ij||_{m×n}, A · B = ||a_ij · b_ij||_{m×n}, h · A = ||h · a_ij||_{m×n}. The elementwise matrix product differs from the traditional operation of matrix multiplication [15].

Many properties of the operations on multisets are similar to those of the operations on ordinary sets, namely, the idempotency of the union and intersection, A ∪ … ∪ A = A, A ∩ … ∩ A = A; the involution (double negation) of the complement; and the identity, commutativity, associativity and distributivity of some operations. As in the case of sets, not all the operations on multisets are mutually commutative, associative and distributive. Besides, the operations of addition, multiplication, raising to an arithmetic power, and multiplication by a scalar are not defined for sets. In going from multisets to sets, the operations of multiplication and raising to an arithmetic power transform into the intersection, while the operations of addition and multiplication by a scalar become unrealizable.

For multisets, as well as for sets, there is a duality of the union and intersection operations with respect to the complement operation (analogous to de Morgan's laws), and, in addition, there is a new kind of duality of the arithmetic addition and subtraction operations:

Z − (A ∪ B) = Ā ∩ B̄, Z − (A ∩ B) = Ā ∪ B̄;
Z − (A + B) = Ā − B = B̄ − A, Z − (A − B) = Ā + B, A − B = B̄ − Ā.


Some properties of operations that sets have are absent for multisets, but new properties appear that have no analogues for sets. So, in set theory, the equalities

A ∪ Ā = U, A ∩ Ā = ∅

always hold. For multisets, in contrast to sets, the relations

A + Ā = Z, Z − Ā = A,

following from definition (9.11) of the complement of a multiset, are valid. At the same time, it can be that

A ∪ Ā = Z, A ∩ Ā = A − Ā = Ā − A = ∅,

and also

A ∪ Ā ≠ Z, A ∩ Ā ≠ ∅, A − Ā ≠ ∅, Ā − A ≠ ∅

(in these cases, the inclusions A ⊆ Ā and Ā ⊆ A are possible). Moreover, the empty multiset ∅ and the maximal multiset Z mutually complement each other, Z − ∅ = Z and Z − Z = ∅, and hence ∅ ∩ (Z − ∅) = Z ∩ (Z − Z) = ∅.

Another specific feature of the operations on the multisets of a family A = {A_i}, i ∈ I, is the possibility of forming their linear combinations with the operation of multiplication by a scalar (reproduction). These are as follows: the weighted union

⋃_{i∈I} (b_i • A_i) = {k_{⋃b_i•A_i}(x) ◦ x | k_{⋃b_i•A_i}(x) = max_{i∈I} (b_i · k_{A_i}(x)), b_i ≥ 1, ∀x ∈ X};   (9.17)

the weighted intersection

⋂_{i∈I} (b_i • A_i) = {k_{⋂b_i•A_i}(x) ◦ x | k_{⋂b_i•A_i}(x) = min_{i∈I} (b_i · k_{A_i}(x)), b_i ≥ 1, ∀x ∈ X};

the weighted arithmetic sum

Σ_{i∈I} (b_i • A_i) = {k_{Σb_i•A_i}(x) ◦ x | k_{Σb_i•A_i}(x) = min[Σ_{i∈I} b_i · k_{A_i}(x), k_Z(x)], b_i ≥ 1, ∀x ∈ X};   (9.18)

the weighted arithmetic product

∏_{i∈I} (b_i • A_i) = {k_{∏b_i•A_i}(x) ◦ x | k_{∏b_i•A_i}(x) = min[∏_{i∈I} b_i · k_{A_i}(x), k_Z(x)], b_i ≥ 1, ∀x ∈ X};

the weighted direct product

(b_1 • A_1) × … × (b_n • A_n) = {k_{(b_1•A_1)×…×(b_n•A_n)}(⟨x_{p1}, …, x_{pn}⟩) ◦ ⟨x_{p1}, …, x_{pn}⟩ | k_{(b_1•A_1)×…×(b_n•A_n)}(⟨x_{p1}, …, x_{pn}⟩) = min[∏_{i=1}^{n} b_i · k_{A_i}(x_{pi}), ∏_{i=1}^{n} k_Z(x_{pi})], x_{pi} ∈ A_i, b_i ≥ 1, i = 1, …, n}.
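The following short Python sketch (ours, under the same dictionary representation as before) shows the weighted union (9.17) and weighted arithmetic sum (9.18) for the running multisets K and L with illustrative weights.

```python
# Sketch (not from the book) of the weighted operations (9.17)-(9.18).
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
kZ = {x: 15 for x in U}

K = {'a': 3, 'b': 1, 'c': 2, 'd': 0, 'e': 0, 'f': 0, 'g': 0}
L = {'a': 0, 'b': 4, 'c': 0, 'd': 3, 'e': 2, 'f': 0, 'g': 1}

def weighted_union(family, weights):         # Eq. (9.17)
    return {x: max(b * A[x] for A, b in zip(family, weights)) for x in U}

def weighted_intersection(family, weights):
    return {x: min(b * A[x] for A, b in zip(family, weights)) for x in U}

def weighted_sum(family, weights):           # Eq. (9.18)
    return {x: min(sum(b * A[x] for A, b in zip(family, weights)), kZ[x]) for x in U}

print(weighted_union([K, L], [2, 1]))  # {'a': 6, 'b': 4, 'c': 4, 'd': 3, 'e': 2, 'f': 0, 'g': 1}
print(weighted_sum([K, L], [2, 1]))    # {'a': 6, 'b': 6, 'c': 4, 'd': 3, 'e': 2, 'f': 0, 'g': 1}
```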

For b_i = 1, the "weighted" operations transform into the corresponding "unweighted" operations on multisets. The possibility of introducing and using linear combinations of the operations significantly increases the arsenal of tools for operating with multisets.

Multisets have many "algebraic" properties which are absent in sets. Thus, any finite A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} or countable A = {k_A(x_1) ◦ x_1, k_A(x_2) ◦ x_2, …} multiset over a domain X = {x_1, x_2, …} can be specified as the sum

A = Σ_{i=1}^{n} A_i = Σ_{x_i∈X} {k_A(x_i) ◦ x_i}   (9.19)

of pairwise disjoint simple multisets, or multitons, A_i = {k_A(x_i) ◦ x_i} = {0, …, 0, k_A(x_i) ◦ x_i, 0, …}, i = 1, 2, …, A_i ∩ A_j = ∅, i ≠ j. Using definition (9.14) of the multiset reproduction and its additivity property, each multiton A_i can be represented, in turn, as the weighted sum of k_{A_i} = k_A(x_i) identical simple sets, or singletons, {x_i}:

A_i = {k_A(x_i) ◦ x_i} = k_{A_i} • {x_i} = {x_i} + … + {x_i} (k_{A_i} times).   (9.20)

The disjoint sets {x_i} themselves together constitute the domain X = {x_1, x_2, …} = ⋃_i {x_i}. So, any multiset can be considered as the weighted arithmetic sum (9.18) of pairwise disjoint single-element sets {x_i}. In going from multisets to sets and replacing the multiplicity function k_A(x) with the characteristic function χ_A(x), many of the above statements for multisets remain valid for sets, but some of them may become indefinable, change, or lose their meaning. In particular, representations (9.19) and (9.20) are unrealizable for sets.
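A tiny Python illustration (ours, using collections.Counter only for its built-in addition of counts) of representations (9.19) and (9.20): the multiset K is split into multitons and then rebuilt by repeatedly adding singletons.

```python
# Sketch (not from the book): decomposition (9.19) and singleton reconstruction (9.20).
from collections import Counter

K = Counter({'a': 3, 'b': 1, 'c': 2})

# Eq. (9.19): K as a sum of pairwise disjoint multitons {k_K(x) ◦ x}.
multitons = [Counter({x: k}) for x, k in K.items()]
print(multitons)          # [Counter({'a': 3}), Counter({'b': 1}), Counter({'c': 2})]

# Eq. (9.20): each multiton is k copies of the singleton {x} added together.
rebuilt = Counter()
for x, k in K.items():
    for _ in range(k):
        rebuilt += Counter({x: 1})   # repeated arithmetic addition of singletons
print(rebuilt == K)       # True
```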


9.3 Families of Sets and Multisets

Define some special types of families of sets and multisets [1, 4, 6, 7, 16, 17].

The family of all subsets of a set A, which includes the set A itself and the empty set ∅, is called the power set or Boolean of the set A and is denoted by P(A). The Boolean P(A) is a set of subsets of the set A.

The family of all different submultisets of a multiset A over a domain X is called a macroset or Boolean of the multiset A and is denoted by P(A). Here the word "different" is important, since the theory of multisets allows unlimited repetition of elements in the same multiset and, accordingly, repetition of multisets in the same family of multisets. In the family P(A), different submultisets are always present in single exemplars, including the multiset A itself, which is the maximal multiset for the family P(A), the set Supp A, and the empty multiset ∅. The Boolean P(A) is a set of submultisets of the multiset A.

The family of all possible submultisets of a multiset A over a domain X is called a power multiset or multiboolean of the multiset A and is denoted by Q(A). Here the word "possible" is important, since, contrary to the Boolean P(A), the multiboolean Q(A) is a multiset in which several identical submultisets of the multiset A are present. The carrier Supp Q(A) of the multiboolean Q(A) is the Boolean P(A) of the multiset A. If a set A is a submultiset of the multiset A (A ⊆ A), then P(A) ⊆ P(A) ⊆ Q(A). Some of the elements of the Booleans P(A), P(A) and of the multiboolean Q(A) are themselves sub(multi)sets of other (multi)sets of these families.

The cardinality of a family of sets or multisets P(A), P(A), Q(A) is determined by the total number of its elements (respectively, subsets of the set A and submultisets of the multiset A). Thus, the cardinality of the Boolean P(A) of an n-element set A = {x_1, …, x_n}, the family of all its subsets, is equal to card P(A) = |P(A)| = 2^|A| = 2^{card A} = 2^n. The cardinality of the Boolean P(A) of an n-dimensional multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} over a finite n-element set X = {x_1, …, x_n}, the family of all its different submultisets, is equal to card P(A) = |P(A)| = (1 + k_A(x_1)) · … · (1 + k_A(x_n)) = ∏_{i=1}^{n} (1 + k_A(x_i)). The cardinality of the multiboolean Q(A), the family of all possible submultisets of the n-dimensional multiset A, is equal to card Q(A) = |Q(A)| = 2^|A| = 2^{card A} = 2^m.

The capacity of a family of sets or multisets is defined as the sum of the cardinalities of the (multi)sets that form this family. Then, respectively, the capacity of the Boolean P(A) of a set A is equal to vol P(A) = ||P(A)|| = Σ_{A_j∈P(A)} |A_j| = Σ_{A_j∈P(A)} Σ_x χ_{A_j}(x); the capacity of the Boolean P(A) of a multiset A over a set X is equal to vol P(A) = ||P(A)|| = Σ_{A_j∈P(A)} |A_j| = Σ_{A_j∈P(A)} Σ_x k_{A_j}(x); and the capacity of the multiboolean Q(A) is vol Q(A) = ||Q(A)|| = Σ_{A_j∈Q(A)} |A_j| = Σ_{A_j∈Q(A)} Σ_x k_{A_j}(x).

For instance, the Boolean P(K) of the set K = {a, b, c} consists of the sets ∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}. The cardinality of the Boolean P(K) is |P(K)| = 2^3 = 8, and its capacity is ||P(K)|| = 12. The Boolean P(∅) of the empty set ∅ consists of the single set ∅. The cardinality of the Boolean P(∅) is |{∅}| = 2^0 = 1, and its capacity is ||{∅}|| = 0.

The Boolean P(K) of the multiset K = {3 ◦ a, 1 ◦ b, 2 ◦ c} over the domain K = {a, b, c} includes the multisets {0 ◦ a, 0 ◦ b, 0 ◦ c} = ∅, {1 ◦ a, 0 ◦ b, 0 ◦ c} = {a}, {0 ◦ a, 1 ◦ b, 0 ◦ c} = {b}, {0 ◦ a, 0 ◦ b, 1 ◦ c} = {c}, {1 ◦ a, 1 ◦ b, 0 ◦ c} = {a, b}, {1 ◦ a, 0 ◦ b, 1 ◦ c} = {a, c}, {0 ◦ a, 1 ◦ b, 1 ◦ c} = {b, c}, {2 ◦ a, 0 ◦ b, 0 ◦ c}, {0 ◦ a, 0 ◦ b, 2 ◦ c}, {1 ◦ a, 1 ◦ b, 1 ◦ c} = {a, b, c} = Supp K, {3 ◦ a, 0 ◦ b, 0 ◦ c}, {2 ◦ a, 1 ◦ b, 0 ◦ c}, {2 ◦ a, 0 ◦ b, 1 ◦ c}, {1 ◦ a, 0 ◦ b, 2 ◦ c}, {0 ◦ a, 1 ◦ b, 2 ◦ c}, {3 ◦ a, 1 ◦ b, 0 ◦ c}, {3 ◦ a, 0 ◦ b, 1 ◦ c}, {2 ◦ a, 0 ◦ b, 2 ◦ c}, {2 ◦ a, 1 ◦ b, 1 ◦ c}, {1 ◦ a, 1 ◦ b, 2 ◦ c}, {3 ◦ a, 0 ◦ b, 2 ◦ c}, {3 ◦ a, 1 ◦ b, 1 ◦ c}, {2 ◦ a, 1 ◦ b, 2 ◦ c}, {3 ◦ a, 1 ◦ b, 2 ◦ c} = K. The cardinality of the Boolean P(K) is |P(K)| = 4 · 2 · 3 = 24, and its capacity is ||P(K)|| = 72.

The multiboolean Q(K) of the multiset K = {3 ◦ a, 1 ◦ b, 2 ◦ c} over the domain K = {a, b, c} contains 1 multiset {0 ◦ a, 0 ◦ b, 0 ◦ c} = ∅, 3 multisets {1 ◦ a, 0 ◦ b, 0 ◦ c} = {a}, 1 multiset {0 ◦ a, 1 ◦ b, 0 ◦ c} = {b}, 2 multisets {0 ◦ a, 0 ◦ b, 1 ◦ c} = {c}, 3 multisets {1 ◦ a, 1 ◦ b, 0 ◦ c} = {a, b}, 6 multisets {1 ◦ a, 0 ◦ b, 1 ◦ c} = {a, c}, 2 multisets {0 ◦ a, 1 ◦ b, 1 ◦ c} = {b, c}, 3 multisets {2 ◦ a, 0 ◦ b, 0 ◦ c}, 1 multiset {0 ◦ a, 0 ◦ b, 2 ◦ c}, 6 multisets {1 ◦ a, 1 ◦ b, 1 ◦ c} = {a, b, c} = Supp K, 1 multiset {3 ◦ a, 0 ◦ b, 0 ◦ c}, 3 multisets {2 ◦ a, 1 ◦ b, 0 ◦ c}, 6 multisets {2 ◦ a, 0 ◦ b, 1 ◦ c}, 3 multisets {1 ◦ a, 0 ◦ b, 2 ◦ c}, 1 multiset {0 ◦ a, 1 ◦ b, 2 ◦ c}, 1 multiset {3 ◦ a, 1 ◦ b, 0 ◦ c}, 2 multisets {3 ◦ a, 0 ◦ b, 1 ◦ c}, 3 multisets {2 ◦ a, 0 ◦ b, 2 ◦ c}, 6 multisets {2 ◦ a, 1 ◦ b, 1 ◦ c}, 3 multisets {1 ◦ a, 1 ◦ b, 2 ◦ c}, 1 multiset {3 ◦ a, 0 ◦ b, 2 ◦ c}, 2 multisets {3 ◦ a, 1 ◦ b, 1 ◦ c}, 3 multisets {2 ◦ a, 1 ◦ b, 2 ◦ c}, 1 multiset {3 ◦ a, 1 ◦ b, 2 ◦ c} = K. The cardinality of the multiboolean Q(K) is |Q(K)| = 2^6 = 64, and its capacity is ||Q(K)|| = 192.

A family A = {A_i}_{i∈I} of multisets generated by a domain X, such that every element x is included in a multiset A_i at most k times, that is, k_{A_i}(x) ≤ k, k an integer, we call a k-bounded series or k-bounded repetition of multisets over the domain X and denote by P_[k](X). The maximal multiset for a k-bounded series P_[k](X) is the constant multiset Z_[k] with the multiplicity function k_{Z[k]}(x) = k for all x ∈ X. If the domain X is finite, then the cardinality of the maximal multiset Z_[k] is |Z_[k]| = k·|X|. If the domain X is a countable set, then the maximal multiset Z_[k] is also a countable multiset, and its cardinality is equal to the cardinality of the set of natural numbers, |Z_[k]| = k·|X| = ℵ_0 [20]. Thus, the k-bounded series P_[k](X) over a domain X is nothing but the Boolean P(Z_[k]) of the multiset Z_[k]. In particular, the 0-bounded series P_[0](X) consists of the single empty set {∅}, and the 1-bounded series P_[1](X) coincides with the Boolean P(X) of the domain X. The cardinality of the k-bounded series P_[k](X) of multisets over a finite domain X = {x_1, …, x_n} is equal to

card P_[k](X) = |P_[k](X)| = (1 + k)^n = Σ_{t=0}^{n} C(n, t) k^t = Σ_{t=0}^{n} (n!/(t!(n − t)!)) k^t.
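The counts quoted above can be verified with a short Python sketch (ours, not from the book) that enumerates the Boolean P(K) of the multiset K = {3 ◦ a, 1 ◦ b, 2 ◦ c} and counts, for each submultiset, how many times it occurs in the multiboolean Q(K).

```python
# Sketch (not from the book): Boolean and multiboolean of K = {3◦a, 1◦b, 2◦c}.
from itertools import product
from math import comb, prod

K = {'a': 3, 'b': 1, 'c': 2}

# All different submultisets: every multiplicity ranges over 0..k_K(x).
boolean = [dict(zip(K, mults)) for mults in product(*(range(k + 1) for k in K.values()))]
print(len(boolean))                              # 24 = (1+3)(1+1)(1+2) = |P(K)|
print(sum(sum(A.values()) for A in boolean))     # 72 = capacity ||P(K)||

# In Q(K) each submultiset A occurs as often as it can be drawn from the 6 element
# copies of K, i.e. prod of binomial coefficients C(k_K(x), k_A(x)).
copies = [prod(comb(K[x], A[x]) for x in K) for A in boolean]
print(sum(copies))                                               # 64 = 2^6 = |Q(K)|
print(sum(c * sum(A.values()) for c, A in zip(copies, boolean))) # 192 = ||Q(K)||
```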

For example, the family of real numbers written in decimal notation and presented as multisets is the 9-bounded series P_[9](X) over the countable domain X = {…, 10^3, 10^2, 10^1, 10^0, 10^(−1), 10^(−2), 10^(−3), …}.


A family of multisets in which no restrictions on the number of elements are imposed will be called an unbounded series P_[∞](X) over the domain X. For the unbounded series P_[∞](X) over a finite or countable domain X, the maximal multiset Z_[∞] is a countable multiset with the cardinality |Z_[∞]| = |Z_+|·|X| = ℵ_0. Obviously, any series is a subfamily of the series of higher levels: P_[0](X) ⊆ P_[1](X) ⊆ … ⊆ P_[k](X) ⊆ … ⊆ P_[∞](X).

A family of nonempty multisets A_1, …, A_r whose union is ⋃_{s=1}^{r} A_s = A will be called a covering of the multiset A; a family whose intersection is ⋂_{s=1}^{r} A_s = A will be called an overlapping of the multiset A; and a family whose sum is Σ_{s=1}^{r} A_s = A will be called a decomposition of the multiset A. If the multisets are pairwise disjoint (A_p ∩ A_q = ∅, p ≠ q; we call such multisets disjoint), then the covering and the decomposition of the multiset A coincide and are called a partition of the multiset A [6, 7, 16]. The multisets A_1, …, A_r that form a covering, overlapping, decomposition, or partition of the multiset A will be called blocks, and the number r of blocks is called the rank of the covering, overlapping, decomposition, or partition. Denote the covering of a multiset A of rank r by C|r(A) = {A_1; …; A_s; …; A_r}, the overlapping of A of rank r by I|r(A) = {A_1; …; A_s; …; A_r}, the decomposition of A of rank r by D|r(A) = {A_1; …; A_s; …; A_r}, and the partition of A of rank r by B|r(A) = {A_1; …; A_s; …; A_r}, where the blocks are separated by semicolons. If all blocks are submultisets of the multiset A, then we talk about a partition of the multiset A into classes. For the cardinality and capacity of the covering C|r(A), overlapping I|r(A), decomposition D|r(A), and partition B|r(A) of a multiset A, we have:

|C|r(A)| = |I|r(A)| = |D|r(A)| = |B|r(A)| = r,
||C|r(A)|| ≥ |A|, ||I|r(A)|| ≥ |A|, ||D|r(A)|| = ||B|r(A)|| = |A|.

Coverings and partitions of a set, and coverings, overlappings, decompositions and partitions of a multiset, can be ordered and unordered, depending on the order of their blocks. For instance, a family of zerons {{0 ◦ x_1}; …; {0 ◦ x_s}; …; {0 ◦ x_n}} is the partition B|n(∅) of the empty (multi)set ∅; a family of singletons {{1 ◦ x_1}; …; {1 ◦ x_s}; …; {1 ◦ x_n}} is the partition B|n(X) of the set X = {x_1, …, x_n}; and a family of multitons {{k_A(x_1) ◦ x_1}; …; {k_A(x_s) ◦ x_s}; …; {k_A(x_n) ◦ x_n}} is the partition B|n(A) of the multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_s) ◦ x_s, …, k_A(x_n) ◦ x_n}, with the number of blocks r = |X| = n. The Boolean P(A) of a set A is a covering C|r(A) of the set A with the number of blocks r = |P(A)| = 2^|A|. The Boolean P(A) of an n-dimensional multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} over a finite set X = {x_1, …, x_n} is a covering C|r(A) of the multiset A with the number of blocks r = |P(A)| = ∏_{i=1}^{n} (1 + k_A(x_i)).

A family of multisets over a set X = {x_1, x_2, …} will be called a semiring E of multisets if the family contains the empty multiset ∅ and the intersection A_i ∩ A_j of any multisets A_i, A_j ∈ E, and any multiset A ∈ E is representable as a finite union ⋃_{s=1}^{r} A_s = A of disjoint submultisets A_s ⊆ A;


a ring K of multisets if the family contains the empty multiset ∅ and the union A_i ∪ A_j, sum A_i + A_j and difference A_i − A_j of any multisets A_i, A_j ∈ K;

an algebra S of multisets if the family is a ring that includes the maximal multiset, named the unit of the algebra, and is closed under finite unions, additions and complements of multisets;

a σ-ring K_σ of multisets if the family is a ring and contains the countable union ⋃_{s=1}^{∞} A_s and the countable sum Σ_{s=1}^{∞} A_s of multisets A_s ∈ K_σ;

a δ-ring K_δ of multisets if the family is a ring and contains the countable intersection ⋂_{s=1}^{∞} A_s of multisets A_s ∈ K_δ;

a σ-algebra S_σ of multisets if the family is a σ-ring, contains the unit of the algebra, and is closed under countable unions, additions and complements of multisets.

In particular, every ring of sets/multisets is a semiring of sets/multisets, and every σ-algebra of sets/multisets is a δ-algebra of sets/multisets, and vice versa. The Boolean P(A) of a set A is the minimal algebra of subsets of the set A, which is the unit of the algebra. The Boolean algebra B(A) of subsets of a set A ⊆ X is a σ-algebra of the set X, which is the unit of the algebra. The Boolean P(A) of an n-dimensional multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} over a finite set X = {x_1, …, x_n} is the minimal algebra of submultisets of the multiset A, which is the unit of the algebra. The Boolean algebra B(A) of submultisets of a multiset A = {k_A(x_1) ◦ x_1, k_A(x_2) ◦ x_2, …} over an infinite set X = {x_1, x_2, …} is a σ-algebra of the multiset A, which is the unit of the algebra. In going to sets, the decomposition of a set becomes impossible, and the covering, partition, semiring, ring and algebra of multisets become, respectively, the covering, partition, semiring, ring and algebra of sets.
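The following Python sketch (ours, not part of the original text) checks, directly from the definitions above, whether a family of multisets given as multiplicity dictionaries forms a covering, a decomposition, or a partition of a multiset A; the example blocks are chosen only for illustration.

```python
# Sketch (not from the book): covering / decomposition / partition checks.
U = ['a', 'b', 'c']
A = {'a': 3, 'b': 1, 'c': 2}

def is_covering(family):       # union of blocks equals A
    return {x: max(B[x] for B in family) for x in U} == A

def is_decomposition(family):  # arithmetic sum of blocks equals A
    return {x: sum(B[x] for B in family) for x in U} == A

def is_partition(family):      # decomposition with pairwise disjoint blocks
    pairwise_disjoint = all(min(B[x], C[x]) == 0
                            for i, B in enumerate(family)
                            for C in family[i + 1:] for x in U)
    return is_decomposition(family) and pairwise_disjoint

multitons = [{'a': 3, 'b': 0, 'c': 0}, {'a': 0, 'b': 1, 'c': 0}, {'a': 0, 'b': 0, 'c': 2}]
overlap   = [{'a': 2, 'b': 1, 'c': 0}, {'a': 1, 'b': 0, 'c': 2}]

print(is_partition(multitons), is_covering(multitons))  # True True
print(is_decomposition(overlap), is_covering(overlap))  # True False
```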

9.4 Graphical Representations of Multisets

Let us consider several possible graphical representations of multisets [3, 4, 6, 7]. One of the simplest images of a multiset A is the graph of its multiplicity function k_A(x), where the elements of the universal set U or domain X are located on the abscissa axis, and the values of the multiplicity function, indicating the number of occurrences of an element x in the multiset A, are plotted along the ordinate axis. The graph is a histogram (bar chart) whose base is the carrier Supp A of the multiset and whose columns each specify a component of the multiset A. The histogram width is equal to the dimension /A/ of the multiset A, or the cardinality |Supp A| of the carrier, and the histogram height is the height alt A of the multiset A. The cardinality |A| of the multiset is numerically equal to the area of the figure bounded by the histogram. The histogram is, in fact, a two-dimensional Euler–Venn diagram, which differs in form from the diagrams traditionally used in set theory [18]. The histograms of the multisets K = {3 ◦ a, 1 ◦ b, 2 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g} and L = {0 ◦ a, 4 ◦ b, 0 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g} over the set U = {a, b, c, d, e, f, g} are shown in Fig. 9.1. The histograms of the universal set U and of a constant multiset N_[h] are rectangles whose bases are the set U and whose heights are equal to 1 and h, respectively. The histogram of the empty (multi)set ∅ coincides with the abscissa axis.


Fig. 9.1 Histograms representing multisets K = {3 ◦ a, 1 ◦ b, 2 ◦ c, 0 ◦ d, 0 ◦ e, 0 ◦ f, 0 ◦ g} and L = {0 ◦ a, 4 ◦ b, 0 ◦ c, 3 ◦ d, 2 ◦ e, 0 ◦ f, 1 ◦ g}
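As a quick substitute for the figure, a trivial Python sketch (ours) prints a text rendering of the two multiplicity histograms.

```python
# Sketch (not from the book): text rendering of the histograms of Fig. 9.1.
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
K = {'a': 3, 'b': 1, 'c': 2, 'd': 0, 'e': 0, 'f': 0, 'g': 0}
L = {'a': 0, 'b': 4, 'c': 0, 'd': 3, 'e': 2, 'f': 0, 'g': 1}

def histogram(A, name):
    print(name)
    for x in U:
        print(f'  {x} | ' + '#' * A[x])   # bar length = multiplicity k_A(x)

histogram(K, 'K')   # bars of heights 3, 1, 2 over a, b, c
histogram(L, 'L')   # bars of heights 4, 3, 2, 1 over b, d, e, g
```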

Finite multisets over a domain X = {x_1, …, x_n} can be graphically represented by a bipartite multigraph G(V_1, V_2, E) with multiple edges. In the multigraph G, the first subset V_1 of vertices is associated with the elements x_i ∈ X, i = 1, …, n, of the domain, and the second subset V_2 of vertices with the multisets A_s generated by the domain X. The number of edges e_k ∈ E connecting adjacent vertices of the multigraph G is equal to the multiplicity k_{A_s}(x_i) of occurrences of the element x_i in the multiset A_s.

The Boolean P(A) of a multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} over a set X = {x_1, …, x_n} can be represented as a hypergraph (V, W), where the set of vertices V defines the n-dimensional multiset A written as the m-element set A = {x_1, … (k_A(x_1) times), …, x_n, … (k_A(x_n) times)}, m = k_A(x_1) + … + k_A(x_n) = |A|, and the family of edges W = {W_s} corresponds to the submultisets A_s ∈ P(A), s = 1, …, w, w = |P(A)| = ∏_i (1 + k_A(x_i)).

The bipartite multigraph G(V_1, V_2, E) and the hypergraph (V, W) that represent the submultisets G_1–G_9 of the multiset G = {1 ◦ a, 2 ◦ b, 3 ◦ c} over the domain X = {a, b, c, d, e, f, g} are shown in Fig. 9.2. Here G_1 = ∅, G_2 = {1 ◦ a}, G_3 = {1 ◦ a, 2 ◦ b}, G_4 = {1 ◦ a, 1 ◦ b, 1 ◦ c}, G_5 = {2 ◦ b}, G_6 = {2 ◦ b, 1 ◦ c}, G_7 = {1 ◦ b, 2 ◦ c}, G_8 = {1 ◦ c}, G_9 = {1 ◦ a, 2 ◦ b, 3 ◦ c}.

Taking into consideration the unified notation forms (9.2) and (9.3), we can associate an n-dimensional integer vector z_s = (z_s1, …, z_sn), whose components are given by the condition z_si = k_{A_s}(x_i), x_i ∈ X, with any finite set and any finite multiset over a set X = {x_1, …, x_n}. For example, the universal set U = {a, b, c, d, e, f, g} corresponds to the vector (1,1,1,1,1,1,1); the empty (multi)set to the vector (0,0,0,0,0,0,0); the singleton {a} to the vector (1,0,0,0,0,0,0); the multiton {2 ◦ b} to the vector (0,2,0,0,0,0,0); the sets K = {a, b, c}, L = {b, d, e, g}, M = {a, b, c, d, e, g} to the vectors (1,1,1,0,0,0,0), (0,1,0,1,1,0,1), (1,1,1,1,1,0,1); and the multisets K = {3 ◦ a, 1 ◦ b, 2 ◦ c}, L = {4 ◦ b, 3 ◦ d, 2 ◦ e, 1 ◦ g}, M = {3 ◦ a, 5 ◦ b, 2 ◦ c, 3 ◦ d, 2 ◦ e, 1 ◦ g} to the vectors (3,1,2,0,0,0,0), (0,4,0,3,2,0,1), (3,5,2,3,2,0,1).

The Boolean P(A) of an n-dimensional multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} over a domain X = {x_1, …, x_n} is equivalent to the family Z^n of integer vectors z_s of dimension n. These families can be represented by a rectangular incidence matrix Z = ||z_si||_{w×n}, whose s-th row coincides with the vector z_s, whose number w of rows is equal to the cardinality of the Boolean, |P(A)| = ∏_i (1 + k_A(x_i)), and whose number n of columns is equal to the domain cardinality |X|.
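A small Python sketch (ours) of the vector representation just described: multisets over a fixed domain are turned into integer vectors, which become the rows of the incidence matrix Z.

```python
# Sketch (not from the book): multisets as integer vectors / rows of the matrix Z.
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']

def to_vector(A):
    return tuple(A.get(x, 0) for x in U)   # z_si = k_A(x_i)

K = {'a': 3, 'b': 1, 'c': 2}
L = {'b': 4, 'd': 3, 'e': 2, 'g': 1}
M = {'a': 3, 'b': 5, 'c': 2, 'd': 3, 'e': 2, 'g': 1}

Z = [to_vector(A) for A in (K, L, M)]      # rows z_s of the incidence matrix
for row in Z:
    print(row)
# (3, 1, 2, 0, 0, 0, 0)
# (0, 4, 0, 3, 2, 0, 1)
# (3, 5, 2, 3, 2, 0, 1)
```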


Fig. 9.2 Submultisets G_1–G_9 of the multiset G = {1 ◦ a, 2 ◦ b, 3 ◦ c} represented by the bipartite multigraph G(V_1, V_2, E) and the hypergraph (V, W)

For example, the family Z^3 of integer vectors, which is equivalent to the family P(G) of submultisets of the multiset G = {1 ◦ a, 2 ◦ b, 3 ◦ c}, consists of the following vectors z_s: (0,0,0) ~ ∅; (1,0,0) ~ {1 ◦ a}; (0,1,0) ~ {1 ◦ b}; (0,2,0) ~ {2 ◦ b}; (0,0,1) ~ {1 ◦ c}; (0,0,2) ~ {2 ◦ c}; (0,0,3) ~ {3 ◦ c}; (1,1,0) ~ {1 ◦ a, 1 ◦ b}; (1,2,0) ~ {1 ◦ a, 2 ◦ b}; (1,0,1) ~ {1 ◦ a, 1 ◦ c}; (1,0,2) ~ {1 ◦ a, 2 ◦ c}; (1,0,3) ~ {1 ◦ a, 3 ◦ c}; (0,1,1) ~ {1 ◦ b, 1 ◦ c}; (0,1,2) ~ {1 ◦ b, 2 ◦ c}; (0,1,3) ~ {1 ◦ b, 3 ◦ c}; (0,2,1) ~ {2 ◦ b, 1 ◦ c}; (0,2,2) ~ {2 ◦ b, 2 ◦ c}; (0,2,3) ~ {2 ◦ b, 3 ◦ c}; (1,1,1) ~ {1 ◦ a, 1 ◦ b, 1 ◦ c}; (1,1,2) ~ {1 ◦ a, 1 ◦ b, 2 ◦ c}; (1,1,3) ~ {1 ◦ a, 1 ◦ b, 3 ◦ c}; (1,2,1) ~ {1 ◦ a, 2 ◦ b, 1 ◦ c}; (1,2,2) ~ {1 ◦ a, 2 ◦ b, 2 ◦ c}; (1,2,3) ~ {1 ◦ a, 2 ◦ b, 3 ◦ c}. The matrix Z = ||z_si||_{w×n} of the families Z^3 and P(G), composed of the vectors z_s, has the dimension 24 × 3.

Recall that the graph K_n, called the n-dimensional cube, is defined as the recursive product of graphs K_n = K_{n−1} × H_2, K_1 = H_2, where H_2 is a complete graph both vertices of which are adjacent [19]. By the rule of graph multiplication, the vertices ⟨v_1, v_2⟩ and ⟨u_1, u_2⟩ of the product-graph G = G_1 × G_2 are adjacent if and only if either the vertices v_1 and u_1 coincide and the vertices v_2 and u_2 are adjacent, or the vertices v_2 and u_2 coincide and the vertices v_1 and u_1 are adjacent. The n-dimensional cube K_n is a simple connected graph with 2^n vertices, which can be associated with binary vectors b_s = (b_s1, …, b_sn) with components b_si equal to 0 or 1. Vertices of the cube K_n are adjacent if their vectors b_s differ in only one component.

By analogy with the n-dimensional cube K_n, we define an n-dimensional parallelepiped Λ_n as a simple connected graph that is the product of graphs Λ_n = Λ_{n−1} × P_{hn} = P_{h1} × … × P_{h(n−1)} × P_{hn}, where P_{hi}, i = 1, …, n, is a simple chain consisting of h_i adjacent vertices and l_i = h_i − 1 edges. The number of vertices of the n-dimensional parallelepiped Λ_n is p_n = ∏_{i=1}^{n} h_i, and the number of edges connecting adjacent vertices is q_n = Σ_{i=1}^{n} (l_i ∏_{s∈S_i} h_s), where S_i = N_n \ {i} is a set of indices. Contrary to the n-dimensional cube K_n, the n-dimensional parallelepiped Λ_n is not, in general, a regular graph.

There exists a close connection between the expressions that determine the number of vertices p_n of the n-dimensional parallelepiped Λ_n and the cardinality of the family P(A) of submultisets. Each vertex of the parallelepiped Λ_n corresponds to one of the submultisets A_s ∈ P(A), or to one of the n-dimensional vectors z_s = (z_s1, …, z_sn), z_si = k_{A_s}(x_i), s = 1, …, w, of the family Z^n. The integer vectors z_s and z_t corresponding to adjacent vertices of the parallelepiped Λ_n differ by a unit in only one component. A simple chain representing the i-th side of the n-dimensional parallelepiped Λ_n consists of h_i = 1 + k_A(x_i) adjacent vertices, one of which corresponds to the empty multiset ∅, and the other vertices correspond to the components 1 ◦ x_i, …, k_A(x_i) ◦ x_i of the multiset A. Then the volume of the n-dimensional parallelepiped is equal to the total number of vertices of the graph Λ_n, and, therefore, p_n = |P(A)| = |Z^n| = ∏_i (1 + k_A(x_i)) = w. So, the family P(A) of submultisets A_s of an n-dimensional multiset A = {k_A(x_1) ◦ x_1, …, k_A(x_n) ◦ x_n} and the family Z^n of n-dimensional integer vectors z_s can also be presented graphically as an n-dimensional parallelepiped.

The parallelepiped Λ_3 representing the family P(G) of submultisets of the multiset G = {1 ◦ a, 2 ◦ b, 3 ◦ c} and the family Z^3 of vectors is shown in Fig. 9.3. The parallelepiped has p_3 = (1 + k_G(a)) · (1 + k_G(b)) · (1 + k_G(c)) = 2 · 3 · 4 = 24 vertices and q_3 = k_G(a) · (1 + k_G(b)) · (1 + k_G(c)) + (1 + k_G(a)) · k_G(b) · (1 + k_G(c)) + (1 + k_G(a)) · (1 + k_G(b)) · k_G(c) = 1 · 3 · 4 + 2 · 2 · 4 + 2 · 3 · 3 = 46 edges. The cardinality of the Boolean P(G) of the multiset G and the cardinality of the family Z^3 of vectors are equal to p_3 = |P(G)| = |Z^3| = 24. In the figure, for brevity of notation, only the values of the multiplicity functions of the submultisets G_s (the components of the vectors z_s) are shown, and the brackets and commas between the elements of the submultisets (the components of the vectors) are omitted.
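The vertex and edge counts of Λ_3 can be checked with a few lines of Python (our sketch, not from the book): the vertices are the integer vectors z_s with 0 ≤ z_si ≤ k_G(x_i), and two vertices are adjacent when the vectors differ by one unit in exactly one component.

```python
# Sketch (not from the book): the parallelepiped Λ_3 for G = {1◦a, 2◦b, 3◦c}.
from itertools import product

k = (1, 2, 3)                                            # multiplicities of a, b, c in G
vertices = list(product(*(range(ki + 1) for ki in k)))   # all vectors z_s

edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
         if sum(abs(ui - vi) for ui, vi in zip(u, v)) == 1]

print(len(vertices))   # 24 = 2 * 3 * 4 vertices, i.e. |P(G)|
print(len(edges))      # 46 edges, as computed in the text
```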


Fig. 9.3 Families P(G) and Z^3 represented by the three-dimensional parallelepiped Λ_3



9.5 Set Measure and Multiset Measure

In many branches of mathematics, a special function plays a significant role: the measure of a set, which is intuitively associated with the "massiveness" of a set and depends on the number of its elements [20]. The notion of a set measure appears as a generalization of the concepts of the length of a segment, the area of a plane figure, and the volume of a spatial body.

A non-negative real-valued function m: A → R, defined on a family A = {A_i}_{i∈I} of sets, is called an additive or finitely additive measure of set if the equality

m(⋃_{i=1}^{n} A_i) = Σ_{i=1}^{n} m(A_i)   (9.21)

holds for any finite collection of pairwise disjoint sets A_i ∩ A_j = ∅, i ≠ j, A_i, A_j ∈ A, and is called a σ-additive or countably additive measure of set if the equality

m(⋃_{i=1}^{∞} A_i) = Σ_{i=1}^{∞} m(A_i)   (9.22)

holds for any countable collection of pairwise disjoint sets A_i ∈ A. A family A is a semiring E, ring K, σ-ring K_σ, or σ-algebra S_σ of subsets A_i of some set A. From the identity of the empty set ∅ = ∅ ∪ ∅ and equality (9.21), it follows that m(∅) = m(∅ ∪ ∅) = 2m(∅), and hence the measure of the empty set is m(∅) = 0.

Note important features of the set measure, which result from its additivity and generalize some features of the set cardinality:

m(A ∪ B) + m(A ∩ B) = m(A) + m(B),   (9.23)
m(A Δ B) = m(A\B) + m(B\A) = m(A ∪ B) − m(A ∩ B),   (9.24)
m(A\B) = m(A) − m(A ∩ B).   (9.25)

The set measure also has the following properties: monotonicity, A ⊆ B ⇒ m(A) ≤ m(B); symmetry, m(A) + m(Ā) = m(U); continuity, lim_{i→∞} m(A_i) = m(lim_{i→∞} A_i).

Any arbitrary finite or countable set A = {x_1, x_2, …} can always be specified as a partition A = ⋃_i {x_i} into pairwise disjoint single-element sets, the singletons {x_i}, i = 1, 2, … Let the measure of a singleton {x_i} be equal to m({x_i}) = w_i, 0 ≤ w_i < ∞. Then, according to equalities (9.21), (9.22), the function

m(A) = m(⋃_i A_i) = Σ_i m(A_i) = Σ_i m({x_i}) = Σ_i w_i χ_A(x_i),   (9.26)

i

defined on a family of sets A, is an additive or, respectively, σ-additive. measure of set. Obviously, the measure of the universal set U is equal to m(U) = i wi . If in . expression (9.26) we consider all numbers wi = 1, then m( A) = i χ A (xi ) = |A|. Thus, the set cardinality is a special case of its measure. .∞ The pair (S σ (X),m), where S σ (X) is σ-algebra over a set X = i=1 Ai , m is σ-finite measure defined on a σ-ring K σ of subsets Ai ⊆ X that form a covering of set X, is called a space of measurable sets or space of sets with measure. The union and intersection of a finite number of measurable sets, difference and symmetric difference of two measurable sets are measurable sets [20]. The necessary and sufficient conditions for measurability of set are formulated as follows. A set A belonging to a σ-ring K σ is measurable if and only if there exists a measurable set B ∈ K σ such that the inequality m(AΔ B) < ε holds for any ε > 0. A non-negative real-valued function m:A → R, defined on a family A = {Ai }i∈I of sets, multisets, is called a strongly additive or strongly finitely additive measure of multiset if the equality m(

n .

Ai ) =

i=1

n .

m( Ai )

(9.27)

i=1

holds for any finite collection of multisets A_i ∈ A, and is called a strongly σ-additive or strongly countably additive measure of multiset if the equality

m(Σ_{i=1}^{∞} A_i) = Σ_{i=1}^{∞} m(A_i)   (9.28)

holds for any countable collection of multisets A_i ∈ A [4, 6, 7, 13]. A family A is a semiring E, ring K, σ-ring K_σ, or σ-algebra S_σ of submultisets A_i of some multiset A = {k_A(x_1) ◦ x_1, k_A(x_2) ◦ x_2, …} over a set X = {x_1, x_2, …}. Such a multiset A can be, for example, the maximal multiset Z = {k_Z(x_1) ◦ x_1, k_Z(x_2) ◦ x_2, …}, a constant multiset Z_[k] = {k ◦ x_1, k ◦ x_2, …}, or an n-dimensional multiset X = {k_X(x_1) ◦ x_1, …, k_X(x_n) ◦ x_n}. From the identity of the empty multiset ∅ = ∅ + ∅ and equality (9.27), it follows that m(∅) = m(∅ + ∅) = 2m(∅), and hence the measure of the empty multiset is m(∅) = 0.

For any finite or countable partition of a multiset A, the sum of the submultisets coincides with their union, Σ_{i=1}^{n} A_i = ⋃_{i=1}^{n} A_i. Then expressions (9.27) and (9.28) for disjoint multisets A_i ∩ A_j = ∅, i ≠ j, A_i, A_j ∈ A, can be written as

m(⋃_{i=1}^{n} A_i) = Σ_{i=1}^{n} m(A_i),   (9.29)

m(⋃_{i=1}^{∞} A_i) = Σ_{i=1}^{∞} m(A_i).   (9.30)

A measure of multiset satisfying condition (9.29) will be called weakly additive or weakly finitely additive, and one satisfying condition (9.30) will be called weakly σ-additive or weakly countably additive. A strongly additive function of multiset is obviously also weakly additive. The converse, generally, is not true. The weak additivity (9.29), (9.30) of a multiset function coincides with the finite and countable additivity of a set function. For sets, the addition operation is not feasible, and the strong additivity of a function is absent. Thus, the additivity of multiset functions is more diverse than the additivity of set functions.

Note important features of the multiset measure, which result from its additivity, generalize some features of the multiset cardinality, and are absent for the set measure:

m(A + B) = m(A) + m(B) = m(A ∪ B) + m(A ∩ B),   (9.31)
m(A Δ B) = m(A − B) + m(B − A) = m(A ∪ B) − m(A ∩ B),   (9.32)
m(A − B) = m(A) − m(A ∩ B).   (9.33)

The multiset measure also has the following properties: monotonicity, m(A) ≤ m(B) ⇔ A ⊆ B; symmetry, m(A) + m(Ā) = m(Z); continuity, lim_{i→∞} m(A_i) = m(lim_{i→∞} A_i); elasticity, m(b•A) = b·m(A), where b ≥ 1 is an integer.

Let the measure of a singleton {x_i} be equal to m({x_i}) = w_i, 0 ≤ w_i < ∞. Then, according to equalities (9.27), (9.28), the function

m(A) = m(Σ_i A_i) = Σ_i m(A_i) = Σ_i k_A(x_i)·m({x_i}) = Σ_i w_i k_A(x_i),   (9.34)

defined on the family A of multisets, is a strongly additive or, respectively, strongly σ-additive measure of multiset. If in expression (9.34) we take all the numbers w_i = 1, then m(A) = Σ_i k_A(x_i) = |A|. Thus, the multiset cardinality is a special case of its measure.

The pair (S_σ(Z), m), where S_σ(Z) is a σ-algebra over the maximal multiset Z = Σ_{i=1}^{∞} A_i and m is a σ-finite measure defined on a σ-ring K_σ of submultisets A_i ⊆ Z that form a decomposition of the multiset Z, will be called a space of measurable multisets or a space of multisets with measure. The sum, union and intersection of a finite number of measurable multisets, the difference and symmetric difference of two measurable multisets, and the reproduction of a measurable multiset are measurable multisets [6, 7].

The necessary and sufficient conditions for measurability of multiset are formulated as follows. A multiset A belonging to a σ-ring K σ is measurable if and only if there exists a measurable multiset B ∈ K σ such that the inequality m(AΔ B) < ε holds for any ε > 0.
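The following Python sketch (ours, not from the book) builds a multiset measure as the weighted sum (9.34) of multiplicities, using hypothetical singleton weights w chosen only for illustration, and checks the identities (9.31)-(9.33) numerically for the running multisets K and L.

```python
# Sketch (not from the book): a weighted multiset measure (9.34) and identities (9.31)-(9.33).
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
w = {'a': 1.0, 'b': 0.5, 'c': 2.0, 'd': 1.5, 'e': 1.0, 'f': 3.0, 'g': 0.5}  # illustrative weights

K = {'a': 3, 'b': 1, 'c': 2, 'd': 0, 'e': 0, 'f': 0, 'g': 0}
L = {'a': 0, 'b': 4, 'c': 0, 'd': 3, 'e': 2, 'f': 0, 'g': 1}

def m(A):                                    # Eq. (9.34)
    return sum(w[x] * A[x] for x in U)

plus  = {x: K[x] + L[x] for x in U}          # arithmetic sum (no capping needed here)
union = {x: max(K[x], L[x]) for x in U}
inter = {x: min(K[x], L[x]) for x in U}
sdiff = {x: abs(K[x] - L[x]) for x in U}
diff  = {x: max(K[x] - L[x], 0) for x in U}

print(m(plus) == m(K) + m(L) == m(union) + m(inter))   # (9.31) -> True
print(m(sdiff) == m(union) - m(inter))                 # (9.32) -> True
print(m(diff) == m(K) - m(inter))                      # (9.33) -> True
```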

9.6 Metric Spaces of Multisets Let us firstly remind some notions of theory of set metric spaces [20–24]. A set, for which the concept of element proximity is somehow specified, is usually called a space, and its elements are called points of space. A non-negative real function d X , defined on the direct product X × X of set X, is called a metric on 0 a set X if for any elements ) ( 0 ) the following axioms hold: (1 ) axiom of( symmetry d X (x, y) = d X (y, x); 2 axiom of identity d X (x, y) = 0 ⇔ x = y; 30 axiom of triangle d X (x, y) ≤ d X (x, z) + d X (z, y). A numerical value of function d X (x,y) is called a distance between elements x and y of set X. Requirements (10 )–(30 ) also imply a non-negativity of metric d X (x,y) ≥ 0. A set X with a given metric d X is called a metric space and denoted by (X, d X ). A metric space X is said to be metrically convex or d-convex if, for any different elements x,y ∈ X, x /= y, there exists a distinct element z ∈ X, z /= x, z /= y such that the triangle inequality (30 ) becomes into the equality d X (x, y) = d X (x, z) + d X (z, y). A metric d X satisfying this condition will be called d-convex. The metric convexity of space (X, d X ) is a spatial analogue of the ternary relation [x, z, y] ‘an element z is placed between elements x and y’, which is given by the condition inf (x, y) ≤ z ≤ sup(x, y) in any arbitrary partially ordered set. In order to evaluate a mutual disposition of elements of set X, other indicators of the elements’ proximity based on the incomplete axiomatics of metric space are also used. A non-negative real function d X , defined on the direct product X × X, satisfying the axiom of symmetry (10 ) and (40 ) coincidence condition d X (x,x) = 0 for all x ∈ X, is called a quasimetric, and a function d X , satisfying also the axiom of triangle (30 ), is called a pseudometric on a set X. Corresponding spaces (X, d X ) are called quasimetric and pseudometric. In quasi- and pseudometric spaces, the axiom of identity (20 ) for a function d X is not satisfied, in general, that is, the condition d X (x,y) = 0 does not imply the equality of elements x and y. The coincidence condition (40 ) is weaker than the axiom of identity (20 ). Therefore, any metric will also be a quasimetric and pseudometric. The converse is generally not true. A non-negative real function d X , defined on the direct product X × X and satisfying the axiom of symmetry (10 ) and axiom of identity (20 ), is called a symmetric on a set X. A set X with a symmetric d X is called a proximity space. A symmetric d X , satisfying (50 ) the inequality d X (x, y) ≤ maxz∈X [d X (x, z), d X (z, y)] for any elements of a set X, is called an ultrametric on a set X or ultrametric distance between elements x and y. It is easy to verify that the triangle axiom (30 ) also holds for an ultrametric. Thus, an ultrametric d X satisfies the axiomatics of metric space, and a space (X, d X ) with an ultrametric d X is metric. The inequality (50 ) is stronger than the inequality


The inequality (5°) is stronger than the triangle inequality (3°). Therefore, any ultrametric will also be a metric. The converse is generally not true.

Consider possible approaches to the formation of metric spaces (A, d_A) on families of sets and multisets [4, 13, 25–27]. Any set A ⊆ X = {x_1, ..., x_n} and multiset A ⊆ Z over a set X can be associated with an integer vector y = (y_1, ..., y_n), the components of which are given for a set A as y_i = χ_A(x_i) = 0, 1, and for a multiset A as y_i = k_A(x_i) = 0, 1, 2, ..., x_i ∈ X. Let us use an analogy with the well-known types of metric spaces: the vector spaces l_p^n and the spaces l_p of bounded numerical sequences.

The Boolean P(X) of submultisets of the n-dimensional multiset X = {k_X(x_1)◦x_1, ..., k_X(x_n)◦x_n} over a finite set X = {x_1, ..., x_n} is equivalent to the set R^n of n-dimensional vectors y_s = (y_s1, ..., y_sn) with real components y_si = k_{A_s}(x_i) and forms the following metric spaces of finite multisets [4, 6, 7, 28]:

P_1 = (P(X), d_P1) with the Hamming-type metric

    d_P1(A, B) = Σ_{i=1}^{n} |k_A(x_i) − k_B(x_i)|;                                    (9.35)

P_2 = (P(X), d_P2) with the Euclid-type metric

    d_P2(A, B) = [Σ_{i=1}^{n} |k_A(x_i) − k_B(x_i)|^2]^{1/2};                          (9.36)

P_p = (P(X), d_Pp) with the Minkowski-type metric

    d_Pp(A, B) = [Σ_{i=1}^{n} |k_A(x_i) − k_B(x_i)|^p]^{1/p},  p ≥ 1 is an integer;    (9.37)

P_∞ = (P(X), d_P∞) with the Chebyshev-type metric

    d_P∞(A, B) = max_i |k_A(x_i) − k_B(x_i)|.                                          (9.38)
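As an executable illustration, here is a small Python sketch (my own, with hypothetical function names) of the distances (9.35)–(9.38) for finite multisets represented as collections.Counter objects; the Minkowski-type function covers the Hamming case (p = 1) and the Euclid case (p = 2).

```python
from collections import Counter

def d_P(A, B, p=1):
    """Minkowski-type metric (9.37); p=1 gives (9.35), p=2 gives (9.36)."""
    support = set(A) | set(B)
    return sum(abs(A[x] - B[x]) ** p for x in support) ** (1.0 / p)

def d_P_inf(A, B):
    """Chebyshev-type metric (9.38)."""
    support = set(A) | set(B)
    return max(abs(A[x] - B[x]) for x in support) if support else 0

A = Counter({"a": 2, "b": 1, "c": 3})
B = Counter({"a": 1, "c": 1, "d": 2})
print(d_P(A, B, p=1))   # 1 + 1 + 2 + 2 = 6.0  (Hamming-type)
print(d_P(A, B, p=2))   # sqrt(1 + 1 + 4 + 4) = sqrt(10) ≈ 3.162
print(d_P_inf(A, B))    # 2
```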

The metric spaces P_p, P_∞ of finite multisets are metrically convex, embeddable into the vector spaces l_p^n and the spaces l_p of bounded numerical sequences, everywhere dense, separable, incomplete, noncompact, but relatively compact.

The σ-algebra S_σ(X) of submultisets of the multiset X = {k_X(x_1)◦x_1, k_X(x_2)◦x_2, ...} over an arbitrary set X = {x_1, x_2, ...}, where any submultiset A_s ⊆ X satisfies the restriction k_{A_s}(x_i) ≤ k_S < ∞ or Σ_{i=1}^{∞} (k_{A_s}(x_i))^p ≤ k_S^p < ∞ for all integers p ≥ 1, is equivalent to the set R^N of bounded numerical sequences y_s = {y_si} = {y_s1, y_s2, ...} with real terms y_si = k_{A_s}(x_i), satisfying the condition |y_si| ≤ c_S < ∞ or Σ_{i=1}^{∞} |y_si|^p ≤ c_S^p < ∞ for all integers p ≥ 1, and forms the following metric spaces of bounded multisets [4, 6, 7]:


S_1 = (S_σ(X), d_S1) with the Hamming-type metric

    d_S1(A, B) = Σ_{i=1}^{∞} |k_A(x_i) − k_B(x_i)|;                                    (9.39)

S_2 = (S_σ(X), d_S2) with the Euclid-type metric

    d_S2(A, B) = [Σ_{i=1}^{∞} |k_A(x_i) − k_B(x_i)|^2]^{1/2};                          (9.40)

S_p = (S_σ(X), d_Sp) with the Minkowski-type metric

    d_Sp(A, B) = [Σ_{i=1}^{∞} |k_A(x_i) − k_B(x_i)|^p]^{1/p},  p ≥ 1 is an integer;    (9.41)

S_∞ = (S_σ(X), d_S∞) with the Chebyshev-type metric

    d_S∞(A, B) = sup_i |k_A(x_i) − k_B(x_i)|.                                          (9.42)

The metric spaces S_p, S_∞ of bounded multisets are metrically convex, embeddable in the spaces l_p of bounded numerical sequences, separable, complete, noncompact, but locally compact.

The σ-algebra S_σ(Z) of submultisets of the measurable maximal multiset Z = {k_Z(x_1)◦x_1, k_Z(x_2)◦x_2, ...} over an arbitrary set X = {x_1, x_2, ...} with a strongly σ-additive measure m forms the following metric spaces of measurable multisets (p ≥ 1 is an integer) [4, 6, 7, 27]:

Z_1p = (S_σ(Z), d_Z1p) with the function d_Z1p(A, B) = [m(A Δ B)]^{1/p},               (9.43)

Z_2p = (S_σ(Z), d_Z2p) with the function d_Z2p(A, B) = [m(A Δ B)/m(Z)]^{1/p},          (9.44)

Z_3p = (S_σ(Z), d_Z3p) with the function d_Z3p(A, B) = [m(A Δ B)/m(A ∪ B)]^{1/p},      (9.45)

Z_4p = (S_σ(Z), d_Z4p) with the function d_Z4p(A, B) = [m(A Δ B)/m(A + B)]^{1/p}.      (9.46)

The functions d_Z3p and d_Z4p are not defined for A = B = ∅; therefore it is assumed by definition that d_Z3p(∅, ∅) = d_Z4p(∅, ∅) = 0. The function d_Z1p gives the mapping d_Z1p: S_σ × S_σ → R_+, and the functions d_Z2p, d_Z3p, d_Z4p give the mapping d_Zqp: S_σ × S_σ → R_01 = [0, 1], q = 2, 3, 4.


The functions d_Zqp(A, B), q = 1, 2, 3, are pseudometrics satisfying the axiom of symmetry (1°), the triangle inequality (3°) and the coincidence condition (4°). The function d_Z4p(A, B) is a quasimetric satisfying the axiom of symmetry (1°) and the coincidence condition (4°). We shall call the pseudometric d_Z1p(A, B) on the space of measurable multisets (S_σ(Z), m) general or basic, the pseudometric d_Z2p(A, B) completely averaged, the pseudometric d_Z3p(A, B) locally averaged, and the quasimetric d_Z4p(A, B) averaged. The general pseudometric d_Z1p(A, B) marks the proximity of two multisets A and B in the original space. The completely averaged pseudometric d_Z2p(A, B) marks the proximity of two multisets A and B reduced to the maximal possible distance in the original space. The locally averaged pseudometric d_Z3p(A, B) marks the proximity of two multisets A and B reduced to the joint "common part" of these two multisets in the original space. The averaged quasimetric d_Z4p(A, B) marks the proximity of two multisets A and B reduced to the largest possible "common part" of these two multisets in the original space [7, 27].

Let us use relations (9.6), (9.8), (9.10) for the multiplicity functions of the union A ∪ B, the sum A + B and the symmetric difference A Δ B of multisets, and formula (9.34) for a multiset measure m(A), and represent expressions (9.43)–(9.46) for the pseudometrics and quasimetric on the spaces Z_qp = (S_σ(Z), d_Zqp) of measurable multisets as follows:

    d_Z1p(A, B) = [Σ_i w_i |k_A(x_i) − k_B(x_i)|]^{1/p},                                       (9.47)

    d_Z2p(A, B) = [Σ_i w_i |k_A(x_i) − k_B(x_i)| / Σ_i w_i k_Z(x_i)]^{1/p},                    (9.48)

    d_Z3p(A, B) = [Σ_i w_i |k_A(x_i) − k_B(x_i)| / Σ_i w_i max(k_A(x_i), k_B(x_i))]^{1/p},     (9.49)

    d_Z4p(A, B) = [Σ_i w_i |k_A(x_i) − k_B(x_i)| / Σ_i w_i (k_A(x_i) + k_B(x_i))]^{1/p},       (9.50)

where the domain X = {x_1, x_2, ...} is a finite or countable set, respectively. The general pseudometric d_Z1p(A, B) = [m(A Δ B)]^{1/p} and the completely averaged pseudometric d_Z2p(A, B) = [m(A Δ B)/m(Z)]^{1/p} are continuous, uniformly continuous and equicontinuous functions; the locally averaged pseudometric d_Z3p(A, B) = [m(A Δ B)/m(A ∪ B)]^{1/p} and the averaged quasimetric d_Z4p(A, B) = [m(A Δ B)/m(A + B)]^{1/p} are piecewise continuous functions of their variables almost everywhere on the corresponding space for any number p.
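For readers who prefer executable notation, the following Python sketch (illustrative only, not the author's code; the function name and sample data are hypothetical) evaluates the weighted forms (9.47)–(9.50) for two multisets given as collections.Counter objects, with the maximal multiset Z and the weights w_i supplied explicitly.

```python
from collections import Counter

def measurable_multiset_distances(A, B, Z, w=None, p=1):
    """Return d_Z1p, d_Z2p, d_Z3p, d_Z4p of (9.47)-(9.50) as a dict."""
    support = set(Z) | set(A) | set(B)
    w = w or {}
    def wt(x):
        return w.get(x, 1.0)
    num   = sum(wt(x) * abs(A[x] - B[x]) for x in support)          # m(A Δ B)
    m_Z   = sum(wt(x) * Z[x] for x in support)                      # m(Z)
    m_cup = sum(wt(x) * max(A[x], B[x]) for x in support)           # m(A ∪ B)
    m_sum = sum(wt(x) * (A[x] + B[x]) for x in support)             # m(A + B)
    root = 1.0 / p
    return {
        "d_Z1p": num ** root,
        "d_Z2p": (num / m_Z) ** root,
        "d_Z3p": (num / m_cup) ** root if m_cup else 0.0,           # d(∅, ∅) = 0 by definition
        "d_Z4p": (num / m_sum) ** root if m_sum else 0.0,
    }

A = Counter({"a": 2, "b": 1})
B = Counter({"a": 1, "c": 2})
Z = Counter({"a": 3, "b": 2, "c": 2})   # maximal multiset containing A and B
print(measurable_multiset_distances(A, B, Z, p=1))
# num = 1 + 1 + 2 = 4, m(Z) = 7, m(A ∪ B) = 5, m(A + B) = 6
# -> d_Z1p = 4.0, d_Z2p = 4/7 ≈ 0.571, d_Z3p = 0.8, d_Z4p = 4/6 ≈ 0.667
```

With unit weights and p = 1, the value d_Z1p above is simply |A Δ B|, in line with the remark on the Hamming-type metrics that follows.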


The pseudometrics d_Zqp(A, B), q = 1, 2, 3, and the quasimetric d_Z4p(A, B) are not similar to any metrics of other metric spaces, in particular, the Minkowski-type metrics (9.37), (9.41). For w_i = 1 for any i in formula (9.34), the general pseudometric d_Z11(A, B) = |A Δ B| coincides with the Hamming-type metrics (9.35), (9.39).

In general, the equality A = B does not follow from the condition m(A Δ B) = 0. For multisets that differ by a multiset of zero measure, the condition m(A Δ B) = 0 implies the so-called m-equalities of such multisets: A =_m B, A_1 ∪ A_2 =_m B_1 ∪ B_2, A_1 ∩ A_2 =_m B_1 ∩ B_2, which are valid almost everywhere on the σ-algebra S_σ(Z) of measurable multisets. Then the axiom of identity (2°) holds for the functions d_Zqp(A, B), q = 1, 2, 3, 4. The functions d_Z1p, d_Z2p, d_Z3p become metrics, and the function d_Z4p becomes a symmetric almost everywhere on the corresponding space of measurable multisets. The metric spaces Z_1p, Z_2p of measurable multisets are homeomorphic, complete and separable. The spaces Z_11 and Z_21 are metrically convex and isometric to the space l_1 of bounded numerical sequences. The space Z_31 is metrically convex.

If we replace a multiplicity function k_A(x) by a characteristic function χ_A(x), then the metric spaces of finite, bounded and measurable multisets turn into the corresponding metric spaces of sets, which have many similar properties. For sets, the general metric d_X11(A, B) = m(A Δ B) is called the Fréchet–Nikodym–Aronszajn distance, and d_X11(A, B) = |A Δ B| the metric of measure. The locally averaged metric d_X31(A, B) = m(A Δ B)/m(A ∪ B) is called the Steinhaus distance, and the metric d_X31(A, B) = |A Δ B|/|A ∪ B| is called the biotopic distance [21, 22, 24, 29]. The spaces of measurable sets and measurable multisets with the Petrovsky metrics (9.43)–(9.50) were first introduced by the author [13, 14, 25]. The proofs of the above statements are given in the books [3, 4, 6, 7].
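To see the set special case concretely, here is a brief Python check (illustrative code, not from the book) that for 0/1 multiplicities the locally averaged distance d_Z31 with unit weights reduces to the biotopic distance |A Δ B|/|A ∪ B| on ordinary sets.

```python
from collections import Counter

def d_Z31(A, B):
    """Locally averaged distance (9.49) with unit weights, p = 1."""
    support = set(A) | set(B)
    num = sum(abs(A[x] - B[x]) for x in support)
    den = sum(max(A[x], B[x]) for x in support)
    return num / den if den else 0.0

def biotopic(S, T):
    """Biotopic (Steinhaus-type) distance |S Δ T| / |S ∪ T| for ordinary sets."""
    return len(S ^ T) / len(S | T) if S | T else 0.0

S, T = {"a", "b", "c"}, {"b", "c", "d", "e"}
A, B = Counter(S), Counter(T)          # sets viewed as multisets with 0/1 multiplicities
print(biotopic(S, T), d_Z31(A, B))     # both 3/5 = 0.6
```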

References

1. Blizard, W.D.: Multiset theory. Notre Dame J. Formal Logic 30, 36–65 (1989)
2. Knuth, D.E.: The art of computer programming. Volume 2. Seminumerical algorithms. Addison-Wesley Professional, Reading (1998)
3. Petrovsky, A.B.: Osnovnye ponyatiya teorii mul'timnozhestv (Basic concepts of multiset theory). Editorial URSS, Moscow (2002) (in Russian)
4. Petrovsky, A.B.: Prostranstva mnozhestv i mul'timnozhestv (Spaces of sets and multisets). Editorial URSS, Moscow (2003) (in Russian)
5. Petrovsky, A.B.: Operations with multisets. Dokl. Math. 67(2), 296–299 (2003)
6. Petrovsky, A.B.: Prostranstva izmerimykh mnozhestv i mul'timnozhestv (Spaces of measurable sets and multisets). Poly Print Service, Moscow (2016) (in Russian)
7. Petrovsky, A.B.: Teoriya izmerimykh mnozhestv i mul'timnozhestv (Theory of measurable sets and multisets). Nauka, Moscow (2018) (in Russian)
8. Angelelli, I.: Leibniz's misunderstanding of Nizolius' notion of 'multitudo'. Notre Dame J. Formal Logic 6, 319–322 (1965)
9. Bourbaki, N.: Eléments de mathématiques. Livre I. Théorie des ensembles. Hermann, Paris (1957)
10. Bourbaki, N.: Eléments de mathématiques. Livre III. Topologie générale. Hermann, Paris (1964)
11. Hailperin, T.: Boole's logic and probability. North-Holland, Amsterdam (1986)
12. Brink, C.: On Peirce's notation for the logic of relatives. Transactions of the Charles S. Peirce Soc. 14, 285–304 (1978)


13. Petrovsky, A.B.: An axiomatic approach to metrization of multiset space. In: Multiple criteria decision making, pp. 129–140. Springer-Verlag, New York (1994)
14. Petrovsky, A.B.: Metricheskie prostranstva mul'timnozhestv (Metric spaces of multisets). Doklady Akademii nauk (Reports of the Academy of Sciences) 344(2), 175–177 (1995) (in Russian)
15. Sukhodolskiy, G.V.: Matematicheskie metody psikhologii (Mathematical methods of psychology). Publishing House of Saint Petersburg University, Saint Petersburg (2003) (in Russian)
16. Petrovsky, A.B.: Combinatorics of multisets. Dokl. Math. 61(1), 151–154 (2000)
17. Singh, D., Ibrahim, A.M., Yohana, T., Singh, J.N.: An overview of the applications of multisets. Novi Sad J. Math. 37(2), 73–92 (2007)
18. Faure, R., Kaufman, A., Denis-Papin, M.: Mathématiques nouvelles. Dunod, Paris (1964)
19. Harary, F.: Graph theory. Addison-Wesley Publishing Company, Reading (1969)
20. Kolmogorov, A.N., Fomin, S.V.: Elementy teorii funktsiy i funktsional'nogo analiza (Elements of theory of functions and functional analysis). Nauka, Moscow (1968) (in Russian)
21. Deza, M.M., Deza, E.: Encyclopedia of distances. Springer-Verlag, Berlin (2009)
22. Deza, M.M., Laurent, M.: Geometry of cuts and metrics. Springer-Verlag, Berlin (1997)
23. Lyusternik, L.A., Sobolev, V.I.: Elementy funktsional'nogo analiza (Elements of functional analysis). Nauka, Moscow (1965) (in Russian)
24. O'Searcoid, M.: Metric spaces. Springer-Verlag, London (2009)
25. Petrovsky, A.B.: Aksiomaticheskiy podkhod k metrizatsii prostranstva mnozhestv (An axiomatic approach to metrization of set space). Deposited manuscript № 999–82. VINITI (All-Union Institute of Scientific and Technical Information), Moscow (1982) (in Russian)
26. Petrovsky, A.B.: Novye klassy metricheskikh prostranstv izmerimykh mnozhestv i mul'timnozhestv v klasternom analize (New classes of metric spaces of measurable sets and multisets in cluster analysis). In: Metody podderzhki prinyatiya resheniy. Trudy Instituta sistemnogo analiza RAN (Methods of decision support. Proceedings of the Institute for System Analysis of the Russian Academy of Sciences), vol. 12, pp. 54–67. Editorial URSS, Moscow (2005) (in Russian)
27. Petrovsky, A.B.: Metrics in multiset spaces. J. Intell. Fuzzy Syst. 36(4), 3073–3085 (2019)
28. Kosters, W.A., Laros, J.F.J.: Metrics for multisets. In: Research and Development in Intelligent Systems XXIV: Proceedings of AI-2007, the Twenty-seventh SGAI international conference on innovative techniques and applications of artificial intelligence, pp. 294–303. Springer-Verlag Limited, London (2008)
29. Marczewski, E., Steinhaus, H.: On a certain distance of sets and the corresponding distance of functions. Colloq. Math. 6, 319–327 (1958)

Conclusion

In today's rapidly changing environment, preparing and making well-reasoned decisions is becoming an increasingly demanded professional activity. The complexity and connectivity of the choice problems being solved are growing, and the requirements for the quality of results are increasing. Under these conditions, the role of competent specialists, consultants and experts who know the specifics of the problem area is rising; they become necessary participants in decision making processes. When solving practical problems, decision makers need to apply those means from the existing arsenal that provide not only the required results but also their explanation and justification.

The theory of choice and decision making offers a variety of tools for finding the most preferable or acceptable options for solving problems. There are two opposing points of view on the role of formal methods in practical tasks. Some people, who are not professionally proficient in mathematical methods, often believe that any problem can be formally translated into mathematical language and then solved by its means. According to others, this approach is schematic and divorced from life. In situations of difficult decisions, there is always a lack of information. Some of the necessary information is often missing, and the available information may be insufficient or contradictory. Therefore, in principle, it is impossible to transform such choice problems completely into correctly formulated mathematical problems. At the same time, it is precisely the means included in decision making methods that allow one to find the additional information necessary to formalize a real problem situation and to bring it into a form suitable for applying mathematical methods and obtaining an acceptable result.

However, a responsible leader will not unconditionally accept the result of solving a problem obtained by some formal method. Reality is much more complicated than its simplified theoretical representation. An experienced leader or qualified specialist overcomes the lack of information with his knowledge, skill and intuition. The found result may prompt a manager to look differently at the problem being solved and suggest other possible options.


To avoid the "dictate" of a solution method, it is advisable to use the following technique. One and the same task is solved not by one method but by several different methods. Only if one of the results shows a certain repeatability and multiplicity can this result be considered formally 'correct'. It is precisely this approach to solving choice problems that forms the basis of group verbal decision analysis, a new scientific direction in decision making that uses the apparatus of multiset theory. In the developed methods, multi-attribute objects are represented by points of a metric space of measurable multisets, and the similarity and difference between them are assessed by a certain metric. Group verbal decision analysis makes possible the solution of new, rather difficult practical tasks of classification, ordering and selection of objects that are described by many numerical and verbal attributes, including in high-dimensional spaces, and may also be present in several copies with different, in particular conflicting, attribute values. These difficulties have both substantive grounds (for example, the incorrect application of 'averaging' to qualitative attributes) and formal reasons (for example, a high dimensionality of the problem).

The RAMPA (Ranking by Aggregated Multicriteria Pairwise comparisons of Alternatives) method is intended for group ordering of real objects without an explicit description of the objects by their attributes. The method allows one to obtain a collective ranking of multi-attribute objects without pre-construction of individual rankings.

The ARAMIS (Aggregation and Ranking Alternatives nearby Multi-attribute Ideal Situations) method is intended for group ordering of distinct copies of objects described by many verbal attributes. The method allows one to find a collective ranking of objects without pre-construction of individual rankings.

The CLAVA-HI (CLustering Alternatives with Verbal Attributes—HIerarchical) and CLAVA-NI (CLustering Alternatives with Verbal Attributes—Non-hIerarchical) methods are intended for group classification of distinct copies of objects with many verbal attributes. The methods allow one to generate object classes (clusters), the number of which is either fixed in advance or not fixed.

The MASKA (abbreviation of the Russian words Multi-Attribute Consistent Classification of Alternatives) method is intended for group classification of distinct copies of objects with many verbal attributes, taking into account inconsistent individual rules for sorting objects. The method allows one to find generalized collective classification rules that aggregate individual rules and specify classes of consistently and inconsistently sorted objects.

The HISCRA (HIerarchical Structuring CRiteria and Attributes), HISCRA-M (HIerarchical Structuring CRiteria and Attributes—Modified) and SOCRATES (ShOrtening CRiteria and ATtributES) methods are intended for aggregating numerical and/or verbal attributes of objects. The methods allow one to reduce the dimensionality of the space in which multi-attribute objects are represented as vectors/tuples or multisets of characteristics.


The PAKS (abbreviation of the Russian words Progressive Aggregation of Classified States) and PAKS-M (Progressive Aggregation of the Classified Situations by many Methods) technologies are intended for solving ill-structured tasks of individual and collective multicriteria choice in a high-dimensional space. The technologies provide a reduction of the dimensionality of the attribute space; the construction of several hierarchical schemes of composite criteria and an integral quality index with verbal scales, which aggregate the initial numerical and/or verbal attributes; and, finally, the ordering and/or classification of multi-attribute objects using one or many different decision making methods.

The new tools expand the possibilities for solving new, previously unresolved tasks and allow well-known traditional tasks to be solved in simpler and more effective ways. Let us underline that the new methods for group ordering and classification of multi-attribute objects are unique and have no analogues. These methods operate with objects that are described by many numerical, symbolic and/or verbal attributes and exist in several different copies. In these methods, verbal attributes are not transformed into or replaced by any numerical ones as, for instance, in MAUT, TOPSIS or fuzzy methods. These methods take into account the diversity and contradictions in the knowledge of many experts and/or the preferences of decision makers without requiring consistency of judgments. The methods do not need the development of special software and are easily implemented with standard spreadsheet packages such as Microsoft Excel.

Verbal decision analysis is well known all over the world and is successfully used in practice. With the help of the developed methods and technologies, many practical tasks have been solved: analysis of science policy options, evaluation of the topicality and priority of scientific directions and problems, formation of a scientific and technical program, competition of projects in a scientific foundation, assessment of research results, selection of a prospective computing complex, and evaluation of the effectiveness of organization activities.

In conclusion, I should like to give one demonstrative and instructive case that illustrates how responsible and skilled leaders take into account in practice the results of solving a choice task obtained by certain formal methods. In 1987–1988, under the guidance of the USSR Academy of Sciences, the State scientific and technical Program on high-temperature superconductivity was formed. The Head of the Program and Chairman of the Inter-agency Scientific Council was the President of the Academy, academician G.I. Marchuk. Within the Council, competition commissions were created for individual sections of the Program; they included well-known scientists and highly qualified specialists. Members of the commissions carried out a multicriteria expert assessment of applications for research and made recommendations on the inclusion of competition projects in the Program. Then, based on the expertise results and using peer review criteria, the Vice-President of the USSR Academy and Deputy Chairman of the Council, academician Yu.A. Osipyan, formulated various decision rules for the selection of applications. Together with consultants, he analyzed the application selections obtained in these diverse ways. Generally, the results of the expertise by the Program sections coincided with the results according to any formulated decision rule. At the same time, among the applications rejected by the competition commissions, we discovered projects satisfying the decision rule. And among the supported projects, there were some not satisfying the rule. These facts were reported to the leaders of the Scientific Council.


Summarizing the results of the competition, the Head of the Program proposed to accept the recommendations of the competition commissions on the whole, and also to include in the Program both kinds of projects mentioned above, with the following explanation. Projects of the first kind should be accepted because they are deserving works with high scores on the established selection criteria. Projects of the second kind should be accepted because they are important for the country and in need of additional support. In fact, relying on the formal results, the leader formulated a new pragmatic, reasonable rule of choice and substantiated it very convincingly. After discussing this proposal, the Scientific Council agreed with the Chairman's arguments.