Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems: Inversion, Displacement, Asymmetry (International Series in Operations Research & Management Science, 348) 3031338367, 9783031338366

This book presents a systematic review of multidimensional normalization methods and addresses problems frequently encountered…


English Pages 321 [314] Year 2023


Table of contents :
Preface
Contents
About the Author
List of Abbreviations
List of Figures
List of Tables
Chapter 1: Introduction
1.1 The Problem of Multi-criteria Decision-Making
1.2 Multidimensional Normalization in the Context of Decision Problems
References
Chapter 2: The MCDM Rank Model
2.1 MCDM Rank Model
2.2 The Target Value of Attributes
2.3 Significance of Criteria: Multivariate Assessment
2.3.1 Subjective Weighting Methods: Pairwise Comparisons and AHP Process
2.3.2 Subjective Weighting Methods: Best-Worst Method
2.3.3 Objective Weighting Methods: Entropy, CRITIC, SD
Entropy Weighting Method (EWM) [26, 27, 37]
CRiteria Importance Through Inter-criteria Correlation (CRITIC) [28]
Standard Deviation (SD)
2.4 Aggregation of the Attributes: An Overview of Some Methods
2.4.1 Value Measurement Methods
Simple Additive Weighting (SAW) or Weighted Sum Method (WSM) [1]
Weighted Product Method (WPM) [39]
Weighted Aggregated Sum Product Assessment (WASPAS) [39]
Multi-Attributive Border Approximation Area Comparison (MABAC) [45]
Complex Proportional Assessment (COPRAS) Method [46]
2.4.2 Goal or Reference Level Models
Distance Metric
Reference Point (RP) Method [47]
COmbinative Distance-based ASsessment (CODAS)
Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [1]
VIsekriterijumsko KOmpromisno Rangiranje (VIKOR) [40]
Gray Relation Analysis (GRA) [49, 50]
2.4.3 Outranking Techniques
Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) [41]
Organisation, Rangement Et Synthèse de données relationnelles (ORESTE) [43, 44]
2.4.4 Rank Reversal Problem
2.4.5 Distinguishability of the Performance Indicator of Alternatives
2.5 Design of the MCDM Model
2.6 Conclusions
References
Chapter 3: Normalization and MCDM Rank Model
3.1 General Principles for Normalizing Multidimensional Data
3.1.1 Preserving the Ordering Values of Attributes
3.1.2 Scale Invariance of Normalized Values of Attributes
3.1.3 Principle of Additive Significance of Attributes
3.1.4 Interpretation of Normalized Values of Attributes
3.2 Linear Multivariate Normalization Methods
3.2.1 How Is the Shift Factor Determined?
3.2.2 How Is Scaling Determined?
3.2.3 Disadvantages of Data Standardization
3.3 Asymmetry in the Distribution of Features
3.3.1 Measures of Asymmetry
3.4 The Outlier Detection
3.5 Non-linear Normalization: General Principles
3.6 Target Inversion in Multivariate Normalization
3.7 Isotropy of Scales of Normalized Values
3.8 Impact of the Choice of Normalization Method on the Rating
3.9 Conclusions
References
Chapter 4: Linear Methods for Multivariate Normalization
4.1 Basic Linear Methods for Multivariate Normalization
4.2 Scaling Factor Ratios
4.3 Invariant Properties of Linear Normalization Methods
4.3.1 Invariance of the Dispositions of Alternatives
4.3.2 Isotropic of Scaling: Invariance of Rating
4.3.3 Invariants of Numerical Characteristics of the Sample
4.4 Re-normalization
4.4.1 Invariant Re-normalization Properties for Linear Methods
4.5 Meaningful Interpretation of Linear Scales
4.6 Some Features of Individual Linear Normalization Methods
4.6.1 Max-method of Normalization
4.6.2 The Displacement of Normalized Values in Domains for the Sum and Vec Methods
4.6.3 Loss of Contribution to the Performance Indicator in the Max-Min Method
4.6.4 dSum Method of Normalization
4.6.5 Z-score Method of Normalization
4.6.6 mIQR Method of Normalization
4.6.7 mMAD-Method of Normalization
4.7 Conclusions
References
Chapter 5: Inversion of Normalized Values: ReS-Algorithm
5.1 Optimization Goal Inversion
5.2 Permissible Pairs of Transformations to the Benefit and Cost Criteria
5.3 Overview of Inverse Transforms and Compliance with Multidimensional Data Normalization Requirements
5.3.1 Max Method of Normalization
5.3.2 Sum Method of Normalization
5.3.3 Vec Method of Normalization
5.3.4 Max-Min Method of Normalization
5.3.5 dSum Method of Normalization
5.3.6 Z-score Method of Normalization
5.4 Universal Goal Inversion Algorithm: ReS-Algorithm
5.4.1 Reverse Sorting Algorithm
5.4.2 ReS-Algorithm
5.4.3 Basic Properties of the ReS-Algorithm
5.5 Conclusions
References
Chapter 6: Rank Reversal in MCDM Models: Contribution of the Normalization
6.1 Main Factors Determining Rank Reversal in MCDM Problems
6.2 Relative Preference for Different Normalizations
6.3 Assessing the Contribution of an Individual Attribute to the Performance Indicator of an Alternative
6.4 Rank Reversal Due to Normalization
6.5 Conclusions
References
Chapter 7: Coordination of Scales of Normalized Values: IZ-Method
7.1 Ratio of Feature Scales
7.2 The Domains Displacement of Normalized Values of Various Attributes
7.3 Attribute Equalizer
7.3.1 Transformation of Normalized Values Using Fixed Point Technique
7.4 Elimination of Displacement in the Domains of Normalized Values: IZ-Method
7.5 Choice of Conditionally General Scale [I, Z] Normalized Values
7.6 Invariant Properties of the IZ-Method
7.7 Generalization of the IZ-Method
7.8 IZ Transformation for Non-linear Aggregation Methods: Example for COPRAS, WPM, and WASPAS Methods
7.9 Conclusions
References
Chapter 8: MS-Transformation of Z-Score
8.1 Standardized Scoring
8.2 MS-Transformation of Z-Score
8.3 Selecting a Conditionally Common Scale [I, Z] for MS-Transformation
8.4 Invariant Properties of MS-Transformation
8.5 MS-Transformations for Non-linear Aggregation Methods: Example for WPM and WASPAS Methods
8.6 Conclusions
References
Chapter 9: Non-linear Multivariate Normalization Methods
9.1 Non-linear Data Transformation as a Way to Eliminate Asymmetry in the Distribution of Features
9.2 Non-linear Data Pre-processing Procedures. Transition to the Non-linear Scales
9.3 Transformation of Normalized Data: Post-processing of Data
9.3.1 Post-processing with Max-Min Normalization
9.3.2 Post-processing with Z-Score Normalization
9.3.3 Weighted Product Model and Post-processing of Normalized Values
9.4 Inversion of Normalized Values and Matching the Areas of Normalized Values of Different Criteria
9.5 Numerical Example of Data Pre-processing
9.6 Numerical Example of Data Post-processing
9.7 Conclusions
References
Chapter 10: Normalization for the Case "Nominal Value the Best"
10.1 Target Criteria and Target-Based Normalization
10.2 Review of Target Normalization Methods
10.3 Generalization of Normalization Methods of Target Criteria for Linear Case
10.4 Comparative Normalization of Target Criteria Using Linear Methods
10.5 Normalization of Target Criteria: Non-linear Methods: Concept of Harrington's Desirability Function
10.5.1 One-Sided DF for LTB and STB Criteria
10.5.2 Two-Sided DF for the NTB Criteria
10.5.3 Consistent DF-Normalization for LTB, STB, and NTB Criteria
10.5.4 The Desirability Function: Power Form
10.5.5 The Desirability Function: Gaussian Form
10.6 Conclusions
References
Chapter 11: Comparative Results of Ranking of Alternatives Using Different Normalization Methods: Computational Experiment
11.1 Methodology of Computational Experiment
11.2 Normalization Methods
11.3 A Decision Matrix Generation with High Sensitivity of Rank to the Normalization Methods
11.4 Graphical Illustration of Normalized Values
11.5 Results of Ranking of Alternatives for Decision Matrix D0
11.5.1 Borda Voting Principles
11.5.2 Distinguishability of Ratings
11.6 Results of Ranking of Alternatives for Decision Matrix D1
11.6.1 Borda Count
11.6.2 Distinguishability of Ratings
11.7 Conclusions
References
Chapter 12: Significant Difference of the Performance Indicator of Alternatives
12.1 Relative Difference in the Performance Indicator of Alternatives
12.2 Ranking Algorithm Using Distinguishability Criteria
12.3 Numerical Example of the Ranking of Alternatives, Taking into Account the Criterion of Distinguishability
12.4 Assessing the Significance of the Difference in the Ratings of Alternatives in the VIKOR Method
12.5 Evaluation of the Distinguishability of the Rating When the Decision Matrix Is Varied
12.6 Statistics of the Performance Indicator of Alternatives When Varying the Decision Matrix
12.6.1 Statistical Experiment
12.6.2 Distribution of the Performance Indicator of Alternatives
12.7 Ranking Alternatives Based on Simple Comparison of the Rating
12.8 Evaluation of the Criterion Value of the Performance Indicator Based on the Error in the Evaluation of the Decision Matrix
12.9 Conclusions
References
Conclusion
Appendix: Program Code "Normalization of Multidimensional Data" for MatLab System

International Series in Operations Research & Management Science

Irik Z. Mukhametzyanov

Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems Inversion, Displacement, Asymmetry

International Series in Operations Research & Management Science Founding Editor Frederick S. Hillier, Stanford University, Stanford, CA, USA

Volume 348 Series Editor Camille C. Price, Department of Computer Science, Stephen F. Austin State University, Nacogdoches, TX, USA Editorial Board Members Emanuele Borgonovo, Department of Decision Sciences, Bocconi University, Milan, Italy Barry L. Nelson, Department of Industrial Engineering & Management Sciences, Northwestern University, Evanston, IL, USA Bruce W. Patty, Veritec Solutions, Mill Valley, CA, USA Michael Pinedo, Stern School of Business, New York University, New York, NY, USA Robert J. Vanderbei, Princeton University, Princeton, NJ, USA Associate Editor Joe Zhu, Foisie Business School, Worcester Polytechnic Institute, Worcester, MA, USA

The book series International Series in Operations Research and Management Science encompasses the various areas of operations research and management science. Both theoretical and applied books are included. It describes current advances anywhere in the world that are at the cutting edge of the field. The series is aimed especially at researchers, advanced graduate students, and sophisticated practitioners. The series features three types of books:
• Advanced expository books that extend and unify our understanding of particular areas.
• Research monographs that make substantial contributions to knowledge.
• Handbooks that define the new state of the art in particular areas. Each handbook will be edited by a leading authority in the area who will organize a team of experts on various aspects of the topic to write individual chapters. A handbook may emphasize expository surveys or completely new advances (either research or applications) or a combination of both.
The series emphasizes the following four areas:
Mathematical Programming: Including linear programming, integer programming, nonlinear programming, interior point methods, game theory, network optimization models, combinatorics, equilibrium programming, complementarity theory, multiobjective optimization, dynamic programming, stochastic programming, complexity theory, etc.
Applied Probability: Including queuing theory, simulation, renewal theory, Brownian motion and diffusion processes, decision analysis, Markov decision processes, reliability theory, forecasting, other stochastic processes motivated by applications, etc.
Production and Operations Management: Including inventory theory, production scheduling, capacity planning, facility location, supply chain management, distribution systems, materials requirements planning, just-in-time systems, flexible manufacturing systems, design of production lines, logistical planning, strategic issues, etc.
Applications of Operations Research and Management Science: Including telecommunications, health care, capital budgeting and finance, economics, marketing, public policy, military operations research, humanitarian relief and disaster mitigation, service operations, transportation systems, etc.
This book series is indexed in Scopus.


Irik Z. Mukhametzyanov Department of Information Technologies and Applied Mathematics Ufa State Petroleum Technological University Ufa, Russia

ISSN 0884-8289 ISSN 2214-7934 (electronic) International Series in Operations Research & Management Science ISBN 978-3-031-33836-6 ISBN 978-3-031-33837-3 (eBook) https://doi.org/10.1007/978-3-031-33837-3 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Dedicated to Margaret
To my faithful life partner

Preface

The research subject of this monograph is, on the one hand, rather narrow: the normalization of multidimensional data. On the other hand, it is wide, since normalization is a necessary and important part of multidimensional analysis. From the standpoint of applied problems the subject is wider still, because natural, technical, socio-economic, and other objects and phenomena are characterized by a large number of different properties. The key goal of normalization is to bring data expressed in different units of measurement and with different ranges of values to a single form that allows them to be compared with each other or used to calculate the similarity of objects. Two factors motivated me to write this book. First, in the vast majority of studies on multivariate analysis, the features of different types of data normalization and the reasons for their use are either not considered at all or are mentioned only in passing, without disclosing their essence. Second, individual normalization methods are often used "blindly"; on closer examination it turns out that some features have unconsciously been placed in a privileged position and influence the result much more strongly than others. In this book, the problems of multidimensional normalization, their solution in the form of new algorithms and methods, and numerous applied examples are presented in the context of multi-criteria decision-making, the area of the author's scientific interest over the past 5 years. The stated approaches transfer equally to other branches of multivariate analysis, for example multivariate classification and cluster analysis. The need to normalize data samples follows from the nature of the data processing algorithms used.
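The linear normalization methods surveyed in Chaps. 3 and 4 (Max, Sum, Vec, Max-Min, Z-score) can be illustrated compactly. The book's appendix implements them as MatLab m-files; the following is only a minimal Python sketch using the standard column-wise formulas from the MCDM literature, with illustrative names:

```python
def normalize(column, method="max"):
    """Column-wise linear normalization of one benefit attribute.

    Standard textbook formulas; `column` is a single criterion
    (one column of the decision matrix).
    """
    n = len(column)
    if method == "max":        # r = x / max(x)
        m = max(column)
        return [x / m for x in column]
    if method == "sum":        # r = x / sum(x)
        s = sum(column)
        return [x / s for x in column]
    if method == "vec":        # r = x / ||x||_2
        norm = sum(x * x for x in column) ** 0.5
        return [x / norm for x in column]
    if method == "maxmin":     # r = (x - min) / (max - min)
        lo, hi = min(column), max(column)
        return [(x - lo) / (hi - lo) for x in column]
    if method == "zscore":     # r = (x - mean) / std (population std)
        mean = sum(column) / n
        std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
        return [(x - mean) / std for x in column]
    raise ValueError(f"unknown method: {method}")
```

Applied to the same column, these methods produce normalized domains of different position and width; that displacement of domains across attributes is exactly the issue taken up in Chaps. 4 and 7.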
In multi-criteria decision-making problems, comparing alternatives requires an integral performance indicator obtained by transforming (reducing) a subset of individual indicators. At the same time, it is necessary to exclude any priority in the contribution of individual indicators that arises from the peculiarities of the measurement scales and the distribution of the data. This is the purpose of the multivariate normalization procedure: all features must be equal in their possible influence. This is not a question of the importance of a feature, whose value is established by separate weighting procedures. The main problems of multidimensional normalization are related to the structure of the empirical data on the object and are reflected in the book's subtitle: the adequate aggregation of benefit and cost attributes (inversion), and the adequate aggregation of attributes when their domains are displaced relative to each other or when the values within a domain are asymmetrically distributed. On the basis of linear transformations of well-known normalization methods, the author offers original solutions to these problems in the form of three new methods: the ReS-algorithm, the IZ-method, and the MS-method. The solution, as often happens in scientific research, arose suddenly and completely (April 17, 2018). Of course, it was preceded by a lengthy search for an answer to the question of which normalization formula is preferable: the most popular linear "minimax" normalization, or one that focuses on typical rather than extreme values, i.e. on the statistical characteristics of the data. While the ReS-algorithm is the best in the class of inverse transformations of cost attributes into benefit attributes and vice versa, the correctness of multivariate normalization procedures that allow compression and shift of the data depends mostly on the task and the source data. Aligning the boundaries of the domains of normalized values (IZ-method) and equalizing the average values of different attributes (MS-method) eliminate the priority of individual attributes in many cases. The author therefore considers them good and, in many cases, suitable variants of multivariate normalization.
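The inversion of cost attributes can be sketched from the reverse-sorting idea named in Sects. 5.4.1 and 5.4.2. This is an illustrative Python sketch, not the book's MatLab code, under the assumption that the inversion keeps the same multiset of normalized values and only reverses their assignment to alternatives (the alternative holding the k-th smallest value receives the k-th largest); the function name is hypothetical:

```python
def res_invert(r):
    """Reverse-sorting inversion of a normalized cost column.

    Assumption: inversion preserves the sample of normalized
    values and reverses only their order, so the column's min,
    max, mean, and spread are unchanged.
    """
    asc = sorted(r)
    # ascending rank of each element (ties broken by position)
    order = sorted(range(len(r)), key=lambda i: r[i])
    inverted = [0.0] * len(r)
    for rank, i in enumerate(order):
        inverted[i] = asc[len(r) - 1 - rank]
    return inverted
```

Contrast this with the simple 1 - r inversion, which in general shifts the domain of the column; here the set of values itself is untouched.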
Equally important are the invariant properties of linear normalization methods formulated by the author. Their correct use often resolves simple problems and avoids apparent errors. The book gives extensive information about what difficulties may arise in multidimensional normalization, when to expect such problems, and how to solve them. It may be of value to researchers and specialists in the field of multivariate analysis and decision-making. As a bonus, the appendix contains m-files for the MatLab system that implement the multidimensional normalization methods presented in the book. The application program produces numerical and graphical results for the most popular normalization methods, including the new ones: the ReS-algorithm and the IZ- and MS-methods. I hope that readers will be satisfied with this book and find in it new and fruitful ideas for scientific research. Ufa, Russia January 17, 2021

Irik Z. Mukhametzyanov
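The two other methods named in the preface, the IZ-method (Chap. 7) and the MS-transformation (Chap. 8), can be sketched in the same spirit. This is an illustrative Python sketch, not the book's MatLab code, under the assumption that the IZ-method linearly remaps each attribute's normalized domain onto a conditionally common interval [I, Z], and that the MS-transformation shifts and scales z-scores toward a common mean and spread; the names and default values are hypothetical:

```python
def iz_rescale(r, I=0.0, Z=1.0):
    """Map one normalized column onto a common scale [I, Z].

    Assumption: domain alignment is a linear remap of the
    column's own domain [min(r), max(r)] onto [I, Z], so the
    boundaries of all attributes coincide after transformation.
    """
    lo, hi = min(r), max(r)
    return [I + (Z - I) * (x - lo) / (hi - lo) for x in r]

def ms_rescale(z, M=0.5, S=0.1):
    """Shift and scale a z-scored column to mean M and spread S.

    Assumption: standardized scoring of the form M + S*z, so all
    attributes share a common mean after transformation.
    """
    return [M + S * x for x in z]
```

Both transformations eliminate the between-attribute displacement that a plain normalization can leave behind: iz_rescale equalizes the domain boundaries, ms_rescale equalizes the means.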


xiii

222 222 227 228 235 236 236 237 243 245 245 247 247 249 250 252 252 256 256 258 260 263 273 273

Conclusion  275
Appendix: Program Code "Normalization of Multidimensional Data" for the MatLab System  277

About the Author

Irik Z. Mukhametzyanov is a Doctor of Physical and Mathematical Sciences (a Russian academic degree) and Professor in the Department of Information Technology and Applied Mathematics at Ufa State Petroleum Technological University, Ufa, Bashkortostan, Russia. His doctoral thesis was "Structural organization of macromolecular associates in petroleum disperse systems" (Bashkir State University, 2004). He is the author of 4 books and more than 120 scientific papers devoted to the mathematical modeling of objects and processes in various systems. His current research interests include operational research and optimization in socio-economic systems, including decision support systems, multi-criteria decision models, fuzzy systems, and related fields. Irik Z. Mukhametzyanov is a member of the International Society for Multi-Criteria Decision Making and a member of the editorial boards of the international scientific journals "Decision Making: Applications in Management and Engineering" and "Operations Research and Engineering Letters." He also serves as an active reviewer for many reputable international journals.


List of Abbreviations

Decision-making problem
MADM   Multi-attribute decision-making
MCDA   Multiple-criteria decision analysis
MCDM   Multi-criteria decision-making
MODM   Multi-objective decision-making
MOORA  Multi-objective optimization by ratio analysis

Ranking method of aggregation
CODAS      COmbinative Distance-based ASsessment
COPRAS     Complex Proportional Assessment
ELECTRE    ELimination Et Choix Traduisant la REalité (in French)
GRA        Gray Relation Analysis
MABAC      Multi-Attributive Border Approximation area Comparison
MOORA      Multi-Objective Optimization by Ratio Analysis
RP         Reference Point
ORESTE     Organisation, Rangement Et Synthèse de Données Relationnelles (in French)
PROMETHEE  Preference Ranking Organization METHod for Enrichment Evaluations
SAW        Simple Additive Weighting
TOPSIS     Technique for Order of Preference by Similarity to Ideal Solution
VIKOR      VIsekriterijumsko KOmpromisno Rangiranje (in Serbian)
WASPAS     Weighted Aggregated Sum Product Assessment
WPM        Weighted Product Method
WSM        Weighted Sum Method

Weighting methods
AHP  Analytic Hierarchy Process
BWM  Best-Worst Method


CRITIC   CRiteria Importance Through Inter-criteria Correlation
DEMATEL  DEcision-MAking Trial and Evaluation Laboratory
EVM      Eigen Vector Method
EWM      Entropy Weighting Method
FUCOM    FUll COnsistency Method
SD       Standard Deviation method
SWARA    Stepwise Weight Assessment Ratio Analysis

Designation
Ai        Alternatives (objects), i = 1, …, m
Cj+, Cj−  Criteria or object properties, j = 1, …, n; (+) benefit, (−) cost
DM        Decision matrix [m × n]
aij       Elements of the decision matrix (DM)
āj        Average value of the jth criterion
rij       Normalized values of the elements of the decision matrix
r̄j        Average (normalized) value of the jth criterion
ajmax     Maximum element of criterion j
ajmin     Minimum element of criterion j
wj        Weight (importance) of the jth criterion
Qi        Performance indicator of the ith alternative (object)
dQi       Relative performance indicator

Normalization methods (linear)
dSum           $r_{ij} = 1 - \dfrac{a_j^{\max} - a_{ij}}{\sum_{i=1}^{m}\left(a_j^{\max} - a_{ij}\right)}$
IZ             IZ-method or IZ transform
Max            $r_{ij} = a_{ij} / a_j^{\max}$, where $a_j^{\max} = \max_i a_{ij}$
Max-Min        $r_{ij} = \dfrac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}}$
mIQR           $r_{ij} = \dfrac{a_{ij} - md_j}{IQR_j}$, where $md_j = \mathrm{median}_i(a_{ij})$ and $IQR_j$ is the interquartile range of the jth attribute
mMAD           $r_{ij} = \dfrac{a_{ij} - md_j}{s_j}$, where $md_j = \mathrm{median}_i(a_{ij})$ and $s_j = \frac{1}{m}\sum_{i=1}^{m}\left|a_{ij} - md_j\right|$ (median absolute deviation)
MS             MS-method or MS transform
ReS-algorithm  Reverse Sorting algorithm
Sum            $r_{ij} = a_{ij} \big/ \sum_{i=1}^{m}\left|a_{ij}\right|$
Vec            $r_{ij} = a_{ij} \big/ \left(\sum_{i=1}^{m} a_{ij}^2\right)^{0.5}$
Z, Z-score     $r_{ij} = \dfrac{a_{ij} - \bar{a}_j}{s_j}$, where $\bar{a}_j = \frac{1}{m}\sum_{i=1}^{m} a_{ij}$ and $s_j = \left(\frac{1}{m}\sum_{i=1}^{m}\left(a_{ij} - \bar{a}_j\right)^2\right)^{0.5}$
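The basic linear normalization methods listed above can be made concrete in code. The following is a minimal sketch in Python (the book's own appendix uses MATLAB); the function name `normalize_column` and the method labels are illustrative choices of mine, and the mIQR estimate uses the standard-library quantiles as an approximation of the interquartile range.

```python
import statistics

def normalize_column(a, method):
    """Normalize one attribute column a = [a_1j, ..., a_mj] with a basic linear method."""
    m = len(a)
    amax, amin = max(a), min(a)
    if method == "Max":            # r_ij = a_ij / a_j^max
        return [x / amax for x in a]
    if method == "Max-Min":        # r_ij = (a_ij - a_j^min) / (a_j^max - a_j^min)
        return [(x - amin) / (amax - amin) for x in a]
    if method == "Sum":            # r_ij = a_ij / sum_i |a_ij|
        s = sum(abs(x) for x in a)
        return [x / s for x in a]
    if method == "Vec":            # r_ij = a_ij / (sum_i a_ij^2)^0.5
        s = sum(x * x for x in a) ** 0.5
        return [x / s for x in a]
    if method == "dSum":           # r_ij = 1 - (a_j^max - a_ij) / sum_i (a_j^max - a_ij)
        s = sum(amax - x for x in a)
        return [1 - (amax - x) / s for x in a]
    if method == "Z":              # r_ij = (a_ij - mean_j) / s_j  (population std)
        mean = sum(a) / m
        s = (sum((x - mean) ** 2 for x in a) / m) ** 0.5
        return [(x - mean) / s for x in a]
    if method == "mIQR":           # r_ij = (a_ij - md_j) / IQR_j
        md = statistics.median(a)
        q = statistics.quantiles(a, n=4)   # q[2] - q[0] approximates the IQR
        return [(x - md) / (q[2] - q[0]) for x in a]
    if method == "mMAD":           # r_ij = (a_ij - md_j) / s_j, s_j = (1/m) sum_i |a_ij - md_j|
        md = statistics.median(a)
        s = sum(abs(x - md) for x in a) / m
        return [(x - md) / s for x in a]
    raise ValueError(f"unknown method: {method}")

# Example: the one-dimensional set used in Chapter 9's pre-processing example
a = [12, 16, 21, 65, 120]
print([round(r, 3) for r in normalize_column(a, "Max")])
```

Note that Max, Sum, and Vec preserve zero, while Max-Min, Z, mIQR, and mMAD shift the data; this difference is exactly what the book's discussion of displacement and inversion is about.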


Non-linear normalization methods
GBF      Gaussian-based function
NormCDF  Normal cumulative distribution function
PwL      Piecewise linear function
Sgm      Sigmoid function
SSp      S-shaped spline function
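Two of the non-linear functions abbreviated above can be illustrated briefly. This is a sketch in Python (not the book's MATLAB appendix), under the assumption that Sgm denotes the logistic sigmoid and PwL a clamp-and-rescale piecewise linear map; the book's exact parameterizations may differ.

```python
import math

def sgm(z):
    """Sigmoid (Sgm): logistic mapping of a centered value (e.g., a z-score) into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def pwl(x, lo, hi):
    """Piecewise linear (PwL): clamp x to [lo, hi], then rescale linearly onto [0, 1]."""
    x = min(max(x, lo), hi)
    return (x - lo) / (hi - lo)

print(sgm(0.0), pwl(5, 0, 10))
```

Both maps are strictly monotone on their working range, which is why they can be applied after a linear normalization without reordering the alternatives.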

Other abbreviations
DFs  Desirability Functions
IQR  The interquartile range
LTB  Larger-The-Better
MC   Median Couple or MedCouple
NIS  Negative Ideal Solution
NTB  Nominal-The-Best
PIS  Positive Ideal Solution
QMs  The Quality Measures
RRP  Rank Reversal Phenomenon
STB  Smaller-The-Better


List of Figures

Fig. 1.1  Illustration of anisotropic feature scaling in three dimensions for some set of 8 alternatives  6
Fig. 3.1  Normalization based on linear and non-linear data transformation for benefit (a) and cost (b) criteria  43
Fig. 3.2  Normalization based on piecewise linear and non-linear data transformation for target criteria for LTB (a) and STB (b) cases  44
Fig. 3.3  The displacement of domains of different criteria caused by the choice of a shift parameter aj*  49
Fig. 3.4  The displacement of domains of different criteria caused by the choice of scaling factor kj  51
Fig. 3.5  The displacement of domains of different criteria caused by the joint choice of the shift factor aj* and the scaling factor kj  53
Fig. 3.6  Data normalization (Vec method) with outlier identification (IQRa-method) and their subsequent processing  59
Fig. 3.7  Data normalization (Max-Min method) with outlier identification (IQRa-method) and their subsequent processing  60
Fig. 3.8  Non-linear transformation of normalized values using strictly monotonic functions on the interval [0, 1]  61
Fig. 3.9  Non-linear transformation of normalized values using the error function and the logistic function on a symmetrical, relative to zero, interval  62
Fig. 3.10  Isotropic (Max-Min) and anisotropic normalization for a problem with three criteria  64
Fig. 3.11  An illustration of the mutual arrangement of domains of normalized values and local priorities of alternatives of I–III ranks. A decision matrix for which the I-rank alternatives are the same for the 5 basic linear normalization methods. SAW method of aggregation  66

Fig. 3.12  Decision matrix for which the I-rank alternatives are different for the 5 basic linear normalization methods. An illustration of the mutual arrangement of domains of normalized values and local priorities of alternatives of I–III ranks. SAW method of aggregation  67
Fig. 4.1  Domains of normalized values for various linear normalization methods without displacement (aj* = 0). Initial data according to Table 4.2, second attribute  74
Fig. 4.2  Domains of normalized values for various linear normalization methods with displacement (aj* ≠ 0). Initial data according to Table 4.2, second attribute  75
Fig. 4.3  Normalized values and relative position of domains of five different attributes relative to each other for basic linear normalization methods. Initial data according to Table 4.2  76
Fig. 4.4  Normalized values and relative position of domains of 11 different attributes relative to each other for the problem of choosing the location of the logistics flows, D[8 × 11]. The location selection of tri-modal LC and logistical flows [12]  78
Fig. 4.5  Normalized values and relative position of domains of 13 different attributes relative to each other for the problem of rating banks, D[5 × 13]. A case of ranking Serbian banks [13]  78
Fig. 4.6  Normalized values and mutual arrangement of domains of 7 different attributes relative to each other for the problem of selecting components in the manufacture of products, D[8 × 7]. Flexible manufacturing system selection [4]  79
Fig. 4.7  Correspondence of dispositions of natural (ai2) and normalized values (ri2) for linear normalization methods. Normalization of the second attribute for 8 alternatives according to Table 4.2  82
Fig. 4.8  Correspondence of dispositions of natural (ai2) and normalized values (ri2) for linear normalization methods. Normalization of the second attribute for seven different alternatives according to Table 4.2  82
Fig. 5.1  Inversion iMax1 = 1 − r (5.1a in Table 5.1) for the Max normalization method. Initial data according to Table 2.1, third attribute  99
Fig. 5.2  Inversion iMax2 = rmin/r (5.1b in Table 5.1) for the Max normalization method. Initial data according to Table 2.1, third attribute  99
Fig. 5.3  Inversion iMax3 = Markovič (5.1c in Table 5.1) for the Max normalization method. Initial data according to Table 2.1, third attribute  100
Fig. 5.4  Inversion iSum (5.2 in Table 5.1) for the Sum normalization method. Initial data according to Table 2.1, third attribute  101
Fig. 5.5  Inversion iVec (5.3 in Table 5.1) for the Vec normalization method. Initial data according to Table 2.1, third attribute  101

Fig. 5.6  Inversion iMax-Min (5.4 in Table 5.1) for the Max-Min normalization method. Initial data according to Table 2.1, third attribute  102
Fig. 5.7  Inversion idSum (5.5 in Table 5.1) for the dSum normalization method. Initial data according to Table 2.1, third attribute  103
Fig. 5.8  Inversion iZ (5.5 in Table 5.1) for the Z-score normalization method. Initial data according to Table 2.1, third attribute  103
Fig. 5.9  Domain displacement and data compression for different pairs of normalization method–inversion method. Initial data according to Table 2.1, third attribute  104
Fig. 5.10  Graphical illustration of the Reverse Sorting algorithm  105
Fig. 5.11  Step-by-step illustration of the Reverse Sorting algorithm  106
Fig. 5.12  Comparative illustration of inverse transformations according to Table 5.1 and inversions using the ReS-algorithm. Initial data according to Table 2.1, third attribute  107
Fig. 6.1  Rank reversal during normalization due to local priorities of alternatives. Same weights. SAW method of aggregation  118
Fig. 6.2  The relative position of the domains of the normalized values and the decision matrix for which the I-rank alternatives are the same for the 5 main linear normalization methods. TOPSIS aggregation method. Equal criteria weights  121
Fig. 6.3  The relative position of the domains of normalized values and the ranking of alternatives for the decision matrix D0 using 30 "aggregation-normalization" methods. Weak sensitivity of the problem to local priorities of alternatives  122
Fig. 6.4  The relative position of the domains of the normalized values and the decision matrix for which the I-rank alternatives are different for the 5 basic linear normalization methods. TOPSIS aggregation method. Equal criteria weights  123
Fig. 6.5  Mutual arrangement of domains of normalized values and ranking of alternatives for decision matrix D1 using 30 "aggregation method-normalization method" models. High sensitivity of the problem to local priorities of alternatives  124
Fig. 7.1  Normalized values and relative position of domains of five different attributes relative to each other for basic linear normalization methods  132
Fig. 7.2  Step-by-step IZ transformation of normalized values. Decision matrix D0  135
Fig. 7.3  An illustration of data normalization using the IZ-method for various choices of fixed domain boundaries. (1): I = min(min(V)), (2): I = max(min(V)), Z = max(max(V)). Input data: decision matrix D0 [8×5]  139
Fig. 7.4  Generalization of the IZ-method of normalization for the case of variable boundaries of domains of various attributes (a), (c). Input data: decision matrix D0 [8×5]  141

Fig. 7.5  Rank reversal for a different choice of the region of transformation [I, Z]. COPRAS method, decision matrix D1 by Eq. (7.25)  143
Fig. 7.6  Rank reversal for a different choice of the region of transformation [I, Z]. WPM aggregation method, decision matrix D2 by Eq. (7.26)  144
Fig. 7.7  Rank reversal for a different choice of the region of transformation [I, Z]. WASPAS method, decision matrix D3 by Eq. (7.28)  144
Fig. 8.1  Standardized values. Input data: decision matrix D0 [8×5] from Table 2.1  153
Fig. 8.2  Step-by-step MS-transformation for Z-score normalized values. Decision matrix D0  156
Fig. 8.3  Step-by-step MS-transformation for mIQR normalized values. Decision matrix D0  156
Fig. 8.4  MS-transformations for the Z-score of normalized values for various choices of fixed boundaries of the [I, Z] domain. (3): I = mean(min(V)), Z = mean(max(V)). Decision matrix D0  159
Fig. 8.5  Rank reversal after MS-transformation. WPM method. Decision matrix D1 by Eq. (8.19)  162
Fig. 8.6  Rank reversal after MS-transformation. WASPAS method. Decision matrix D2 by Eq. (8.21)  163
Fig. 9.1  An example of transformation-normalization for the set of values a = (12, 16, 21, 65, 120). One-dimensional case  172
Fig. 9.2  Relative changes in the position of domains and normalized values after data pre-processing during multivariate normalization  174
Fig. 9.3  Changes in dispositions in domains after data pre-processing during multivariate normalization  174
Fig. 9.4  Various options for transformation functions in post-processing data  178
Fig. 9.5  Various transformation options for post-processing data based on piecewise linear functions  180
Fig. 9.6  Various transformation options for post-processing data based on S-shaped functions  182
Fig. 9.7  Different transformation variants for Z-score  183
Fig. 9.8  Normalization using hyperbolic tangent  185
Fig. 9.9  Various variants of inverse transformation for Z-score  185
Fig. 9.10  Non-linear normalization of Z-scores (a) and subsequent IZ transformation (b) of normalized values into the region [0.33, 0.75]  188
Fig. 9.11  Non-linear transformation 1 of Z-scores  192
Fig. 9.12  Non-linear transformation 2 of Z-scores  193
Fig. 10.1  Configuration of normalized value domains for various linear multivariate normalization methods  197

Fig. 10.2  Illustration of different variants of target-based normalization methods  200
Fig. 10.3  Illustration of ReS transformation by Eq. (10.10) for target normalization (Eq. 10.8)  202
Fig. 10.4  Illustration of IZ transformation for target normalization (Eq. 10.6), panels (a)–(d), and bias for target normalization (Eq. 10.9), panels (e)–(h)  203
Fig. 10.5  An illustration of the agreement of the generalized method of normalization of t-criteria with the normalization methods Max, Sum, Vec, dSum, Max-Min, Z-score (maximization)  205
Fig. 10.6  Inversion of t-criteria (minimization)  206
Fig. 10.7  Relative position of the domain of normalized values of the target nominal criterion (the third attribute in Fig. 10.1) with target normalization (Eqs. 10.5–10.7) and for various variants of linear and target nominal normalization  207
Fig. 10.8  One-sided DF for the LTB and STB case: Gompertz curve  210
Fig. 10.9  Graphical interpretation of the argument r in formula (10.36). Two-sided DF for the NTB case  212
Fig. 10.10  Two-sided DF for the NTB maximization case  212
Fig. 10.11  Consistent DF-normalization for the maximization case  214
Fig. 10.12  Consistent Max-Min normalization for the maximization case  214
Fig. 10.13  The position of domains when using DF-normalization and Max-Min normalization  214
Fig. 10.14  Illustration of the desirability function (b, c, d, magenta) for various choices of specification limits L, U with respect to the largest and smallest feature values  216
Fig. 10.15  The target normalization in the form of a Gaussian function  218
Fig. 11.1  Normalized values of the D0 matrix for the Max, Sum, Vec, and dSum normalization methods, and after applying the IZ and MS transformations  227
Fig. 11.2  Normalized values of matrix D0 for the Max-Min, Z[0,1], mIQR[0,1], mMAD[0,1] normalization methods and the non-linear methods PwL[0,1], SSp[0,1], Sgm[0,1], Sgm(Z), Sgm(IQR)  228
Fig. 11.3  Histogram of the ranks of alternatives (D0). 238 MCDM models  234
Fig. 11.4  Histogram of the ranks of alternatives (D1). 238 MCDM models  243
Fig. 12.1  Histogram of the ranks of alternatives (D1). 231 MCDM models. dQc = 5%  251
Fig. 12.2  Histograms of the values of the performance indicator of alternatives (Q) of I–III ranks for various error values of the attributes (δ) of the alternatives aij. SAW method. 1024 variations of the decision matrix  258

Fig. 12.3  Histograms of the values of the performance indicator of alternatives (Q) of I–III ranks for various error values of the attributes (δ) of the alternatives aij. GRAt-method. 1024 variations of the decision matrix  259
Fig. 12.4  Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*, j = 2%, N = 1024). Fraction 1  261
Fig. 12.5  Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*, j = 2%, N = 1024). Fraction 2  262
Fig. 12.6  Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*, j = 2%, N = 1024). Normalized values of the performance indicator of alternatives. Fraction 1  264
Fig. 12.7  Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*, j = 2%, N = 1024). Normalized values of the performance indicator of alternatives. Fraction 2  265
Fig. 12.8  Changing the interval of distinguishability of performance indicators for alternatives of I–III ranks for various aggregating methods with variations in the decision matrix δ. N = 1024  266
Fig. 12.9  Change in the relative error of the performance indicator for alternatives of I–III ranks for various aggregating methods, depending on the relative error of the values of the decision matrix. N = 1024. Fraction 1  267
Fig. 12.10  Change in the relative error of the performance indicator for alternatives of I–III ranks for various aggregating methods, depending on the relative error of the values of the decision matrix. N = 1024. Fraction 2  268
Fig. 12.11  Distinguishability and priorities of alternatives of I–III ranks for various aggregation methods depending on variations in the decision matrix. N = 1024. Fraction 1  269
Fig. 12.12  Distinguishability and priorities of alternatives of I–III ranks for various aggregation methods depending on variations in the decision matrix. N = 1024. Fraction 2  270

List of Tables

Table 2.1  Decision matrix D0 [8 × 5]  16
Table 2.2  A typical structure of a multiple-criteria decision-making problem  17
Table 2.3  Preference functions for the PROMETHEE method  32
Table 2.4  Design of the MCDM model  36
Table 3.1  Basic linear methods for the multidimensional normalization of the decision matrix  48
Table 3.2  Values of indicators of efficiency of alternatives of I–III ranks  66
Table 3.3  Values of indicators of efficiency of alternatives of I–III ranks  67
Table 4.1  Basic linear methods for the multidimensional normalization of the decision matrix  73
Table 4.2  Decision matrix D0 [8 × 5]  75
Table 5.1  Normalization methods for cost criteria related to the basic linear normalization methods for benefit criteria  97
Table 6.1  Contribution of criteria to the performance indicator of alternatives of the first rank  119
Table 6.2  Relative rating gap dQ for different normalization methods  120
Table 6.3  Values of indicators of efficiency of alternatives of I–III ranks  122
Table 6.4  The values of the performance indicators of alternatives of I–III ranks for the example in Fig. 6.3  124
Table 6.5  Pair correlation matrix of features for decision matrices D0 and D1  125
Table 7.1  Ranks of alternatives and relative ranking gap  143
Table 7.2  Ranks of alternatives and relative rating gap. WPM method  145
Table 7.3  Ranks of alternatives and relative rating gap. WASPAS method  145
Table 8.1  Ranks of alternatives and relative rating gap when Z-score and mIQR are normalized. SAW and TOPSIS methods. Equal weights  154

Table 8.2  Ranks of alternatives and relative rating gap after MS-transformation of Z-scores. SAW and TOPSIS aggregation methods. Equal weights  160
Table 8.3  Ranks of alternatives and relative gap of rating at MS-transformation. Decision matrix D1. WPM method, equal weights  162
Table 8.4  Ranks of alternatives and relative gap of rating at MS-transformation. Decision matrix D2. WASPAS method, equal weights  164
Table 9.1  Skewness coefficient changes during logarithmic and power-law transformations of initial data  173
Table 9.2  Skewness coefficient changes during the transformation of the original data  175
Table 9.3  Changes in rating when using the logarithmic transformation  187
Table 9.4  Changes in the proportions of normalized values during logarithmic and power transformation of the initial data  190
Table 9.5  Changes in proportions of normalized values during logarithmic transformation of the initial data  190
Table 10.1  The decision matrix D0 and LTB, STB, and NTB criteria  213
Table 10.2  Result of ranking of the alternatives. SAW method  215
Table 11.1  Linear methods of multidimensional normalization with range (0, 1]  223
Table 11.2  Linear methods of multidimensional normalization with range [0, 1]  223
Table 11.3  IZ-method of transformation in the domain of normalized values  224
Table 11.4  MS-method of transformation in the domain of normalized values  224
Table 11.5  Non-linear methods of multidimensional normalization with range [0, 1]  225
Table 11.6  Decision matrix D0 [8×5]  226
Table 11.7  Decision matrix D1 [8×5]  226
Table 11.8  Ranking results (D0). Aggregation methods based on additivity  229
Table 11.9  Ranking results (D0). Aggregation methods based on distances to a critical link  231
Table 11.10  Ranking results (D0). Aggregation methods based on distances to a critical link  233
Table 11.11  Summary results of the ranking of alternatives (D0). Outranking methods: PROMETHEE & ORESTE  234
Table 11.12  Summary results of ranking alternatives (D0) based on 238 MCDM models  234
Table 11.13  Borda count by aggregation method. Ranking alternatives for (D0) based on 238 MCDM models  235

List of Tables

Table 11.14 Relative rating gap dQ for various normalization methods (fragment)
Table 11.15 Ranking results (D1). Aggregation methods based on additivity
Table 11.16 Ranking results (D1). Aggregation methods based on distances to a critical link
Table 11.17 Ranking results (D1). Aggregation methods based on distances to a critical link
Table 11.18 Summary results of the ranking of alternatives (D1). Outranking methods: PROMETHEE & ORESTE
Table 11.19 Summary results of ranking of alternatives (D1) based on 238 MCDM models
Table 11.20 Borda count by aggregation method. Ranking alternatives for (D1) based on 238 MCDM models
Table 11.21 Relative rating gap dQ for various normalization methods (fragment). Decision matrix D1
Table 12.1 Decision matrix D1 [8×5]
Table 12.2 Distinguishability of alternatives of I–III ranks with variation of the decision matrix for different models "Agg–Norm." 1024 variations. dQ = 5%
Table 12.3 Distinguishability of alternatives of I–III ranks with variation of the decision matrix for different models "Agg–Norm." 1024 variations. dQi = Φ[δ(aij)]
Table 12.4 Change in the index GP of the distinguishability of alternatives at different errors for the variation of the decision matrix. Linear methods of normalization. 1024 variations. dQi = Φ[δ(aij)]

Chapter 1

Introduction

Keywords Multi-criteria decision making · Multi-attribute decision making · Multi-criteria decision analysis · Multi-objective decision making · Multivariate normalization

1.1 The Problem of Multi-criteria Decision-Making

In any field of activity, a person constantly faces situations that require choosing among several options (alternatives, objects). Typical situations of multi-criteria choice include:
• consumer choice: selecting goods and services from a specific set of available alternatives, each characterized by a set of specific properties,
• social choice: choosing a place of study, a profession, a job, or a life partner, or electing deputies to governing bodies,
• managerial choice: forming personnel, selecting the optimal management structure, making specific business and economic decisions, setting a development strategy for managed units,
• engineering decisions: selecting the best design or engineering solution,
• economic decisions: choosing objects for investment, the optimal economic program, etc.,
• political decisions: choosing a policy of interaction with various social institutions, political parties, countries, etc.
This list of practical choice tasks can be extended to any area of human activity. Regardless of its nature, every selection problem shares common features. First, a set of solutions (options, objects, alternatives) from which the choice is to be made must be given. Second, beyond its name, each object can be described in detail by its features: properties, actions, behavior, states. The features of an object answer the questions "How can one object differ from another?" and "What can change for an object when an action is performed?" For example, when choosing a car, alternatives may differ from each other in terms of technical characteristics, cost, design, ergonomics, etc.; a description must therefore be specified. The set of features (properties, attributes) of each alternative is determined by a set of criteria. The choice of the set of alternatives and the set of criteria is not formalized and can significantly affect the result and its consequences. These issues are not considered in this study; we therefore assume that the initial sets of the choice problem, the alternatives (A) and the criteria (C), are defined. Alternatives are competing solutions: the properties of one alternative are better than those of another on some attribute, and worse on some other attribute. Another characteristic feature of multidimensional choice on a discrete set of alternatives is that information about objects is of various types and measured on various measurement scales, requiring joint processing and prioritization. These features make expert or intuitive selection difficult. When there are more than 5–7 alternatives and more than 5–7 features, a large number of complex preference chains arise, and their analysis requires special methods and algorithms.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_1
Multi-criteria decision-making methods address the problem through the following steps [1–4]:
• selection and analysis of alternatives related to the goal,
• selection and analysis of criteria describing the alternatives,
• collection (evaluation) of the attributes of the alternatives within the framework of the selected criteria,
• setting priorities and measuring the importance, weight, or value of the assessments under particular criteria when they are generalized or considered jointly over the set of criteria, i.e., determining the weighting coefficients of the criteria,
• reduction of attribute values measured on different scales to a single dimensionless scale for subsequent joint transformation, i.e., normalization,
• selection of a model for aggregating the individual attributes of the alternatives into a performance indicator,
• ranking the alternatives and identifying one or more preferred alternatives,
• the final choice of an alternative by the decision maker.
All of these steps affect the result to some extent, depending on the specific task. Choosing a solution means indicating, among all possible solutions, the one (or several) that is best within the adopted selection procedure. For multi-criteria choice problems there is consequently no concept of an absolutely optimal solution. The choice is not unambiguous and is determined by a multi-step selection procedure that includes informal choices of normalization methods, criteria weights, attribute aggregation methods, distance metrics, preference functions, and other model parameters. It must also be remembered that in many cases this process takes place under uncertainty.
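The steps above can be sketched end to end. The following minimal example (hypothetical data; Max-Min normalization, equal weights, and simple additive weighting are chosen only for illustration, not as a recommended model) normalizes a small decision matrix, inverts its cost criterion, and ranks the alternatives:

```python
import numpy as np

def rank_alternatives(D, weights, benefit):
    """Rank alternatives in decision matrix D (m alternatives x n criteria).

    benefit[j] is True for benefit (larger-is-better) criteria."""
    D = np.asarray(D, dtype=float)
    benefit = np.asarray(benefit)
    rng = D.max(axis=0) - D.min(axis=0)
    R = (D - D.min(axis=0)) / rng              # Max-Min normalization to [0, 1]
    R[:, ~benefit] = 1.0 - R[:, ~benefit]      # invert cost criteria by reflection
    Q = R @ np.asarray(weights)                # SAW aggregation: weighted sum
    order = np.argsort(-Q)                     # best alternative first
    return Q, order

# Hypothetical decision matrix: 4 alternatives, 3 criteria (the last is a cost)
D = [[250, 16, 12], [200, 20, 8], [300, 11, 20], [275, 14, 16]]
Q, order = rank_alternatives(D, weights=[1/3, 1/3, 1/3],
                             benefit=[True, True, False])
```

Every informal choice in the pipeline (the normalization formula, the weights, the aggregation rule) is a parameter of this sketch, which is exactly why different models can return different rankings.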


In the absence of optimality criteria, a phrase of the modern writer Elchin Safarli (2011) [5] is quite appropriate for multi-criteria choice problems: "the right choice in reality does not exist—there is only a choice made and its consequences." Nevertheless, the progress of recent decades in all spheres of human activity has made the problem of evaluating and selecting alternatives according to several criteria relevant for a wide range of tasks: ranking and evaluating technologies, products, processes, innovative solutions, etc. Hardly anyone will apply the theory to simple everyday problems. But where significant risks are involved, such as threats to human life, man-made disasters, or large financial losses, or, conversely, where significant "benefits" are at stake, obtaining additional and correct (scientifically grounded) information to support decision-making is considered necessary. This motivates the development of methods that make it possible to reach an "ideal" or near-"ideal" solution. Multi-criteria decision-making (MCDM) methods are a tool for reducing subjectivity in decision-making: they create a series of selection filters and help the decision maker choose between complex alternatives. Each method is characterized by its own mathematical apparatus, so applying different methods to the same problem often leads to different solutions. MCDM methods form a branch of the general class of operations research models suitable for complex problems characterized by a high degree of uncertainty, multiple interests, conflicting goals, various forms of data and information, and incommensurable units. This class is further divided into multi-objective decision-making (MODM) and multi-attribute decision-making (MADM), which differ in the number of alternatives evaluated.
In multi-objective decision-making, alternatives are not predetermined; instead, a set of objective functions is optimized subject to a set of constraints. The normalization problems studied in this book concern the class of choice problems on a discrete set of alternatives, i.e., they are an integral part of multi-attribute decision-making methods. Following the tradition of numerous publications that use the abbreviation MCDM for choice problems on a discrete set of alternatives, this book will also use the notation MCDM without further reservations.

1.2 Multidimensional Normalization in the Context of Decision Problems

Normalization of multidimensional data is used in multi-criteria decision-making, multivariate classification, and similar problems in which there are several competing goals and many alternatives (objects) whose attributes are specified in the context of the selected criteria. One approach to comparing alternatives described by two or more features is to determine a resulting integral indicator


obtained as a result of the transformation (reduction) of a certain subset of individual indicators. By the nature of their mapping of the subject area, individual indicators of alternatives can be classified into two main types: extensive (volumetric) and intensive (relative). Features are measured on a wide variety of measurement scales: nominal, ordinal, and metric. Features measured on metric scales have diverse units, scales, reference points, and variation intervals. Feature values obey the most diverse distribution laws, sometimes very far from normal or uniform. And this is not a complete list of possible situations. Given the different nature of the features, their values must first be transformed so that they fall into comparable intervals. This is the normalization procedure. By multidimensional normalization we mean the procedure of bringing values measured on different scales to a conditionally common scale. Creating shifted and scaled normalized values allows corresponding values from different datasets to be compared, which is relevant for decision-making, grouping, and classification problems. Algorithms for translating features into normalized scales can differ. In statistical data processing, a linear transformation of all feature values is widely used, of such a form that the feature values fall within comparable intervals:

r_ij = (a_ij − a_j*) / k_j

where a_ij and r_ij are the natural and normalized values of the jth attribute of the ith alternative, respectively, and a_j* and k_j are pre-assigned numbers, which we will call characteristic scales. These numbers can be determined from the statistical characteristics of the distribution of empirical samples (normalization by statistics) or given by a priori considerations (normalization by standards). The "standards" can be background or critical values of an indicator, the best and worst "favorable" values, and other estimates lexically related to the problem of analyzing critical or allowable loads; such estimates then have a subject interpretation. For a multidimensional data cloud, there are several scales of normalization by statistics, in which the variation series of each selected indicator is transformed using sample statistical characteristics: the geometric center of the multidimensional data point cloud (a_j*), defined, for example, as the average value of the feature, and a characteristic spread in the data cloud (k_j), defined, for example, as the standard deviation or as the range characterizing the maximum spread. Such a normalization of all features encloses the entire data cloud in a ball of unit radius. Since the attributes of objects and the ranges of their values differ greatly, each feature uses its own scale, i.e., its own private statistics a_j* and k_j. In this case the normalizations are not "isotropic," that is, they compress the data


cloud more strongly in some directions and less in others. Despite this violation of the data structure (mutual distances), the approach is generally accepted. A natural question arises: which normalization formula is preferable? For example, the most popular linear "minimax" normalization is optimal when the values of the variable densely and evenly fill the interval defined by the empirical range of the data. But such a straightforward approach is not always applicable. If the data contain relatively rare outliers much larger than the typical spread, it is these outliers that determine the normalization scale, and the bulk of the values of the normalized variable will be concentrated near zero. The natural way out is to use a non-linear functional transformation of the data for pre-processing, for example a sigmoid or logarithmic function. However, the converted data may differ significantly from the natural values, since the proportions between values change qualitatively. The preference for different normalizations is relative and rests on the equivalence of measures of similarity and difference. Two vectors of normalized values obtained by different formulas are considered equivalent if their components are connected by a monotonically increasing dependence. An example of such a function is a linear transformation: any normalized values can be multiplied, divided, or shifted by a constant without changing the ordering of the source data at all (only the measurement scale changes). However, such a common transformation is not possible when each feature has its own scale, i.e., its own private statistics a_j* and k_j. This is the peculiarity of multivariate normalization.
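Both points can be seen numerically. The sketch below (hypothetical one-column data) instantiates r_ij = (a_ij − a_j*) / k_j with two common choices of characteristic scales, and then shows how a single outlier squeezes the Max-Min-normalized bulk toward zero, while a logarithmic pre-transformation spreads it out again at the cost of changed proportions:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])   # one criterion; 1000 is a rare outlier

# Two instances of r_ij = (a_ij - a_j*) / k_j:
z = (a - a.mean()) / a.std()                  # a_j* = mean, k_j = standard deviation
mm = (a - a.min()) / (a.max() - a.min())      # a_j* = min,  k_j = range ("minimax")

# The outlier defines the scale: the bulk of mm is concentrated near zero
# (mm[:4] is approximately [0, 0.001, 0.002, 0.003]).

# Logarithmic pre-processing spreads the typical values out again,
# but the proportions between natural values change qualitatively.
la = np.log(a)
mm_log = (la - la.min()) / (la.max() - la.min())
```
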
Figure 1.1 illustrates this anisotropic feature scaling in three dimensions (n = 3) for a set of 8 alternatives, performed using six linear multivariate normalization methods (Max, Sum, Vec, Max-Min, dSum, MS; see Sect. 3.2). Each point in the figure represents one of the eight alternatives; its three-dimensional coordinates are the normalized feature values under the various linear normalization methods. The point highlighted in magenta marks the best alternative obtained by the Simple Additive Weighting (SAW) method with equal attribute weights. The figure shows that, in addition to differences in the magnitude of the scaling of feature values, there are violations of mutual distances and positions in the coordinate space, as well as differences in the rating of alternatives. In particular, the highlighted best alternative for the Max, Sum, and Vec methods differs from that for the Max-Min, dSum, and MS methods. In some cases, the ratios between the normalized values of different features under different normalization methods can lead to significant differences in the ranking. Section 3.4, in particular, gives examples of decision problems for which the rank 1 alternatives differ across the 6 linear normalization methods (as in Fig. 1.1) in different MCDM models. This book considers various aspects of the multivariate normalization procedure in the context of rank-based MCDM methods, for which normalization, like every other decision step, is an important part of decision-making. For rank-based


Fig. 1.1 Illustration of anisotropic feature scaling in three dimensions for some set of 8 alternatives

MCDM methods, the aggregation of the attributes of alternatives and the calculation of each alternative's performance score are performed on the normalized data. The performance indicator is formed by constructing a utility function that aggregates, in various ways, the normalized attribute values of an alternative over the criteria. The ranking result depends not only on the choice of normalization method but also on its combination with the aggregation method, the criteria-weight estimation method, and other parameters. Therefore, the decision-making structure in this book is presented in terms of a rank decision model. For the MCDM rank model, three main approaches are used for aggregating the attributes of the various criteria into an indicator of the effectiveness of alternatives, and the aggregation method imposes certain requirements on the normalization of attributes. The first approach is based on the hypothesis of additivity of individual contributions: Value Measurement Methods. These include such popular methods as the Weighted Sum Model (WSM) [6], the Weighted Product Model (WPM) [7], and others. Here one of the main tasks is to obtain, after normalization, commensurate attribute values for all criteria, so as to eliminate any unintended priority of individual criteria, following the principle of additive significance of alternatives (Hwang and Yoon [1]). If the normalized range (or mean) of one attribute is offset relative to the normalized range (or mean) of another attribute, then the contributions of the two


criteria to the aggregate measure of the alternatives will differ. One criterion then becomes more significant than the other purely as a result of normalization, and such a normalization must be recognized as incorrect. Therefore, to evaluate alternatives according to several criteria, the natural attribute values must be normalized so as to strike a balance between the different dimensions. We designate this problem as the problem of DISPLACEMENT of the domains of normalized values of different attributes relative to each other. Another option for the synthesis of complex indicators is estimating the distance to a critical link: Goal or Reference Level Models. MCDM uses aggregation methods based on the distance of pre-normalized values from the ideal or anti-ideal, such as the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [1], VIsekriterijumsko KOmpromisno Rangiranje (Serbian) (VIKOR) [8], the COmbinative Distance-based ASsessment (CODAS) method [9], Gray Relation Analysis (GRA) [10], etc. Here the result largely depends on the range of the normalized values. For some normalization methods, the ranges of different attributes may differ several-fold; a larger span defines larger distances, and one or more criteria may contribute more to the overall result. We designate this problem as the problem of SCALING of the domains of normalized values of different attributes. For both variants of the synthesis of complex indicators, the result also depends heavily on the distribution of the normalized values within the domains. If, for example, in one domain the normalized values are close to the ideal while in another they are far from it, the contributions of these attributes to the complex indicator will differ.
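A hypothetical numerical illustration of these two problems (the numbers are invented for the sketch): a constant shift between two normalized domains changes the criteria's contributions to an additive score, and a wider domain generates the larger distances in a reference-point aggregation:

```python
import numpy as np

# DISPLACEMENT: normalized values of three alternatives under two criteria
r1 = np.array([0.2, 0.5, 0.8])       # criterion 1
r2 = r1 + 0.4                        # criterion 2: same spread, domain shifted upward

contrib1 = 0.5 * r1.sum()            # total contribution of criterion 1 (weight 0.5)
contrib2 = 0.5 * r2.sum()            # criterion 2 contributes more purely by the shift

# SCALING: under a distance-based aggregation, the criterion with the wider
# normalized range produces the larger spread of distances to the ideal value 1
r3 = np.array([0.45, 0.50, 0.55])    # narrow domain
r4 = np.array([0.10, 0.50, 0.90])    # wide domain, same mean
d3 = 1.0 - r3                        # per-alternative distance contributions
d4 = 1.0 - r4                        # these dominate the comparison of alternatives
```
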
Eliminating this situation is not easy because, for example, a linear data transformation does not change the distribution of values (the problem is in the data itself), while non-linear transformations distort the original data. Therefore, when synthesizing complex indicators based on the distance to a critical link, the consequences of the chosen normalization method must be foreseen. We designate this problem as the problem of ASYMMETRY of normalized values within a domain for different attributes. The third ranking option is Outranking Techniques, represented by a family of methods such as the Preference Ranking Organization METHod for Enrichment of Evaluations (PROMETHEE) [11], ELimination Et Choix Traduisant la REalité (French) (ELECTRE) [12], Organisation, Rangement Et SynThèse de donnéEs relationnelles (French) (ORESTE) [13], and others. As soon as a quantitative assessment is needed to compare different alternatives, the attributes must be brought to common scales. This is possible, for example, by means of various preference functions (PROMETHEE), projection methods (ORESTE), or desirability functions. In essence, such procedures also represent one of the options for normalizing attribute values, since attributes are converted to dimensionless values on the appropriate scales. A further problem arises during non-linear normalization or non-linear transformation of normalized values: distortion of the relative position of the normalized feature values compared to the natural values. In some cases, violation of the


dispositions may lead to a change in the rating. We designate this problem as the problem of violation of DISPOSITIONS of the normalized values in the domains of different attributes. One feature of ranking alternatives in multi-criteria decision-making problems is that some of the evaluation criteria are cost criteria, for which lower values on the measurement scale are preferred. Therefore, rank-based MCDM methods using the hypothesis of additivity of individual contributions (WSM, etc.) require an inverse transformation of the natural or normalized values of the cost criteria. Since the normalized values will subsequently be aggregated into a complex indicator of the effectiveness of alternatives, this inverse transformation is an integral part of normalization. For the group of aggregation-ranking methods based on the distance from the ideal and anti-ideal (TOPSIS, VIKOR, GRA, etc.), transforming cost attributes into profit attributes is not required: the solution is achieved by inverting the ideal, replacing the maximum (best) with the minimum (worst) or vice versa, using linear or non-linear transformations. Two main inverse transformations are used to transform natural or normalized values of cost criteria:
1. reflection relative to zero, of the form −r (without offset) or a − r (with displacement);
2. the inverse transformation 1/r (non-linear inversion).
All other inversions are functions of these underlying transformations. Inversion by a reflection transform causes the domains to move relative to each other. In some cases the displacement is insignificant; in others it becomes significant (antiphase).
Inversion using the non-linear transformation 1/r preserves only the monotonicity (ordering) of the alternatives; the relative proportions between the attribute values before and after inversion change. The relative position of attribute values after inversion no longer corresponds to the original values, which amounts to data distortion. Inversion is necessary for tasks with mixed criteria, but consequences in the form of DISPLACEMENT or violation of DISPOSITIONS are possible. We designate this problem as the goal INVERSION problem. A general algorithm for inverting cost attribute values into benefit attributes and vice versa is detailed in Chap. 5. Eliminating DISPLACEMENT of domains relative to each other, SCALING of domains, and INVERSION of data are problems solved quite easily during normalization. Eliminating ASYMMETRY in the data and violation of DISPOSITIONS of values are conflicting issues: it is impossible to eliminate the asymmetry without violating the mutual position of the attribute values. Summarizing, the method of multidimensional normalization of attribute values must be chosen with a view to eliminating the possible problems.
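The two basic inversions can be sketched as follows (illustrative normalized values): the reflection preserves the differences between values, while 1/r also reverses the order but distorts the proportions:

```python
import numpy as np

r = np.array([0.2, 0.4, 0.8])      # normalized values of a cost attribute

inv_reflect = 1.0 - r               # reflection a - r with a = 1 (linear)
inv_recip = 1.0 / r                 # non-linear inversion 1/r

# Both reverse the preference order (smaller cost -> larger score), but:
#   differences in r are 0.2 and 0.4 -> preserved by the reflection,
#   differences after 1/r are 2.5 and 1.25 -> the proportions are distorted.
```
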


Despite significant progress, decision theory remains semi-empirical, and this applies to the stage of multivariate data normalization as well. Many stages and steps are subjective; there are no criteria for choosing methods and model parameters. For example, according to [14], in the absence of performance criteria an important indicator for applied decision-making is the consistency of the ranking results obtained from various models:
Condition 1: Alternative models should generate performance measures which have similar distributional properties such as means, standard deviations, minimum and maximum values.
Condition 2: Alternative models should identify mostly the same decisions as the "best performers" and as the "worst performers."
Condition 3: Alternative models should rank the objects mostly in the same order.
Condition 4: Alternative models should generate the same performance scores for objects.
The hedging terms "similar," "mostly," and "same" in these conditions acknowledge that results differ across normalization methods: for linear methods the small differences stem from the bias and compression coefficients (a_j*, k_j), and for non-linear methods from the shape of the transformation curve. Currently, about 10 linear and non-linear data normalization methods are actively used in the field of MCDM/MCDA [15–17]. Depending on the normalization method used, the values of the aggregate performance indicator of alternatives can vary significantly, not only in absolute but also in relative terms, which changes the ranks of alternatives within the considered aggregation method [17–23]. The absence of criteria for choosing a normalization method means that the effectiveness of different normalization methods is judged largely from empirical studies that compare results for a limited number of aggregation methods [24].
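Condition 3, for instance, can be quantified with a rank correlation. A minimal sketch (the function and the example rankings are illustrative, not taken from [14]) using the standard Spearman coefficient for rankings without ties:

```python
import numpy as np

def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings of the same n objects
    (no ties assumed)."""
    rank_a, rank_b = np.asarray(rank_a), np.asarray(rank_b)
    n = len(rank_a)
    d = rank_a - rank_b                       # rank differences per object
    return 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))

# Rankings of 5 alternatives by two hypothetical "Agg-Norm" model combinations
model_1 = [1, 2, 3, 4, 5]
model_2 = [2, 1, 3, 4, 5]                     # ranks I and II swapped
rho = spearman(model_1, model_2)              # close to 1: the models mostly agree
```

A value of rho near 1 indicates that two models rank the objects "mostly in the same order" in the sense of Condition 3.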
Comparative research reveals the pros and cons of each normalization method and provides important insights for decision-makers. However, comparative empirical research is largely limited by the applied task, the objectives of the research, and the availability of the necessary tools [17–23]. One of the significant and widely cited publications on the normalization problem for MCDM [15], covering most normalization methods and techniques, summarizes that ". . . normalization methods turn out to be only minor variations of each other, these small differences can have important consequences for the quality of decision-making." Within the framework of existing normalization methods, there is no unambiguous solution for converting measurement scales, no transformation algorithms for the case of asymmetry in attribute values, and no unambiguous solution for converting cost criteria into benefit criteria. There are no performance criteria for linear and non-linear normalization methods. The main conclusions on the normalization problem and its impact on ranking, based on the results of empirical studies, are as follows:


1. normalization methods turn out to be only minor variations of each other, yet these small differences can have important consequences for the quality of decision-making (Jahan & Edwards, 2015) [15];
2. linear normalization methods cannot significantly affect the rank of alternatives, while non-linear norms can lead to some deviations, mainly for alternatives that are inherently close (Milani et al., 2005) [18];
3. the ranking of alternatives can change under valid transformations of the initial attribute values (Aouadni et al., 2017) [25];
4. the profile of the aggregation function does not change under linear transformations, but under non-linear transformations it may change from convex to concave and vice versa, which can lead to erroneous decisions (Peldschus, 2018) [26].
These conclusions are general enough to serve as the basis for a multivariate data normalization procedure: in some cases the problem is highly sensitive to the estimates in the decision matrix, in other cases it is not. As a consequence, for the sake of safer solutions, a simple multivariate strategy using both linear and non-linear norms is recommended, and the normalization methods that are best in terms of distinguishability of the ranking are identified. It is also recommended to always run a normalization study for each criterion. Thus, despite the importance of normalization for the analysis of multidimensional data in areas such as classification, clustering, multivariate data analysis, multi-criteria decision analysis, operations research, optimization, expert systems, and artificial intelligence, the choice of the correct normalization method is not formalized. There are no performance estimates and no criteria for choosing a normalization method. This study is devoted to the problem of normalization of multidimensional data and related issues in multi-criteria decision-making and multidimensional classification problems.
The monograph presents a systematic review of multivariate normalization methods, the problems that arise when using various methods, and ways to eliminate these problems: first of all, the mutual displacement of the domains of normalized values of different indicators during normalization and inversion, and the natural asymmetry of data within the domain. The research methodology rests on the basic properties of linear normalization methods, and the choice of normalization method rests on general principles. The invariant properties of linear normalization methods formulated by the author make it possible to eliminate simple problems and avoid obvious errors when choosing a normalization method. New original methods for transforming normalized values are proposed:
– inversion of normalized values of cost attributes into benefit attributes based on the reverse sorting algorithm (ReS-algorithm);
– a method of transforming normalized values that eliminates the displacement of the domains of normalized attribute values (IZ-method);


– a method for transforming normalized Z-score values into the region [0, 1] in which the mean values and variances of all attributes are the same (MS-method).
The ReS-algorithm, IZ-method, and MS-method keep the data informative after normalization and eliminate the shift of the domains of normalized values across criteria. Worked examples set out a methodology for evaluating the effectiveness of, and choosing, a multivariate normalization method for multi-criteria decision-making problems, as well as a methodology for sensitivity analysis of the ranking of alternatives in the context of normalization. Chapter 2 presents the MCDM rank model and describes the main methods for aggregating the attributes of alternatives and for assessing the importance of criteria. In the absence of performance criteria, the MCDM model is constructed as a combination of various methods. The third chapter presents the main problems in the normalization of multivariate data and formulates the general principles of multivariate normalization. The fourth chapter gives an overview of the main linear methods for normalizing multivariate data used in the rank model of multi-criteria decision-making. Some important invariant properties of linear normalization methods are established: preservation of the dispositions of natural and normalized values, invariance under re-normalization, and invariance of the ranking of alternatives under a general linear transformation of the decision matrix. Using these invariant properties often eliminates simple problems and avoids obvious errors when solving MCDM problems. Chapter 5 presents the Reverse Sorting algorithm (ReS-algorithm), which is universal for all normalization methods. The ReS-algorithm converts cost attributes to benefit attributes and vice versa. The idea of the algorithm is to invert the measurement scale of cost criteria after normalization without changing the normalization area.
Particular variants of the ReS-algorithm are the inversion for the linear Max-Min normalization method and the inverse transformation for the Max normalization method of Markovič [27]. The ReS-algorithm eliminates the shift of the domain of normalized values of the cost criteria relative to the benefit criteria. The sixth chapter identifies the main factors that determine rank changes in MCDM problems and discusses changes in the rank of alternatives due to normalization; the relative preference of various normalizations is shown. The seventh chapter presents a method for transforming the normalized values of the decision matrix, the IZ-method. The IZ-method eliminates the displacement of the domains of normalized attribute values across criteria, which removes the priority of the contributions of individual criteria to the efficiency indicator of the alternatives. The idea of the method is to align the upper and lower levels of the alternatives' attributes for all criteria while maintaining the dispositions of natural and normalized values. A particular variant of the IZ-method is the Max-Min normalization method.


Chapter 8 proposes a method for transforming normalized Z-scores into the region [0, 1] in which the means and variances of all attributes are the same (MS-method). The MS-method equalizes the mean values and variances of the attributes across criteria. The method preserves the dispositions of natural and normalized attribute values and excludes the priority of the average contribution of individual criteria in the efficiency indicator of the alternatives. The MS-method is therefore relevant for data with asymmetry. The ninth chapter presents non-linear methods of multivariate normalization in two versions: data pre-processing and data post-processing. The use of non-linear methods is discussed in the context of removing asymmetries in the distribution of features, and the consistent application of linear and non-linear methods using the IZ transformation is shown. Chapter 10 presents normalization methods for the "nominal value is best" case. A generalization of normalization methods for target nominal criteria is given for the linear case; normalization of target nominal criteria with non-linear methods is performed using the concept of Harrington's desirability function. Chapter 11 presents a comparative analysis of the ranking of alternatives under various normalization methods based on a numerical experiment. Calculations and analysis were performed for two multi-criteria choice problems, under conditions of weak and of strong sensitivity of the rating to the choice of normalization method. The ranking analysis is based on the results of 238 different rank models, combining 13 aggregation methods and 21 different normalization methods, all other things being equal.
Chapter 12 discusses the problem of distinguishability of alternatives in situations where the ratings of alternatives are approximately equal and sensitive to the initial data, to the choice of normalization method, and to other model parameters. Indicators for comparing ratings are proposed, together with numerical algorithms for estimating the magnitude of the relative error that determines a significant difference between ratings under variations of the initial data and of the normalization methods. Based on such an analysis, it is possible to identify the aggregation methods with the best ranking resolution. All calculations, graphics, and numerical statistical experiments were performed in MatLab. A description of the procedures and functions that implement normalization is given in Appendix A.

References

1. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey. Springer.
2. Triantaphyllou, E. (2000). Multi-criteria decision making methods: A comparative study. Springer.
3. Tzeng, G.-H., & Huang, J.-J. (2011). Multiple attribute decision making: Methods and application. Chapman and Hall/CRC.


4. Greco, S. (2005). Multiple criteria decision analysis: State of the art surveys. Springer.
5. Safarly, E. (2011). Mne tebya obeshali [You were promised to me]. Moscow: AST [in Russian].
6. Fishburn, P. C. (1967). Additive utilities with incomplete product sets: Application to priorities and assignments. Operations Research, 15, 537–542.
7. Miller, D. W., & Starr, M. K. (1969). Executive decisions and operations research. Prentice-Hall.
8. Opricovic, S. (1998). Multicriteria optimization of civil engineering systems. PhD thesis, Faculty of Civil Engineering, Belgrade.
9. Ghorabaee, M. K., Zavadskas, E. K., Turskis, Z., & Antucheviciene, J. (2016). A new COmbinative Distance-based ASsessment (CODAS) method for multi-criteria decision-making. Economic Computation & Economic Cybernetics Studies & Research, 50(3), 25–44.
10. Wang, Q. B., & Peng, A. H. (2010). Developing MCDM approach based on GRA and TOPSIS. In Applied mechanics and materials (Vol. 34–35, pp. 1931–1935). Trans Tech Publications. https://doi.org/10.4028/www.scientific.net/amm.34-35.1931
11. Brans, J. P., Mareschal, B., & Vincke, P. (1986). How to select and how to rank projects: The PROMETHEE method. European Journal of Operational Research, 24(2), 228–238.
12. Roy, B. (1968). Classement et choix en présence de points de vue multiples. RAIRO-Operations Research-Recherche Opérationnelle, 2, 57–75.
13. Chatterjee, P., & Chakraborty, S. (2013). Advanced manufacturing systems selection using ORESTE method. International Journal of Advanced Operations Management, 4, 337–361. https://doi.org/10.1504/IJAOM.2013.058896
14. Bauer, P. W., Allen, N. B., Gary, D. F., & Humphrey, D. B. (1998). Consistency conditions for regulatory analysis of financial institutions: A comparison of frontier efficiency methods. Journal of Economics and Business, 50, 85–114.
15. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342.
16. Vafaei, N., Ribeiro, R. A., & Camarinha-Matos, L. M. (2018). Data normalization techniques in decision making: Case study with TOPSIS method. International Journal of Information Technology & Decision Making, 10(1), 19–38.
17. Aytekin, A. (2021). Comparative analysis of normalization techniques in the context of MCDM problems. Decision Making: Applications in Management and Engineering, 4(2), 1–25.
18. Milani, A. S., Shanian, R., Madoliat, R., & Nemes, J. A. (2005). The effect of normalization norms in multiple attribute decision making models: A case study in gear material selection. Structural and Multidisciplinary Optimization, 29(4), 312–318.
19. Chakraborty, S., & Yeh, C. H. (2007). A simulation based comparative study of normalization procedures in multi-attribute decision making. In Proceedings of the 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases, 6, 102–109.
20. Chakraborty, S., & Yeh, C. H. (2009). A simulation comparison of normalization procedures for TOPSIS. In Proc. of CIE 2009 International Conference on Computers and Industrial Engineering, Troyes, pp. 1815–1820.
21. Chatterjee, P., & Chakraborty, S. (2014). Investigating the effect of normalization norms in flexible manufacturing system selection using multi-criteria decision-making methods. Journal of Engineering Science and Technology Review, 7(3), 141–150.
22. Pavličić, D. (2001). Normalization affects the results of MADM methods. Yugoslav Journal of Operations Research, 11(2), 251–265.
23. Stanujkič, D., Đordevič, B., & Đordevič, M. (2013). Comparative analysis of some prominent MCDM methods: A case of ranking Serbian banks. Serbian Journal of Management, 8(2), 213–241.


24. Zavadskas, E. K., Ustinovichius, L., Turskis, Z., Peldschus, F., & Messing, D. (2002). LEVI 3.0 – Multiple criteria evaluation program for construction solutions. Journal of Civil Engineering and Management, 8(3), 184–191.
25. Aouadni, S., Rebai, A., & Turskis, Z. (2017). The Meaningful Mixed Data TOPSIS (TOPSIS-MMD) method and its application in supplier selection. Studies in Informatics and Control, 26(3), 353–363. https://doi.org/10.24846/v26i3y201711
26. Peldschus, F. (2018). Recent findings from numerical analysis in multi-criteria decision making. Technological and Economic Development of Economy, 24(4), 1695–1717. https://doi.org/10.3846/20294913.2017.1356761
27. Markovič, Z. (2010). Modification of TOPSIS method for solving of multicriteria tasks. Yugoslav Journal of Operations Research, 20(1), 117–143.

Chapter 2

The MCDM Rank Model

Abstract A class of MCDM models is considered in which the ranking of alternatives is performed based on performance indicators obtained by aggregating normalized attribute values. Aggregation of normalized attribute values transforms the original multi-criteria decision-making problem, with criteria of different dimensions and directions, into a one-dimensional problem of ranking alternatives in descending or ascending order of an integrated performance indicator. The formal structure of the MCDM rank model is given, together with an overview of the most popular methods for determining the weight coefficients of criteria and for aggregating individual attributes within the framework of the MCDM rank model. Given the multiplicity of methods and the absence of formalized criteria for choosing among them, consistency of the solution across various MCDM models increases the reliability of the solution.

Keywords MCDM rank model · Target value of attributes · Attribute weighting · Attribute aggregation techniques

2.1 MCDM Rank Model

Ranked MCDM methods constitute a multi-step procedure for the multidimensional classification or ordering of a set of m alternatives Ai, each of which is characterized by a set of n attributes with respect to the selected criteria Cj [1–4]. Alternatives Ai and criteria Cj are non-formalized linguistic variables. Alternatives are interchangeable objects of the same nature. Let us define the set of possible (available) alternatives as a list A = {A1, A2, . . ., Am}. As a rule, this is some subset of the alternatives available for selection. Each alternative is defined by a multidimensional attribute vector. Object attributes are characteristic features of objects that matter in decision-making; they can be organized in a hierarchical structure, have individual measurement scales, and be independent or partially dependent. An important point in the process of developing and making decisions is the formation of a set of essential criteria and the identification of the relationships between them (independence in terms of
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_2


Table 2.1 Decision matrix D0 [8 × 5], benefit (+) / cost (−) criteria

Alternatives   C1 (+)   C2 (+)   C3 (−)   C4 (+)   C5 (−)
A1             6500     85       667      140      1750
A2             5800     83       564      145      2680
A3             4500     71       478      150      1056
A4             5600     76       620      135      1230
A5             4200     74       448      160      1480
A6             5900     80       610      163      1650
A7             4500     71       478      150      1056
A8             6000     81       580      178      2065

utility, preferences, indifference, etc.). In the general case, it is required to determine the priority or importance of a feature in the decision-making procedure by specifying the criteria weights. Criteria are formulated based on the meaning of the declared goal. The accepted set of features is defined as the criteria set of the selection problem and denoted by the list C = {C1, C2, . . ., Cn}, and the properties of each alternative are determined by the vector of its attributes Ai = (ai1, ai2, . . ., ain). A characteristic feature of multi-criteria choice is the presence of competing alternatives: when comparing two alternatives, one takes precedence over the other only in terms of some of the attributes. The attribute values of the ith alternative with respect to the jth criterion in the selected measurement scale determine the decision matrix DM = {aij}. The formalized part of decision-making consists in specifying and then processing the decision matrix, which makes it possible to rank the alternatives within the framework of the chosen model. Table 2.1 provides the input data for a standard decision problem that will be used throughout the book as a base example: a decision matrix of dimension [8 × 5], with 8 alternatives and 5 criteria. Each alternative is defined by a set of 5 attributes in the context of the selected criteria. The third and fifth attributes are cost attributes: smaller values are preferred when choosing an alternative. The decision-making procedure includes setting weights for each criterion, bringing the attribute values to a single dimensionless scale (normalization), choosing a model for aggregating the attributes of the alternatives, and determining the performance indicator of the alternatives. A standard decision problem is to determine the best alternative according to some criterion.
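The base example of Table 2.1 can be set up in code. The book's own calculations are performed in MatLab; the following Python fragment is an illustrative sketch (not the author's code) that stores the decision matrix and applies one common normalization variant, Max-Min with inversion of the two cost criteria, so that 1 is always the best value:

```python
# Decision matrix D0 [8 x 5] from Table 2.1; rows are alternatives A1..A8.
D = [
    [6500, 85, 667, 140, 1750],
    [5800, 83, 564, 145, 2680],
    [4500, 71, 478, 150, 1056],
    [5600, 76, 620, 135, 1230],
    [4200, 74, 448, 160, 1480],
    [5900, 80, 610, 163, 1650],
    [4500, 71, 478, 150, 1056],
    [6000, 81, 580, 178, 2065],
]
is_cost = [False, False, True, False, True]  # C3 and C5: smaller is better

def max_min_normalize(D, is_cost):
    """Map each column to [0, 1]; invert cost columns so 1 is always best."""
    cols = list(zip(*D))
    R = []
    for row in D:
        r = []
        for j, a in enumerate(row):
            lo, hi = min(cols[j]), max(cols[j])
            v = (a - lo) / (hi - lo)
            r.append(1.0 - v if is_cost[j] else v)
        R.append(r)
    return R

R = max_min_normalize(D, is_cost)
```

With this convention, A1 (the maximum of benefit criterion C1) gets 1.0 in the first column, and A5 (the minimum of cost criterion C3) gets 1.0 in the third.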
Table 2.2 shows the typical structure of a multi-criteria decision-making problem on a discrete set of alternatives. For each alternative Ai, the MCDM rank model determines the value Qi of a performance indicator, on the basis of which the ranking of alternatives and subsequent decision-making are carried out:


Table 2.2 A typical structure of a multiple-criteria decision-making problem

Alternatives   C1    C2    ...   Cn    Performance indicator   Rank
A1             a11   a12   ...   a1n   Q1                      R1
A2             a21   a22   ...   a2n   Q2                      R2
...            ...   ...   ...   ...   ...                     ...
Am             am1   am2   ...   amn   Qm                      Rm
Weights        ω1    ω2    ...   ωn

Criteria weights methods: AHP, DEMATEL, BWM, SWARA, expert methods, Entropy, CRITIC, etc.
Normalization methods: 'norm' = {Max, Sum, Vec, Max-Min, Max2, Log, . . .}
Distance metric between two n-dimensional objects: 'dm' = {Lp-metric, . . .} (L1: City Block; L2: Euclidean; L∞: Chebyshev; . . .)
Aggregation methods: WSM, WPM, COPRAS, TOPSIS, VIKOR, GRA
Other parameters: balancing factor, preference parameters, distinguishing coefficient, etc.

$$A_i \xrightarrow{f} Q_i, \quad i = 1, \ldots, m,$$
$$Q = f(A, C, \mathrm{DM}, \mathrm{'w'}, \mathrm{'norm'}, \mathrm{'dm'}),$$
$$A_p \succ A_q \succ \ldots \succ A_r \succ A_s, \quad p, q, r, s \in \{1, 2, \ldots, m\}. \tag{2.1}$$

The MCDM rank model includes the choice of a set of alternatives (A) and a set of criteria (C); the assessment of the attribute values of the alternatives in the context of each criterion, i.e., the decision matrix (DM); a method for estimating the criteria weights ('w'); a normalization method ('norm') for the decision matrix; the choice of a metric for calculating distances in the n-dimensional space of criteria ('dm'); and a method ( f ) for aggregating the attributes of the alternatives to calculate the performance indicator (Q) of each alternative (also called key performance indicator (KPI), preference score, or assessment score). The performance score Q is constructed by aggregating the attributes of the alternatives in such a way as to take into account the contribution of the attribute of the alternative for each criterion. For example, in the weighted sum method (WSM, or SAW):

$$Q_i = \sum_{j=1}^{n} w_j \cdot r_{ij}, \tag{2.2}$$


where r_ij is the normalized rating of the ith alternative with respect to the jth criterion. This rule is defined by the normalization function Norm(·):

$$a_{ij} \xrightarrow{\mathrm{Norm}(a)} r_{ij}, \quad \text{or} \quad r_{ij} = \mathrm{Norm}(a_{ij}). \tag{2.3}$$

Normalized values r_ij of the natural attribute values a_ij can be obtained using one of the available normalization methods: linear, non-linear, or expert scales (desirability scale, preference scale). In expression (2.2), w_j determines the weight of the jth criterion, i.e., its relative importance in the list of criteria. As a rule, the weights are normalized:

$$\sum_{j=1}^{n} w_j = 1. \tag{2.4}$$

The alternatives are ranked according to the position number in the ordered list in descending or ascending order of efficiency (depending on the aggregation method).
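The ranking step described above can be sketched in a few lines of Python (an illustrative sketch with made-up data; the book's calculations are in MatLab): compute Q_i by Eq. (2.2) from a normalized matrix and assumed weights, then assign rank 1 to the highest score.

```python
def saw_scores(R, w):
    """Weighted sum Q_i = sum_j w_j * r_ij over a normalized matrix R."""
    return [sum(wj * r for wj, r in zip(w, row)) for row in R]

def ranks_descending(Q):
    """Rank 1 = highest performance indicator."""
    order = sorted(range(len(Q)), key=lambda i: -Q[i])
    ranks = [0] * len(Q)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

# Toy normalized matrix: 3 alternatives, 2 criteria; weights sum to 1
R = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
w = [0.7, 0.3]
Q = saw_scores(R, w)        # approx [0.7, 0.3, 0.5]
print(ranks_descending(Q))  # -> [1, 3, 2]
```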

2.2 The Target Value of Attributes

The target value of an attribute in the context of selection tasks can be of three types:

(1) Larger-the-better (LTB): larger is better, smaller values are not desirable. Examples of this type are profit, durability, strength, efficiency, etc.
(2) Smaller-the-better (STB): smaller is better, higher values are undesirable, such as pollution, fuel consumption, wear, etc.
(3) Nominal-the-best (NTB): the attribute has a specific target (nominal) value that satisfies a customer or technology need, for example, dimensions, viscosity, consistency, clearance, etc.

In the first case, the criterion is designated as a "profit" or "benefit" criterion. STB attributes define cost ("cost") criteria. For the NTB target value, there is no well-established term in the literature; the term "target criteria" is often used, although all criteria have a purpose. In this book, criteria with an NTB target value are referred to as target criteria or t-criteria. Attribute aggregation requires agreement on the direction of the criteria, which is achieved by inverting the goal from a minimum to a maximum or vice versa. The inversion is performed at the normalization step by inverting the attribute values; algorithms for such an inversion are detailed in Chap. 10. The choice of direction for maximizing or minimizing the performance indicator does not affect the ranking result. As a rule, this choice is determined by the ratio of the number of criteria for which "less is better" or "more is better," following the principle of reducing the number of algebraic data transformations.


For the NTB case, where the attribute's target value is the best, normalization is performed such that the normalized nominal value is the highest for the maximization direction or the lowest for the minimization direction. The attribute values for the target criteria are normalized with this choice already taken into account, or the values are inverted. Various normalization procedures for target criteria are detailed in Chap. 10.

2.3 Significance of Criteria: Multivariate Assessment

Decision-making methods based on several criteria form an integral indicator of the effectiveness of alternatives, taking into account coefficients of the relative importance of the individual indicators in achieving the goal. Criteria weights quantify their significance and can substantially affect the outcome of the decision-making process. The weight coefficients of the components of a complex system can be obtained in different ways. One generally accepted classification of criteria weight estimation methods divides them into three categories: subjective, objective, and integrated (combined) weighting approaches [5–7]. A hierarchical classification of methods for determining importance coefficients based on measurement theory is given in [8]; its 19 groups cover a significant number of well-known methods for determining the importance coefficients of criteria. The classification distinguishes 2 classes of methods according to whether information is processed in primary or derived measurement scales and is based on the theory of criteria importance developed by V. Podinovskii [9–12]. Determination of subjective weights is based on the opinions of experts or groups of experts representing the views of various stakeholders. These include the direct rating (DR) method [13], the point allocation (PA) method [14], the attribute ranking method [15–17], and programming methods [18–20]. A further set of methods is based on pairwise comparison of criteria: the Analytic Hierarchy Process (AHP) and the Eigenvector (EV) method [21], the Best-Worst Method (BWM) [22], the DEcision-MAking Trial and Evaluation Laboratory (DEMATEL) method [23], Step-wise Weight Assessment Ratio Analysis (SWARA) [24], the FUll COnsistency Method (FUCOM) [25], etc. One of the important problems of subjective methods is assessing the consistency of expert opinions.
For example, in the Analytic Hierarchy Process (AHP), a consistency index is determined, which increases the reliability of the weight estimates. Other procedures for reconciling expert estimates are based on statistical methods and correlation. The group of methods under the general name of "objective" methods for estimating criteria weights uses the information contained in the decision matrix. These include the Entropy Weighting Method (EWM) [26, 27], CRiteria Importance Through Inter-criteria Correlation (CRITIC) [28], Standard Deviation (SD), and their modifications [29–32]. For these methods, there is no answer to the


question of how fully and objectively a limited sample of alternative attributes reflects the importance of the criteria. Obviously, the result is completely determined by the decision matrix.

2.3.1 Subjective Weighting Methods: Pairwise Comparisons and AHP Process

It is assumed that the reader is familiar with the technique, and the problems, of determining criteria weights from a matrix of pairwise comparisons within the Analytic Hierarchy Process (AHP) [33]. In the first step, the hierarchical structure of the criteria set is formed. Then, for each hierarchical level, the decision-maker first gives linguistic pairwise comparisons of the criteria on a selected gradation scale, then converts them into numerical pairwise comparisons using a chosen numerical scale, and finally derives a priority vector from the numerical pairwise comparisons. The pairwise comparison matrix in AHP determines the relative importance of the criteria wi (i = 1, . . ., n):

$$A = (a_{ij}) = \begin{pmatrix} w_1/w_1 & w_1/w_2 & \ldots & w_1/w_n \\ w_2/w_1 & w_2/w_2 & \ldots & w_2/w_n \\ \ldots & \ldots & \ldots & \ldots \\ w_n/w_1 & w_n/w_2 & \ldots & w_n/w_n \end{pmatrix}. \tag{2.5}$$

The process of obtaining the priority vector from the numerical matrix of pairwise comparisons is itself multivariate. There are a large number of methods for processing pairwise comparisons for prioritization [34, 35], among which the most commonly used are the EigenVector Method (EVM) [21] and the Logarithmic Least Squares Method (LLSM) [36], presented below.

(1) The Eigenvector Method takes as weights the components of the eigenvector of the matrix A corresponding to the largest eigenvalue λmax:

$$A \bar{w} = \lambda_{\max} \bar{w}. \tag{2.6}$$

This vector is normalized by the Sum method.

(2) The Logarithmic Least Squares Method uses the L2-metric to define the objective function of the following optimization problem:


$$\min \sum_{i=1}^{n} \sum_{j>i} \left( \ln a_{ij} - \ln \frac{w_i}{w_j} \right)^2, \tag{2.7}$$

$$\text{s.t.} \quad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1. \tag{2.8}$$

This solution can be found as the geometric mean of the rows [35] and is equivalent to the Geometric Mean Method (GMM):

$$w_i = \frac{\left( \prod_{j=1}^{n} a_{ij} \right)^{1/n}}{\sum_{i=1}^{n} \left( \prod_{j=1}^{n} a_{ij} \right)^{1/n}}. \tag{2.9}$$
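Equation (2.9) is simple enough to sketch directly. The following Python fragment (illustrative, not the book's MatLab code; requires Python 3.8+ for `math.prod`) computes GMM weights and, as a sanity check, recovers the weights from a perfectly consistent pairwise comparison matrix a_ij = w_i/w_j:

```python
import math

def gmm_weights(A):
    """Geometric Mean Method: w_i proportional to (prod_j a_ij)^(1/n),
    normalized to sum to 1 (the Sum method)."""
    n = len(A)
    g = [math.prod(row) ** (1.0 / n) for row in A]
    s = sum(g)
    return [gi / s for gi in g]

# Perfectly consistent pairwise matrix built from w = (0.5, 0.3, 0.2):
w_true = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in w_true] for wi in w_true]
print(gmm_weights(A))  # recovers ~[0.5, 0.3, 0.2]
```

For an inconsistent matrix (the usual case with real expert judgments), GMM and EVM generally give close but not identical priority vectors.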

2.3.2 Subjective Weighting Methods: Best–Worst Method

The Best–Worst Method (BWM) [22] performs pairwise comparisons of the best and worst criteria compared to the other criteria:

$$A_B = (a_{B1}, a_{B2}, \ldots, a_{Bn}), \quad (a_{BB} = 1) \quad \text{(Best-to-Others)}, \tag{2.10}$$

$$A_W = (a_{1W}, a_{2W}, \ldots, a_{nW}), \quad (a_{WW} = 1) \quad \text{(Others-to-Worst)}. \tag{2.11}$$

Numerical pairwise comparisons in the basic version of the BWM were implemented by the author of the method [22] using the Saaty numerical scale with significance estimates from 1 to 9. To determine the weight vector, the following optimization problem is solved:

$$\min \xi^{L}, \tag{2.12}$$

$$\text{s.t.} \quad \left| w_B - a_{Bj} w_j \right| \le \xi^{L}, \quad \forall j, \tag{2.13}$$

$$\left| w_j - a_{jW} w_W \right| \le \xi^{L}, \quad \forall j, \tag{2.14}$$

$$w_j \ge 0, \quad \sum_{j=1}^{n} w_j = 1. \tag{2.15}$$
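The min-max program (2.12)–(2.15) is normally solved exactly, e.g., as a linear program. Purely for illustration, the following Python sketch (not the method's standard solver) approximates the solution by a naive grid search over the weight simplex; with comparison vectors that are perfectly consistent, the true weights are recovered with ξ close to 0:

```python
from itertools import product

def bwm_grid_search(aB, aW, best, worst, step=0.02):
    """Naive grid search over the weight simplex minimizing the maximum
    violation xi = max_j( |w_B - a_Bj*w_j|, |w_j - a_jW*w_W| )."""
    n = len(aB)
    ticks = [round(k * step, 10) for k in range(int(round(1 / step)) + 1)]
    best_w, best_xi = None, float("inf")
    for w in product(ticks, repeat=n):
        if abs(sum(w) - 1.0) > 1e-9:
            continue  # keep only weight vectors that sum to 1
        xi = max(max(abs(w[best] - aB[j] * w[j]),
                     abs(w[j] - aW[j] * w[worst])) for j in range(n))
        if xi < best_xi:
            best_w, best_xi = w, xi
    return best_w, best_xi

# Consistent comparisons derived from w = (0.6, 0.3, 0.1); C1 best, C3 worst:
aB = [1.0, 2.0, 6.0]   # Best-to-Others: w_B / w_j
aW = [6.0, 3.0, 1.0]   # Others-to-Worst: w_j / w_W
w, xi = bwm_grid_search(aB, aW, best=0, worst=2)
print(w, xi)  # close to (0.6, 0.3, 0.1) with xi ~ 0
```

With real (inconsistent) Saaty-scale judgments, the minimal ξ is positive and serves as a consistency indicator of the comparisons.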

2.3.3 Objective Weighting Methods: Entropy, CRITIC, SD

Entropy Weighting Method (EWM) [26, 27, 37]
The values of the decision matrix are transformed into the segment [0, 1] using Max-Min normalization (2.16), with simultaneous inversion (2.17) of the cost criteria values:

$$r_{ij} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}}, \quad r_{ij} \in [0, 1], \tag{2.16}$$

$$r_{ij} = \frac{a_j^{\max} - a_{ij}}{a_j^{\max} - a_j^{\min}}, \quad r_{ij} \in [0, 1]. \tag{2.17}$$

The intensity p_ij of the jth attribute of the ith alternative is calculated for each criterion (Sum method):

$$p_{ij} = \frac{r_{ij}}{\sum_{i=1}^{m} r_{ij}}, \quad \forall i = 1, \ldots, m, \ j = 1, \ldots, n, \qquad \sum_{i=1}^{m} p_{ij} = 1. \tag{2.18}$$

The entropy e_j and the key indicator q_j of each criterion are calculated as:

$$e_j = -\frac{1}{\ln m} \sum_{i=1}^{m} p_{ij} \ln p_{ij}, \quad j = 1, \ldots, n; \qquad \left( \text{if } p_{ij} = 0 \Rightarrow p_{ij} \ln p_{ij} = 0 \right), \tag{2.19}$$

$$q_j = 1 - e_j, \quad j = 1, \ldots, n. \tag{2.20}$$

The weight of each criterion is:

$$w_j = q_j \Big/ \sum_{k=1}^{n} q_k, \quad j = 1, \ldots, n. \tag{2.21}$$

The entropy of the attributes of alternatives for each criterion is a measure of the significance of this criterion. It is believed that the lower the entropy of the criterion, the more valuable information the criterion contains.
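Equations (2.18)–(2.21) can be sketched compactly in Python (illustrative code, not the book's MatLab implementation; it assumes every normalized column has a nonzero sum):

```python
import math

def entropy_weights(R):
    """Entropy weights from a normalized matrix R (cost criteria already
    inverted, values in [0, 1])."""
    m, n = len(R), len(R[0])
    q = []
    for j in range(n):
        col = [R[i][j] for i in range(m)]
        s = sum(col)
        p = [c / s for c in col]               # intensities, eq. (2.18)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        q.append(1.0 - e)                      # key indicator, eq. (2.20)
    total = sum(q)
    return [qj / total for qj in q]            # weights, eq. (2.21)

# A constant column carries no information: entropy 1, hence weight 0
R = [[1.0, 0.2], [1.0, 0.8], [1.0, 0.5]]
print(entropy_weights(R))  # ~[0.0, 1.0]
```

The toy example makes the interpretation above concrete: the first criterion does not discriminate between alternatives at all, so all the weight goes to the second.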

CRiteria Importance Through Inter-criteria Correlation (CRITIC) [28]
The values of the decision matrix are transformed based on the concept of the ideal point. Determine the "best" (B = {b_j}) and "worst" (T = {t_j}) solutions ([1 × n] vectors) over all attributes and form the relative deviation matrix V [m × n]:

$$r_{ij} = \frac{a_{ij} - b_j}{b_j - t_j}. \tag{2.22}$$

Determine the standard deviation s ([1 × n] vector) over the columns of V:

$$s_j = \mathrm{std}(V) = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( r_{ij} - \bar{r}_j \right)^2}. \tag{2.23}$$

Determine the linear correlation matrix (c_jk) ([n × n] matrix) over the columns of V (the correlation coefficient between the vectors r_j and r_k):

$$c_{jk} = \mathrm{corr}(V) = \frac{\sum_{i=1}^{m} (r_{ij} - \bar{r}_j)(r_{ik} - \bar{r}_k)}{\sqrt{\sum_{i=1}^{m} (r_{ij} - \bar{r}_j)^2 \sum_{i=1}^{m} (r_{ik} - \bar{r}_k)^2}}, \quad j, k = 1, \ldots, n. \tag{2.24}$$

Calculate the key indicator, and then the criteria weights by Eq. (2.21):

$$q_j = s_j \cdot \sum_{k=1}^{n} \left( 1 - c_{jk} \right), \quad j = 1, \ldots, n. \tag{2.25}$$

In the CRITIC method, the standard deviation s_j is a measure of the significance of the jth criterion. The relationships between the criteria are accounted for through the correlation matrix, which distributes weight between correlated criteria via the reduction coefficients (1 − c_jk) [32]. The sum in formula (2.25) is a measure of the conflict created by the jth criterion in relation to the rest of the criteria. Finally, the amount of information contained in the jth criterion is determined by the multiplicative aggregation of these measures in Eq. (2.25). The Spearman rank correlation coefficient could be used instead of c_jk in order to provide a more general measure of the relationship between the rank orders of the elements of the vectors r_j and r_k.
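The CRITIC computation (2.23)–(2.25) can be sketched in plain Python (illustrative code, not the book's MatLab implementation; it assumes no column of V is constant, so standard deviations and correlations are well defined):

```python
def critic_weights(V):
    """CRITIC weights from the relative-deviation matrix V (eq. 2.22):
    q_j = s_j * sum_k (1 - c_jk), normalized by eq. (2.21)."""
    m, n = len(V), len(V[0])
    cols = [[V[i][j] for i in range(m)] for j in range(n)]
    means = [sum(c) / m for c in cols]
    devs = [[x - mu for x in c] for c, mu in zip(cols, means)]
    sds = [(sum(d * d for d in dv) / m) ** 0.5 for dv in devs]

    def corr(j, k):
        num = sum(a * b for a, b in zip(devs[j], devs[k]))
        den = (sum(a * a for a in devs[j]) * sum(b * b for b in devs[k])) ** 0.5
        return num / den

    q = [sds[j] * sum(1.0 - corr(j, k) for k in range(n)) for j in range(n)]
    total = sum(q)
    return [qj / total for qj in q]

# Two perfectly anti-correlated criteria are in maximal conflict and,
# having equal spread, receive equal weight:
V = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
print(critic_weights(V))  # [0.5, 0.5]
```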

Standard Deviation (SD)
The values of the decision matrix are transformed into the segment [0, 1] using Max-Min normalization with simultaneous inversion of the cost criteria values by Eqs. (2.16) and (2.17). The key indicator and the criteria weights are then calculated by Eq. (2.21):

$$q_j = s_j = \mathrm{std}(r_{ij}) = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( r_{ij} - \bar{r}_j \right)^2}. \tag{2.26}$$

The standard deviation of r_j is a measure of the value of that criterion to the decision-making process. Currently, there are no criteria for preferring one weighting method over another. The variety of methods for estimating criteria weights introduces additional uncertainty into the rank model (2.1). For practical purposes, it is recommended to determine the degree of difference between the estimates obtained by different methods, although it is difficult to find a single basis for comparing the results because the methods are constructed on different foundations.
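The SD method is the simplest of the three objective schemes; a minimal Python sketch (illustrative, not the book's code) of Eq. (2.26) followed by the normalization of Eq. (2.21):

```python
def sd_weights(R):
    """Standard-deviation weights: q_j = std of column j of the normalized
    matrix R, normalized to sum to 1 (eq. 2.21)."""
    m, n = len(R), len(R[0])
    q = []
    for j in range(n):
        col = [R[i][j] for i in range(m)]
        mu = sum(col) / m
        q.append((sum((x - mu) ** 2 for x in col) / m) ** 0.5)
    total = sum(q)
    return [qj / total for qj in q]

# A constant column (zero spread) gets zero weight:
R = [[0.0, 0.5], [1.0, 0.5]]
print(sd_weights(R))  # [1.0, 0.0]
```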

2.4 Aggregation of the Attributes: An Overview of Some Methods

One way of solving the choice problem on a finite set of alternatives is to rank the alternatives based on an integral performance indicator [1–4]. Aggregation of normalized attribute values within the rank model (2.1) transforms the original multi-criteria decision-making problem, with criteria of different dimensions and directions, into a one-dimensional problem of ranking alternatives in descending or ascending order of the integral performance index Qi. According to the classification of the authors of [38], MCDM methods can be divided into three groups: (G1) Value Measurement Methods, such as SAW (Simple Additive Weighting) [1] and WASPAS (Weighted Aggregated Sum Product Assessment) [39]; (G2) Goal or Reference Level Models, such as TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [1] and VIKOR (VIseKriterijumska Optimizacija i kompromisno Resenje, in Serbian; Multiple-Criteria Optimization and Compromise Solution) [40]; (G3) Outranking Techniques, such as PROMETHEE (Preference Ranking Organization METHod for Enrichment of Evaluations) [41], ELECTRE (ELimination Et Choix Traduisant la REalité, in French; ELimination and Choice Expressing the Reality) [42], and ORESTE (Organisation, Rangement Et SynThèse de données relationnelles, in French; Organization, Arrangement and Synthesis of Relational Data) [43, 44]. Numerous solutions to the various problems presented in this book are obtained using methods from all three groups: SAW (WSM), WPM, WASPAS, COPRAS, TOPSIS, VIKOR, GRA, PROMETHEE-II, ORESTE. The step-by-step algorithms of all aggregation procedures are implemented in MatLab.

2.4.1 Value Measurement Methods

Simple Additive Weighting (SAW) or Weighted Sum Method (WSM) [1] The performance indicator Q_i of the ith alternative is determined as the weighted sum of the normalized attribute estimates r_ij with the weights w_j of the jth criterion:

Q_i = \sum_{j=1}^{n} w_j \, r_{ij},   (2.27)

where \sum_{j=1}^{n} w_j = 1.

SAW does not limit the choice of normalization method. Aggregation requires a mandatory inversion of the normalized values for the cost criteria.
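As an illustration (not the book's own MATLAB code), Eq. (2.27) reduces to a few lines; the matrix, weights, and normalized values below are hypothetical:

```python
# SAW / WSM sketch of Eq. (2.27); the data below are illustrative only.
# r: 3 alternatives x 2 benefit criteria, already normalized into (0, 1].
r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]
w = [0.6, 0.4]  # criteria weights, sum(w) = 1

# Q_i = sum_j w_j * r_ij
Q = [sum(wj * rij for wj, rij in zip(w, row)) for row in r]
best = max(range(len(Q)), key=Q.__getitem__)  # highest Q wins
```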

Weighted Product Method (WPM) [39] The performance indicator Q_i of the ith alternative is determined as:

Q_i = \prod_{j=1}^{n} r_{ij}^{\,w_j}.   (2.28)

Attribute aggregation using WPM is non-linear. In accordance with the domain of the function (2.3), the domain of normalized values is a subset of (0, 1], i.e., the normalized values are not negative. For aggregation, a mandatory inversion of normalized values for cost criteria is required.
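A sketch of Eq. (2.28) on the same hypothetical data as above illustrates the non-linearity: with these toy numbers the WPM winner differs from the SAW winner.

```python
# WPM sketch of Eq. (2.28); illustrative data, normalized values in (0, 1].
import math

r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]
w = [0.6, 0.4]

# Q_i = prod_j r_ij ** w_j  (breaks down at r_ij = 0, hence the (0, 1] domain)
Q = [math.prod(rij ** wj for wj, rij in zip(w, row)) for row in r]
best = max(range(len(Q)), key=Q.__getitem__)
```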

Weighted Aggregated Sum Product Assessment (WASPAS) [39] WASPAS is a mixture in proportions λ and (1 − λ) of the Weighted Sum Method and the Weighted Product Method:

Q_i = \lambda \sum_{j=1}^{n} w_j \, r_{ij} + (1 - \lambda) \prod_{j=1}^{n} r_{ij}^{\,w_j}.   (2.29)
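Eq. (2.29) can be sketched by reusing the two previous aggregations; the data and λ = 0.5 are illustrative:

```python
# WASPAS sketch of Eq. (2.29): convex mix of WSM and WPM scores.
import math

r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]
w = [0.6, 0.4]
lam = 0.5  # mixing proportion lambda

wsm = [sum(wj * rij for wj, rij in zip(w, row)) for row in r]
wpm = [math.prod(rij ** wj for wj, rij in zip(w, row)) for row in r]
Q = [lam * a + (1 - lam) * b for a, b in zip(wsm, wpm)]
best = max(range(len(Q)), key=Q.__getitem__)
```

By construction each WASPAS score lies between the corresponding WSM and WPM scores.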


Multi-Attributive Border Approximation Area Comparison (MABAC) [45] The efficiency indicator of alternatives in the MABAC method is formed as the algebraic sum of the deviations of the weighted, one-shifted normalized values from their geometric mean:

Q_i = \sum_{j=1}^{n} \left( v_{ij} - g_j \right),   (2.30)

where

v_{ij} = (r_{ij} + 1) \, \omega_j; \quad g_j = \left( \prod_{i=1}^{m} v_{ij} \right)^{1/m}, \quad i = 1, \ldots, m; \; j = 1, \ldots, n.

The best alternative is the one with the highest Qi score.
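A minimal sketch of Eq. (2.30), with hypothetical normalized data; Q_i > 0 places alternative i above the border approximation area, Q_i < 0 below it:

```python
# MABAC sketch of Eq. (2.30): deviations of v_ij = (r_ij + 1) * w_j from the
# per-criterion geometric mean g_j. Data illustrative.
import math

r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]
w = [0.6, 0.4]
m, n = len(r), len(w)

v = [[(r[i][j] + 1) * w[j] for j in range(n)] for i in range(m)]
g = [math.prod(v[i][j] for i in range(m)) ** (1.0 / m) for j in range(n)]
Q = [sum(v[i][j] - g[j] for j in range(n)) for i in range(m)]
best = max(range(m), key=Q.__getitem__)
```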

Complex Proportional Assessment (COPRAS) Method [46] The aggregation method constructs a performance indicator of alternatives as a function of the two arguments S_{+i} and S_{-i}:

Q_i = S_{+i} + \frac{\sum_{i=1}^{m} S_{-i}}{S_{-i} \sum_{i=1}^{m} \left( 1 / S_{-i} \right)},   (2.31)

where

S_{+i} = \sum_{j=1}^{n} w_j \, r_{ij} \;\; \text{for } j \in C_j^{+}, \qquad S_{-i} = \sum_{j=1}^{n} w_j \, r_{ij} \;\; \text{for } j \in C_j^{-}.   (2.32)

The COPRAS attribute aggregation function is non-linear in cost attributes.
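A sketch of Eqs. (2.31)–(2.32); because benefit and cost attributes are summed separately, no inversion of the cost column is needed. The matrix, weights, and criterion types are hypothetical:

```python
# COPRAS sketch of Eqs. (2.31)-(2.32). Data illustrative:
# Sum-normalized values; column 0 is a benefit, column 1 a cost criterion.
r = [[0.5, 0.2],
     [0.3, 0.3],
     [0.2, 0.5]]
w = [0.6, 0.4]
benefit = [True, False]

S_plus  = [sum(w[j] * row[j] for j in range(2) if benefit[j])     for row in r]
S_minus = [sum(w[j] * row[j] for j in range(2) if not benefit[j]) for row in r]

tot = sum(S_minus)
inv = sum(1.0 / s for s in S_minus)
Q = [S_plus[i] + tot / (S_minus[i] * inv) for i in range(len(r))]
best = max(range(len(Q)), key=Q.__getitem__)
```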

2.4.2 Goal or Reference Level Models

Another group of aggregation methods is based on the concept of distance between data units (TOPSIS, GRA). When the variables in a multivariate dataset are on different scales, it makes sense to calculate the distances to an ideal (or desired) object after some standardization. The best specimen is the one closest to the ideal (desired) object. However, it should be noted that the calculation of distances is itself multivariate, owing to the choice of different distance metrics.

Distance Metric Selection of a metric to measure the remoteness of two n-dimensional objects x and y:

L_p(x, y) = \left( \sum_{j=1}^{n} | x_j - y_j |^p \right)^{1/p}, \quad 1 \le p \le \infty,   (2.33)

with p one of: p = 1 (City Block, Manhattan distance), p = 2 (Euclidean distance), p = ∞ (Chebyshev distance):

L_\infty(x, y) = \max_j | x_j - y_j |.   (2.34)
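The three common cases of Eqs. (2.33)–(2.34) can be sketched in one helper; the sample points are arbitrary:

```python
# L_p distances of Eqs. (2.33)-(2.34) for the three common choices of p.
import math

def lp_distance(x, y, p):
    """Minkowski distance; p = math.inf gives the Chebyshev metric."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if math.isinf(p):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1.0 / p)

x, y = (0.0, 0.0), (3.0, 4.0)
d1 = lp_distance(x, y, 1)         # City Block (Manhattan)
d2 = lp_distance(x, y, 2)         # Euclidean
di = lp_distance(x, y, math.inf)  # Chebyshev
```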

Reference Point (RP) Method [47] The RP method determines the best alternative (strategy) based on the analysis of the matrix ν_ij of deviations (regrets) of the normalized values r_ij from the best indicator r_j^* for each criterion:

\nu_{ij} = \omega_j \, \left| r_j^{*} - r_{ij} \right|,   (2.35)

r_j^{*} = \begin{cases} \max_i r_{ij}, & \text{if } j \in C_j^{+} \\ \min_i r_{ij}, & \text{if } j \in C_j^{-} \end{cases}.   (2.36)

Regret shows the value lost by making the wrong decision. The choice of the best alternative is possible in the following three variants. (1) For each alternative (in each row), the maximum value of regret is found:

Q_i = \max_j \nu_{ij}.   (2.37)

The efficiency indicator Q_i can be interpreted as a degree of risk, and the method is an analog of Savage's minimax risk criterion, in which the risk has a minimum value in the most unfavorable situation. The best alternative is the one with the lowest loss:

\min_i \max_j \nu_{ij}.   (2.38)

(2) As a mirror strategy, one can use Wald's maximin criterion: a solution that guarantees the maximum gain under the worst environmental conditions:

\max_i \min_j \nu_{ij}.   (2.39)

(3) The Hurwitz pessimism–optimism compromise criterion, in which the alternative is selected that has the largest value of a linear combination of the minimum and maximum payoffs:

\max_i \left[ \beta \, \min_j \nu_{ij} + (1 - \beta) \, \max_j \nu_{ij} \right],   (2.40)

where β is the pessimism–optimism indicator (most often 0.5). During normalization, inversion of the cost-criteria values is not required. The three variants of the RP method lead to different rankings. The above strategies are used to solve problems in game theory (games with nature) or in risk theory. However, selecting alternatives based on best-of-worst, worst-of-best, or a combination of the two is questionable for MCDM problems.
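Variant (1), Eqs. (2.35)–(2.38), can be sketched as follows; the normalized matrix and weights are hypothetical, and both criteria are taken as benefit criteria:

```python
# RP sketch of Eqs. (2.35)-(2.38), Savage-style variant (1). Data illustrative.
r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]      # normalized values; both criteria treated as benefit
w = [0.6, 0.4]
m, n = len(r), len(w)

r_star = [max(r[i][j] for i in range(m)) for j in range(n)]  # best per criterion
v = [[w[j] * abs(r_star[j] - r[i][j]) for j in range(n)] for i in range(m)]

Q = [max(row) for row in v]               # (2.37): worst regret per alternative
best = min(range(m), key=Q.__getitem__)   # (2.38): min-max regret
```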

COmbinative Distance-based ASsessment (CODAS) The step-by-step procedure of the CODAS method is as follows [48]:
1. Form a negative (anti-ideal) vector of "weak" attribute values for each criterion:

r_j^{-} = \min_i r_{ij}; \quad j = 1, \ldots, n; \; i = 1, \ldots, m,   (2.41)

where r_ij are the normalized values.
2. For each alternative, calculate the distances E_i and T_i to the anti-ideal in the Euclidean and City Block metrics:

E_i = \left[ \sum_{j=1}^{n} w_j \left( r_{ij} - r_j^{-} \right)^2 \right]^{1/2},   (2.42)

T_i = \sum_{j=1}^{n} w_j \, \left| r_{ij} - r_j^{-} \right|.   (2.43)

3. Form a square matrix of dimension [m × m] of the relative efficiencies of the ith and kth alternatives:

H_{ik} = (E_i - E_k) + \psi(E_i - E_k) \, (T_i - T_k), \quad i, k = 1, \ldots, m,   (2.44)

where the function ψ is defined as:

\psi(x) = \begin{cases} 1, & \text{if } |x| \ge \tau \\ 0, & \text{if } |x| < \tau \end{cases}.   (2.45)

The preference parameter τ is recommended to be set to a value from 0.01 to 0.05. According to the authors of the method, the distance in the City Block metric provides additional information on the distinguishability of alternatives; indeed, the Euclidean distance never exceeds the City Block distance: E_i ≤ T_i.
4. Form the total values of the relative efficiencies:

Q_i = \sum_{k=1}^{m} H_{ik}.   (2.46)

The best alternative is the one with the highest Qi value.
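The full CODAS chain, Eqs. (2.41)–(2.46), can be sketched as below (illustrative normalized data, cost criteria assumed already inverted, τ = 0.02). In this toy example E for the second and third alternatives differ by less than τ, so their pairwise comparison falls back on the City Block distances:

```python
# CODAS sketch of Eqs. (2.41)-(2.46) with tau = 0.02. Data illustrative.
import math

r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]
w = [0.6, 0.4]
tau = 0.02
m, n = len(r), len(w)

r_anti = [min(r[i][j] for i in range(m)) for j in range(n)]       # (2.41)
E = [math.sqrt(sum(w[j] * (r[i][j] - r_anti[j]) ** 2 for j in range(n)))
     for i in range(m)]                                           # (2.42)
T = [sum(w[j] * abs(r[i][j] - r_anti[j]) for j in range(n))
     for i in range(m)]                                           # (2.43)

psi = lambda x: 1.0 if abs(x) >= tau else 0.0                     # (2.45)
H = [[(E[i] - E[k]) + psi(E[i] - E[k]) * (T[i] - T[k]) for k in range(m)]
     for i in range(m)]                                           # (2.44)
Q = [sum(H[i]) for i in range(m)]                                 # (2.46)
best = max(range(m), key=Q.__getitem__)
```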

Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [1] To determine the performance indicator Q_i of the ith alternative, a homogeneous function is used:

Q_i = \frac{S_i^{-}}{S_i^{+} + S_i^{-}},   (2.47)

where

v_{ij} = r_{ij} \, w_j, \quad S_i^{+} = d\left( v_{ij}, v_j^{+} \right), \quad S_i^{-} = d\left( v_{ij}, v_j^{-} \right),   (2.48)

v_j^{+} = \begin{cases} \max_i v_{ij}, & \text{if } j \in C_j^{+} \\ \min_i v_{ij}, & \text{if } j \in C_j^{-} \end{cases},   (2.49)

v_j^{-} = \begin{cases} \min_i v_{ij}, & \text{if } j \in C_j^{+} \\ \max_i v_{ij}, & \text{if } j \in C_j^{-} \end{cases}.   (2.50)

S_i^{+} and S_i^{-} are the distances d of the alternative A_i to the ideal and anti-ideal objects, respectively, in the n-dimensional attribute space, computed in one of the L_p metrics. The TOPSIS ranking result depends on the choice of distance metric.
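A sketch of Eqs. (2.47)–(2.50) with the L2 (Euclidean) metric; the matrix, weights, and criterion types are hypothetical. In this toy example the third alternative happens to coincide with the ideal (Q = 1) and the second with the anti-ideal (Q = 0):

```python
# TOPSIS sketch of Eqs. (2.47)-(2.50), Euclidean metric. Data illustrative:
# column 0 benefit, column 1 cost; r already normalized.
import math

r = [[0.8, 0.5],
     [0.6, 0.9],
     [1.0, 0.4]]
w = [0.6, 0.4]
benefit = [True, False]
m, n = len(r), len(w)

v = [[r[i][j] * w[j] for j in range(n)] for i in range(m)]
col = lambda j: [v[i][j] for i in range(m)]
v_pos = [max(col(j)) if benefit[j] else min(col(j)) for j in range(n)]  # ideal
v_neg = [min(col(j)) if benefit[j] else max(col(j)) for j in range(n)]  # anti-ideal

d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
S_pos = [d(v[i], v_pos) for i in range(m)]
S_neg = [d(v[i], v_neg) for i in range(m)]
Q = [S_neg[i] / (S_pos[i] + S_neg[i]) for i in range(m)]
best = max(range(m), key=Q.__getitem__)
```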


VIsekriterijumsko KOmpromisno Rangiranje (VIKOR) [40] Similar in structure to the TOPSIS method is the VIKOR method. At the first step of the VIKOR method, a matrix of deviations of the natural attribute values of the alternatives from the ideal and anti-ideal objects is formed. Formally, this procedure is the normalization of the natural attribute values using the Max-Min method (2.51), with inversion of the values for the cost criteria using the iMax-Min method (2.52):

r_{ij} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}},   (2.51)

r_{ij} = \frac{a_j^{\max} - a_{ij}}{a_j^{\max} - a_j^{\min}}.   (2.52)

To determine the performance indicator Q_i of the ith alternative, a homogeneous function is used based on the strategies of maximal individual utility (R) and group utility (S):

Q_i = \beta \, \frac{S_i - S^{*}}{S^{-} - S^{*}} + (1 - \beta) \, \frac{R_i - R^{*}}{R^{-} - R^{*}},   (2.53)

where

v_{ij} = r_{ij} \, w_j,   (2.54)

R_i = \max_j v_{ij}; \quad R^{*} = \min_i R_i; \quad R^{-} = \max_i R_i,   (2.55)

S_i = \sum_{j=1}^{n} v_{ij}; \quad S^{*} = \min_i S_i; \quad S^{-} = \max_i S_i.   (2.56)

The parameter β plays the role of a balancing factor between the total (group) utility S and the maximum individual deviation R: larger values of β emphasize group utility, while smaller values give more weight to the individual deviation. The result of the VIKOR procedure is three rating lists S, R, and Q. Alternatives are evaluated by sorting the values of S, R, and Q according to the criterion of the minimum value. As a compromise solution, the alternative A1 is proposed whose efficiency indicator Q has the lowest value, provided the following two conditions are met: (1) "acceptable advantage": Q(A2) − Q(A1) ≥ 1/(m − 1), where A2 is the alternative in the second position of the Q-rating list; (2) "acceptable decision stability": alternative A1 must also be best ranked on S and/or R.
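A sketch of Eqs. (2.51)–(2.56) with β = 0.5, on hypothetical raw data. Here the deviations are formed so that 0 is the ideal on every criterion (the (2.52)-type mapping for benefit criteria, the (2.51)-type for cost criteria), consistent with sorting Q by the minimum value:

```python
# VIKOR sketch of Eqs. (2.51)-(2.56), beta = 0.5. Raw data illustrative.
a = [[8.0, 100.0],
     [6.0,  80.0],
     [10.0, 120.0]]   # column 0 benefit, column 1 cost
w = [0.6, 0.4]
benefit = [True, False]
beta = 0.5
m, n = len(a), len(w)

col = lambda j: [a[i][j] for i in range(m)]
lo = [min(col(j)) for j in range(n)]
hi = [max(col(j)) for j in range(n)]
dev = [[(hi[j] - a[i][j]) / (hi[j] - lo[j]) if benefit[j]
        else (a[i][j] - lo[j]) / (hi[j] - lo[j]) for j in range(n)]
       for i in range(m)]   # 0 = ideal on each criterion

v = [[dev[i][j] * w[j] for j in range(n)] for i in range(m)]
S = [sum(row) for row in v]            # group utility        (2.56)
R = [max(row) for row in v]            # individual deviation (2.55)
Ss, Sm = min(S), max(S)
Rs, Rm = min(R), max(R)
Q = [beta * (S[i] - Ss) / (Sm - Ss) + (1 - beta) * (R[i] - Rs) / (Rm - Rs)
     for i in range(m)]                # (2.53)
best = min(range(m), key=Q.__getitem__)    # lowest Q is best
# condition (1), "acceptable advantage"
second = sorted(range(m), key=Q.__getitem__)[1]
adv = Q[second] - Q[best] >= 1.0 / (m - 1)
```

With these toy numbers `adv` is False, so by condition (1) a compromise set of the two top alternatives would be reported rather than a single winner.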


If one of the conditions (1) or (2) is not met, then a set of compromise solutions is proposed, which consists of:
– alternatives A1 and A2, if condition (1) is true and condition (2) is false, or
– alternatives A1, A2, . . ., Ak, if condition (1) is false, where k is the position in the ranking of the last alternative Ak satisfying Q(Ak) − Q(A1) < 1/(m − 1).

Preference Ranking Organization METHod for Enrichment Evaluations (PROMETHEE) [41] The preference of one alternative over another with respect to criterion j is expressed by a preference function f(x) of the attribute difference x, with thresholds p, q, s:

Usual: f(x) = 0 if x ≤ 0; 1 if x > 0.
U-shape (q threshold): f(x) = 0 if x ≤ q; 1 if x > q.
V-shape (p threshold): f(x) = x/p if 0 < x ≤ p; 1 if x > p; 0 otherwise.
Level (p and q thresholds): f(x) = 0 if x ≤ p; 0.5 if p < x < q; 1 if x ≥ q.
Linear (p and q thresholds): f(x) = 0 if x ≤ p; (x − p)/(q − p) if p < x < q; 1 if x ≥ q.
Gaussian (s threshold): f(x) = 1 − exp(−x²/(2s²)).


d_{is} = a_{ij} - a_{sj},   (2.63)

H_j = H_j\left( d_{is}, p, q \right),   (2.64)

V_{is} = \sum_{j=1}^{n} w_j \, H_j, \quad \text{an } [m \times m] \text{ matrix}.   (2.65)

Step 3. Determine the preference factors:

\Phi_i^{+} = \sum_{s = 1, s \ne i}^{m} V_{is}, \quad \Phi_i^{-} = \sum_{s = 1, s \ne i}^{m} V_{si},   (2.66)

Q_i = \Phi_i^{+} - \Phi_i^{-}.   (2.67)

The best alternative is the one with the highest Qi score.
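The pairwise comparisons (2.63)–(2.67) can be sketched with a V-shape preference function on both criteria; the raw matrix, weights, and threshold p are hypothetical. The net flows Q_i sum to zero by construction:

```python
# PROMETHEE-II sketch of Eqs. (2.63)-(2.67), V-shape preference function
# (threshold p) on both criteria. Raw data illustrative.
a = [[8.0, 7.0],
     [6.0, 9.0],
     [10.0, 4.0]]     # both criteria benefit
w = [0.6, 0.4]
p = 4.0               # V-shape preference threshold
m, n = len(a), len(w)

def vshape(x, p):
    # V-shape: 0 for x <= 0, x/p for 0 < x <= p, 1 for x > p
    return 0.0 if x <= 0 else min(x / p, 1.0)

# V[i][s]: weighted preference of alternative i over alternative s  (2.63)-(2.65)
V = [[sum(w[j] * vshape(a[i][j] - a[s][j], p) for j in range(n))
      for s in range(m)] for i in range(m)]

phi_pos = [sum(V[i][s] for s in range(m) if s != i) for i in range(m)]  # (2.66)
phi_neg = [sum(V[s][i] for s in range(m) if s != i) for i in range(m)]
Q = [phi_pos[i] - phi_neg[i] for i in range(m)]                         # (2.67)
best = max(range(m), key=Q.__getitem__)
```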

Organisation, Rangement Et Synthèse de données relationnelles (ORESTE) [43, 44]
Step 1. Transition from the matrix DM to a matrix of ranks (the columns of the matrix are replaced by their ranks):

r_{ij} = \mathrm{rank}\left( a_{ij} \mid \{ a_{1j}, a_{2j}, \ldots, a_{mj} \} \right), \; \forall i, j \; (i = 1, \ldots, m; \; j = 1, \ldots, n).   (2.68)

Step 2. Determine the ranks of the criteria:

rc_j = \mathrm{rank}\left( C_j \mid \{ C_1, C_2, \ldots, C_n \} \right), \; \forall j = 1, \ldots, n,   (2.69)

or

rc_j = \mathrm{rank}\left( w_j \mid \{ w_1, w_2, \ldots, w_n \} \right).   (2.70)

Step 3. Compute the projections of the ranks:

d_{ij} = \left[ (1 - \alpha) \, r_{ij}^{\,p} + \alpha \, rc_j^{\,p} \right]^{1/p}, \quad \alpha \in (0, 1),   (2.71)

with p one of: p = 1 (the weighted arithmetic mean rank), p = 2 (the quadratic mean rank), p = −1 (the harmonic mean rank), p = −∞ (min(r, rc)), p = ∞ (max(r, rc)).
Step 4. Calculate the global ranks Rd_ij of the projections d_ij and their row sums:

Rd_{ij} = \mathrm{rank}\left( d_{ij} \mid \{ d_{ij} \}_{i = 1, \ldots, m; \; j = 1, \ldots, n} \right),   (2.72)

R_i = \sum_{j=1}^{n} Rd_{ij}.   (2.73)

Step 5. Calculate the ranks of R_i (ORESTE-1):

Q_i = OutR_i = \mathrm{rank}\left( R_i \mid \{ R_1, R_2, \ldots, R_m \} \right).   (2.74)

Step 6. Calculate the preference factors C_ik:

C_{ik} = \frac{1}{2 n (m - 1)} \sum_{j=1}^{n} \frac{\left( Rd_{ij} - Rd_{kj} \right) + \left| Rd_{ij} - Rd_{kj} \right|}{2}.   (2.75)

The best alternative is the one with the lowest Qi score.
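Steps 1–5 can be sketched as below; the raw matrix and criteria ranks are hypothetical, the projection uses p = 2 and α = 0.5, and ties in the global ranking are resolved with mean (Besson) ranks, a common convention for ORESTE:

```python
# ORESTE-1 sketch of Eqs. (2.68)-(2.74): Besson (mean) ranks, quadratic-mean
# projection (p = 2, alpha = 0.5). Raw data illustrative; both criteria benefit.
a = [[8.0, 7.0],
     [6.0, 9.0],
     [10.0, 4.0]]
rc = [1, 2]           # criteria ranks: criterion 1 is the more important
alpha, p = 0.5, 2
m, n = len(a), len(rc)

def besson_ranks(values, descending=False):
    """Mean ranks, rank 1 = best; ties share the average position."""
    vals = [-v for v in values] if descending else list(values)
    return [sum(x < v for x in vals) + (sum(x == v for x in vals) + 1) / 2
            for v in vals]

# Step 1: per-criterion ranks of alternatives (rank 1 = best value)   (2.68)
r = list(zip(*[besson_ranks([a[i][j] for i in range(m)], descending=True)
               for j in range(n)]))

# Step 3: rank projections d_ij (2.71); Step 4: global ranks Rd (2.72)
d = [[((1 - alpha) * r[i][j] ** p + alpha * rc[j] ** p) ** (1 / p)
      for j in range(n)] for i in range(m)]
flat = [round(d[i][j], 9) for i in range(m) for j in range(n)]
Rd_flat = besson_ranks(flat)
Rd = [Rd_flat[i * n:(i + 1) * n] for i in range(m)]

R = [sum(row) for row in Rd]                       # (2.73)
Q = besson_ranks(R)                                # (2.74): lowest rank wins
best = min(range(m), key=Q.__getitem__)
```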

2.4.4 Rank Reversal Problem

The choice of a set of alternatives and criteria is not formalized. However, it should be borne in mind that a number of multi-criteria methods, such as AHP [51, 52], PROMETHEE [53, 54], and TOPSIS [55], may suffer from the well-known rank reversal problem, where the ranking changes when a non-dominated alternative is added or removed, or when a non-discriminating criterion is added or removed. Definition 1 [56]. Let A = {A1, A2, . . ., Am} be an alternative set, and let the ranking after evaluation by some evaluation method be A(1) ≻ A(2) ≻ . . . ≻ A(m), where A(i), i = 1, 2, . . ., m, is an alternative in A. By adding, deleting, or replacing alternatives in the original alternative set A, a new alternative set B = {B1, B2, . . ., Bk} is obtained. After ranking by the evaluation method, the ranking result is B(1) ≻ B(2) ≻ . . . ≻ B(k), where B(i), i = 1, 2, . . ., k, is an alternative in B. If for any two alternatives Bp, Bq ∈ A ∩ B there is no rank reversal, then the evaluation method is ranking stable; otherwise, the ranking is unstable. The choice of a set of alternatives and a set of criteria is an informal part of the MCDM rank model and is usually not explicitly included in (2.1). Any non-formalized procedures are referred to model design. Accordingly, rank methods require analysis of the stability of the ranking with respect to the choice of the sets of alternatives and criteria of the MCDM model. In the absence of formal criteria, the design decision is in many cases made on the basis of comparative analysis or of the experience and intuition of the researcher.
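The effect of Definition 1 is easy to reproduce. The sketch below (illustrative data) applies SAW with Max-Min normalization: adding a non-dominated alternative D changes the normalization bounds and flips the order of the first two alternatives:

```python
# Rank reversal demo: SAW + Max-Min normalization. Adding the non-dominated
# alternative D changes the normalization bounds and flips A vs. B. Illustrative.
def saw_maxmin(a, w):
    m, n = len(a), len(w)
    lo = [min(a[i][j] for i in range(m)) for j in range(n)]
    hi = [max(a[i][j] for i in range(m)) for j in range(n)]
    r = [[(a[i][j] - lo[j]) / (hi[j] - lo[j]) for j in range(n)]
         for i in range(m)]
    Q = [sum(w[j] * r[i][j] for j in range(n)) for i in range(m)]
    return sorted(range(m), key=lambda i: -Q[i])   # indices, best first

w = [0.6, 0.4]
abc  = [[10.0, 2.0], [8.0, 6.0], [6.0, 7.0]]   # A, B, C (benefit criteria)
abcd = abc + [[2.0, 20.0]]                     # add non-dominated D

rank3 = saw_maxmin(abc, w)    # B first, then A
rank4 = saw_maxmin(abcd, w)   # A first, then B: ranks of A and B reversed
```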

2.4.5 Distinguishability of the Performance Indicator of Alternatives

To determine the priority of alternatives, it is not enough to compare the absolute values of the efficiency indicator Q_i according to (2.1) or (2.2). The difference between two or more values of the performance indicator may be insignificant, which creates a plurality of options for the final choice of the decision maker. The difference in performance between two alternatives carries only partial information. In particular, small deviations may be due to random factors: an attribute may be approximate, the data source may be unreliable, a measurement may be erroneous, the measurements for different alternatives may have been made by different methods, some attributes may be random variables or given as interval values, and so on. Estimates of the attribute values of alternatives depend both on the measurement scales and on the type of the variable. If these estimates are stochastic quantities, then a sensitivity analysis of the solution to variations in these quantities is required [57–59]. Thus, in fact, the value of the efficiency indicator is determined with an error, Q_i ± ΔQ_i, and the distinguishability of alternatives is determined by the magnitude of the error ΔQ_i. Therefore, it is advisable to define a relative indicator to assess the distinguishability of alternatives:

dQ_p = \frac{Q_p - Q_{p+1}}{Q_1 - Q_m} \cdot 100\%, \quad p = 1, \ldots, m - 1,   (2.76)

where Q_p is the value of the performance indicator corresponding to the alternative of rank p. The dQ_p is the relative (given in the Q scale) gain or loss of the performance score for an ordered list of alternatives. In many cases, the error cannot be estimated. Then an "a priori" or expert assessment, expressed as a percentage, is used: for example, the error in evaluating the performance indicator of an alternative is 5% of its value. Then two alternatives whose relative gain dQ differs by less than the given a priori error should be considered indistinguishable. Another similar indicator is the intensity of the efficiency indicator of the pth alternative, defined as follows:

iQ_p = \frac{Q_p}{\sum_{i=1}^{m} Q_i} \cdot 100\%, \quad p = 1, \ldots, m.   (2.77)

The iQ and dQ scores are used to evaluate the distinguishability of alternatives and to compare the results of aggregation performed by different methods. Aggregation of normalized attribute values in the framework of the rank model (2.1) transforms the original multi-criteria decision-making problem with different-sized and differently directed criteria to a one-dimensional problem of ranking alternatives in descending or ascending order of the integral performance index Qi.
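The two indicators (2.76)–(2.77) can be sketched for an already ranked list of scores; the Q values and the 5% threshold are illustrative:

```python
# Distinguishability sketch of Eqs. (2.76)-(2.77) for an already ranked list
# of performance scores (illustrative values, Q[0] = best).
Q = [0.76, 0.75, 0.68, 0.40]          # Q_p sorted in descending order
m = len(Q)

dQ = [(Q[p] - Q[p + 1]) / (Q[0] - Q[m - 1]) * 100 for p in range(m - 1)]  # (2.76)
iQ = [q / sum(Q) * 100 for q in Q]                                        # (2.77)

# With an a priori error of, say, 5% on Q, adjacent alternatives whose dQ
# falls below the threshold are treated as indistinguishable.
indistinct = [p for p, g in enumerate(dQ) if g < 5.0]
```

By construction, the dQ gaps of an ordered list sum to 100%, as do the iQ intensities.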

2.5 Design of the MCDM Model

The above review of methods shows that all components of the MCDM rank model are multivariate. Thus, the design of the MCDM rank model (2.1) is determined by the choice of its basic elements and includes the choice of information-processing methods. Within the chosen decision-making model, methods and results are not necessarily comparable. Inconsistencies may arise from differences in the formulation of the choice problem and differences in how preference information is processed when applying different methods. If the research is focused on the most popular MCDM methods once the problem is identified (A, C, DM are defined), then the number of different available and consistent models is determined combinatorially as the product of the number of methods and the number of possible parameters. Table 2.4 presents one possible design of decision models, determined by combining the aggregation method F, the weight estimation method, the normalization method, and the choice of different distance metrics. The total number of models is 172, in the absence of criteria for the truth of the options.

Table 2.4 Design of the MCDM model

Weight estimation methods (common to all rows): (1) No priority of criteria; (2) Weighting by pairwise comparison (AHP); (3) Entropy-based method (EWM); (4) CRiteria Importance Through Inter-criteria Correlation (CRITIC).
Normalization methods for SAW, COPRAS, TOPSIS, GRA: (1) Max; (2) Sum; (3) Vec; (4) Max-Min; (5) dSum; (6) Z-score.

Aggregation method F | Distance metric | Number of models
(1) SAW | – | 1 × 4 × 6 = 24
(2) COPRAS | – | 1 × 4 × 6 = 24
(3) TOPSIS | (1) L1, (2) L2, (3) Inf | 1 × 4 × 6 × 3 = 72
(4) GRA | – | 1 × 4 × 6 = 24
(5) PROMETHEE-II, with the preference function set for each criterion: (a) all V-shape, (b) all Linear, (c) all Gaussian, (d) Linear–Gaussian | – | 4 × 4 = 16
(6) ORESTE-I, Sum normalization | (1) L2, (2) Inf | 1 × 4 × 2 = 8
(7) VIKOR | – | 1 × 4 × 1 = 4
Total: 172


In particular, the work [60] presents an approach for choosing a consistent solution based on the analysis of solutions obtained from 55 different versions of the model. Given that the number of different available methods for each of the model parameters is much larger, the number of options can be increased to 1000 or more. How much the ranking results differ within the studied models depends on many factors. First of all, the rating of alternatives is determined by the partial preference of the various alternatives on individual attributes, i.e., by the specified decision matrix. In a situation of competing alternatives, when one alternative is preferred over another according to several criteria while, vice versa, the other alternative dominates the first according to another group of criteria, the performance indicators of these alternatives may in some cases differ only slightly. In such a situation, variations in the choice of aggregation method, criteria weighting method, decision-matrix normalization method, and other model parameters can affect the final ranking. Accordingly, the consistency of the solution across different MCDM models increases the robustness of the solution. In the absence of formalized criteria for choosing an aggregation method, one approach recommends selection based on multiple voting (the Borda count methods [61, 62]). Multiple voting determines the effective group of methods that most often assign rank 1 (or 1–2, or 1–2–3) to the same alternative. This makes it possible, for example, to exclude from consideration methods that are not consistent with the majority. In the case of a lack of consistency of results, the analysis of decisions obtained from various models allows several best alternatives to be identified for the final decision by the decision maker.
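The multiple-voting idea can be sketched with a classic Borda count over the rankings produced by several methods; the method names and rankings below are hypothetical:

```python
# Borda-count voting across methods, a sketch of the multiple-voting idea.
# Each (hypothetical) method contributes a ranking: a list of alternative
# indices ordered best -> worst.
rankings = {
    "SAW":    [2, 0, 1],
    "TOPSIS": [2, 1, 0],
    "VIKOR":  [0, 2, 1],
}
m = 3  # number of alternatives

scores = [0] * m
for order in rankings.values():
    for pos, alt in enumerate(order):
        scores[alt] += (m - 1) - pos   # best gets m-1 points, worst gets 0

winner = max(range(m), key=scores.__getitem__)
```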

2.6 Conclusions

Aggregation of normalized attribute values transforms the original multi-criteria decision-making problem with different-sized and differently directed criteria into a one-dimensional problem of ranking alternatives in descending or ascending order of the integrated performance indicator. The formal structure of the MCDM rank model is given, and an overview is presented of the most popular methods for determining the weight coefficients of criteria and for aggregating the individual attributes within the framework of the MCDM rank model. Given the multiplicity of methods and the absence of formalized criteria for their choice, the consistency of the solution across various MCDM models increases the reliability of the solution. The MCDM model must be designed in accordance with the goals set, meet the selection requirements, and take into account all the nuances of the subject area. The author is a proponent of complex analysis using various methods of aggregation, normalization, and criteria weighting, together with solution sensitivity analysis.


References 1. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey. Springer. 2. Triantaphyllou, E. (2000). Multi-criteria decision making methods: A comparative study. Springer. 3. Tzeng, G. H., & Huang, J. J. (2011). Multiple attribute decision making: Methods and application. Chapman and Hall/CRC. 4. Greco, S. (2005). Multiple criteria decision analysis: State of the art surveys. Springer. 5. Bobko, P., Roth, P. L., & Buster, M. A. (2007). The usefulness of unit weights in creating composite scores. A literature review, application to content validity, and meta-analysis. Organizational Research Methods, 10(4), 689–709. 6. Ginevicius, R., & Podvezko, V. (2005). Objective and subjective approaches to determining the criterion weight in multicriteria models. Proceedings of International Conference RelStat Transport and Telecommunication, 6(1), 133–137. 7. Odu, G. O. (2019). Weighting methods for multi-criteria decision making technique. Journal of Applied Sciences and Environmental Management, 23(8), 1449–1457. 8. Anokhin, A. M., Glotov, V. A., Pavel’ev, V. V., & Cherkashin, A. M. (1997). Methods for determination of criteria importance coefficients. Automation and Remote Control, 8, 3–35. 9. Podinovskii, V. V. (2004). The quantitative importance of criteria with discrete first-order metric scale. Automation and Remote Control, 65(8), 1348–1354. 10. Podinovskii, V. V. (2005). The quantitative importance of criteria with a continuous first-order metric scale. Automation and Remote Control, 66(9), 1478–1485. 11. Podinovski, V. V. (2009). On the use of importance information in MCDA problems with criteria measured on the first ordered metric scale. Journal of Multi-Criteria Decision Analysis, 15, 163–174. 12. Podinovskaya, O. V., & Podinovski, V. V. (2017). Criteria importance theory for multicriterial decision making problems with a hierarchical structure. 
European Journal of Operational Research, 258(3), 983–992. 13. Roberts, R., & Goodwin, P. (2002). Weight approximations in multi-attribute decision models. Journal of Multi-Criteria Decision Analysis, 11, 291–303. 14. Doyle, J. R., Green, R. H., & Bottomley, P. A. (1997). Judging relative importance: Direct rating and point allocation are not equivalent. Organizational Behavior and Human Decision Processes, 70, 55–72. 15. Barron, F. H., & Barrett, B. E. (1996). Decision quality using ranked attribute weights. Management Science, 42, 1515–1523. 16. Kirkwood, C. W., & Corner, J. L. (1993). The effectiveness of partial information about attribute weights for ranking alternatives in multiattribute decision making. Organizational Behavior and Human Decision Processes, 54, 456–476. 17. Stillwell, W. G., Seaver, D. A., & Edwards, W. (1981). A comparison of weight approximation techniques in multiattribute utility decision making. Organizational Behavior and Human Performance, 28, 62–77. 18. Pekelman, D., & Sen, S. K. (1974). Mathematical programming models for the determination of attribute weights. Management Science, 20, 1217–1229. 19. Shirland, L. E., Jesse, R. R., Thompson, R. L., & Iacovou, C. L. (2003). Determining attribute weights using mathematical programming. Omega, 31, 423–437. 20. Deng, M., Xu, W., & Yang, J. B. (2004). Estimating the attribute weights through evidential reasoning and mathematical programming. International Journal of Information Technlogy & Decision Making, 3, 419–428. 21. Saaty, T. L. (1980). The analytic hierarchy process. McGraw-Hill. 22. Rezaei, J. (2015). Best-worst multi-criteria decision-making method. Omega, 53, 49–57. https:// doi.org/10.1016/j.omega.2014.11.009


23. Kobryń, A. (2017). Dematel as a weighting method in multi-criteria decision analysis. Multiple Criteria Decision Making, 12, 153–167. 24. Kersuliene, V., Zavadskas, E. K., & Turskis, Z. (2010). Selection of rational dispute resolution method by applying new step – wise weight assessment ratio analysis (SWARA). Journal of Business Economics and Management, 11(2), 243–258. 25. Pamučar, D., Stević, Ž., & Sremac, S. (2018). A new model for determining weight coefficients of criteria in MCDM models: Full Consistency Method (FUCOM). Symmetry, 10(9), 393. 26. Wu, J., Sun, J., Liang, L., & Zha, Y. (2011). Determination of weights for ultimate cross efficiency using Shannon entropy. Expert Systems with Applications, 38(5), 5162–5165. 27. He, D., Xu, J., & Chen, X. (2016). Information-theoretic-entropy based weight aggregation method in multiple-attribute group decision-making. Entropy, 18(6), 171. 28. Diakoulaki, D., Mavrotas, G., & Papayannakis, L. (1995). Determining objective weights in multiple criteria problems: The CRITIC method. Computers & Operations Research, 22(7), 763–770. 29. Ma, J., Fan, Z. P., & Huang, L. H. (1999). A subjective and objective integrated approach to determine attribute weights. European Journal of Operational Research, 112, 397–404. 30. Xu, X. (2004). A note on the subjective and objective integrated approach to determine attribute weights. European Journal of Operational Research, 156, 530–532. 31. Ustinovičius, L. (2001). Determining integrated weights of attributes. Statyba, 7(4), 321–326. 32. Mukhametzyanov, I. Z. (2021). Specific character of objective methods for determining weights of criteria in MCDM problems: Entropy, CRITIC, SD. Decision Making Applications in Management and Engineering, 4(2), 76–105. https://doi.org/10.31181/dmame210402076i 33. Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15(3), 234–281. https://doi.org/10.1016/0022-2496(77)90033-5 34. 
Choo, E. U., & Wedley, W. C. (2004). A common framework for deriving preference values from pairwise comparison matrices. Computers & Operations Research, 31, 893–908. 35. Srdjevic, B. (2005). Combining different prioritization methods in the analytic hierarchy process synthesis. Computers & Operations Research, 32, 1897–1919. 36. Crawford, G., & Williams, C. (1985). A note on the analysis of subjective judgement matrices. Journal of Mathematical Psychology, 29, 387–405. 37. Lotfi, F. H., & Fallahnejad, R. (2010). Imprecise Shannon’s entropy and multi attribute decision making. Entropy, 12, 53–62. 38. Hafezalkotob, A., Hafezalkotob, A., Liao, H., & Herrera, F. (2019). An overview of MULTIMOORA for multi-criteria decision-making: Theory, developments, applications, and challenges. Information Fusion, 51, 145–177. 39. Chakraborty, S., & Zavadskas, E. K. (2014). Applications of WASPAS method as a multicriteria decision-making tool. Informatica, 25(1), 1–20. https://doi.org/10.15388/Informatica. 2014.01 40. Opricovic, S. (1998). Multicriteria optimization of civil engineering systems. PhD thesis, Faculty of Civil Engineering, Belgrade, 2(1), 5–21. 41. Brans, J. P., Mareschal, B., & Vincke, P. (1986). How to select and how to rank projects: The PROMETHEE method. European Journal of Operational Research, 24(2), 228–238. 42. Roy, B. (1968). Classement et choix en présence de points de vue multiples. RAIRO-Operations Research-Recherche Opérationnelle, 2, 57–75. 43. Pastijn, H., & Leysen, J. (1989). Constructing an outranking relation with ORESTE. Mathematical and Computer Modelling, 12, 1255–1268. 44. Wang, X. D., Gou, X. J., & Xu, Z. S. (2020). Assessment of traffic congestion with ORESTE method under double hierarchy hesitant fuzzy linguistic environment. Applied Soft Computing, 86, 105864. 45. Pamučar, D., & Ćirović, G. (2015). The selection of transport and handling resources in logistics centres using Multi-Attributive Border Approximation area Comparison (MABAC). 
Expert Systems with Applications, 42, 3016–3028.


46. Ustinovichius, L., Zavadskas, E. K., & Podvezko, V. (2007). Application of a quantitative multiple criteria decision making (MCDM-1) approach to the analysis of investments in construction. Control and Cybernetics, 36(1), 251–268. 47. Brauers, W. K. M., Zavadskas, E. K., Turskis, Z., & Vilutiene, T. (2008). Multi-objective contractor’s ranking by applying the MOORA method. Journal of Business Economics and Management, 9(4), 245–255. 48. Ghorabaee, M. K., Zavadskas, E. K., Turskis, Z., & Antucheviciene, J. (2016). A new COmbinative Distance-based ASsessment (CODAS) method for multi criteria decision-making. Economic Computation & Economic Cybernetics Studies & Research, 50(3), 25–44. 49. Archana, M., & Sujatha, V. (2012). Application of fuzzy MOORA and GRA in multi-criterion decision making problems. International Journal of Computer Applications, 53(9), 46–50. 50. Wang, Q. B., & Peng, A. H. (2010). Developing MCDM approach based on GRA and TOPSIS. In Applied Mechanics and Materials (Vol. 34–35, pp. 1931–1935). Trans Tech Publications. https://doi.org/10.4028/www.scientific.net/amm.34-35.1931 51. Belton, V., & Gear, T. (1983). On a short-coming of Saaty’s method of analytic hierarchies. Omega, 11(3), 228–230. 52. Saaty, T. L., & Vargas, L. G. (1984). The legitimacy of rank reversal. Omega, 12(5), 513–516. 53. De Keyser, W., & Peeters, P. (1996). A note on the use of PROMETHEE multicriteria methods. European Journal of Operational Research, 89(3), 457–461. 54. Mareschal, B., De Smet, Y., & Nemery, P. (2008). Rank reversal in the PROMETHEE II method: Some new results. Proceedings of the IEEE 2008 International Conference on Industrial Engineering and Engineering Management, Singapore, pp. 959–963. 55. García-Cascales, M. S., & Lamata, M. T. (2012). On rank reversal and TOPSIS method. Mathematical and Computer Modelling, 56, 123–132. 56. Wang, Y. M., & Luo, Y. (2009). On rank reversal in decision analysis. Mathematical and Computer Modelling, 49(5–6), 1221–1229. 57.
Barron, H., & Schmidt, C. P. (1988). Sensitivity analysis of additive multi-attribute value models. Operations Research, 36(1), 122–127. 58. Evans, J. R. (1984). Sensitivity analysis in decision theory. Decision Sciences, 1(15), 239–247. 59. Mukhametzyanov, I. Z., & Pamučar, D. (2018). Sensitivity analysis in MCDM problems: A statistical approach. Decision Making: Applications in Management and Engineering, 1(2), 51–80. https://doi.org/10.31181/dmame1802050m 60. Rezk, H., Mukhametzyanov, I. Z., Al-Dhaifallah, M., & Ziedan, H. A. (2021). Optimal selection of hybrid renewable energy system using multi-criteria decision-making algorithms. CMC-Computers Materials & Continua, 68, 2001–2027. https://doi.org/10.32604/cmc.2021. 015895 61. Lamboray, C. (2007). A comparison between the prudent order and the ranking obtained with Borda’s, Copeland’s, Slater’s and Kemeny’s rules. Mathematical Social Sciences, 54(1), 1–16. 62. Boyacı, A. Ç., & Tüzemen, M. Ç. (2022). Multi-criteria decision-making approaches for aircraft-material selection problem. International Journal of Materials and Product Technology, 64(1), 45–68.

Chapter 3

Normalization and MCDM Rank Model

Abstract A description of multidimensional normalization scales is given, and the general principles of normalization of multidimensional data are formulated: the preservation of order and of proportions between natural and normalized values on the separate scales. General approaches to goal inversion for cost criteria are given. Anisotropic scaling in the transition to conditionally common normalized scales is described. The use of non-linear normalization to eliminate the asymmetry of the original data is discussed. Examples of weak and of high sensitivity of the decision to the choice of normalization method are shown, caused by the priorities of the alternatives on individual criteria. Keywords Multivariate normalization · Additive significance of attributes · Asymmetry · Outlier detection · Goal inversion · Isotropy of scales

3.1 General Principles for Normalizing Multidimensional Data

Under multidimensional normalization, we define the procedure for bringing attribute values measured in different scales to a conditionally common scale: either within a given range, for example, [0, 1] or [-1, 1], or with some given property, for example, a standard deviation of 1 (standardization). Normalization is applied to numeric data. Therefore, data pre-processing involves converting non-numeric data to numeric data. The need to normalize data samples is due to the nature of the used algorithms for comparing data with several features. Being a procedure for pre-processing input information, the result of normalization determines the solution to the problem. The key goal of normalization is to bring various data in various units of measurement and ranges of values to a single form that will allow them to be compared with each other or used to calculate the similarity of objects. The task of comparing (ordering) a set of objects with a multidimensional set of features defines special requirements for the multidimensional normalization procedure. It is necessary that the normalization method ensures data comparability and © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_3



3 Normalization and MCDM Rank Model

eliminate the priority of individual criteria in the formation of the integral rating of an alternative, regardless of the weighting process.

Data normalization applied to MCDM problems usually transforms the data into the unit interval [0, 1]. The choice of the range [0, 1] reflects a universal set that is intuitively understandable, with categories from 0 (bad) to 1 (excellent), used when forming preferences in choice or decision-making problems. Since some MCDM models break down at a zero attribute value (WPM, WASPAS, COPRAS, etc.), a fixed subrange of (0, 1] is sometimes chosen instead. Mapping onto the full interval [0, 1] occurs only for the Max-Min normalization method. Thus, in MCDM problems, the initial set of natural attribute values of the alternatives is mapped onto a region [c, d] ⊂ [0, 1] by various functions, both linear and non-linear.

The normalization process scales the criteria values to approximately the same magnitude; however, different normalization methods may produce different solutions or ranking results. Numerous examples of comparative analysis combining normalization and aggregation methods in the literature [1–17] confirm this thesis. Most studies conclude that the solution of an MCDM problem varies with the normalization method used. Attempts to attribute this variation to a particular normalization method have not been successful: it is not possible to single out the best or worst normalization method for a particular aggregation method. As will be shown in our study, one significant reason for the variation in ranking results across normalization methods is the shift of the domains of the normalized values of different attributes relative to each other.
It is believed [3, 5, 7, 11, 15] that a normalization method is adequate if:
• normalized values are independent of the units of the attributes,
• normalization preserves the relative dispositions of the attribute values of the alternatives,
• normalization equalizes the impact levels of all criteria regardless of the weighting process and does not cause rank-reversal problems,
• normalization provides symmetry in the orientation of cost and benefit attributes.

3.1.1 Preserving the Ordering of Attribute Values

The main requirement when choosing a transformation function is to preserve the original ordering of the data: an ordered dataset retains the same ordering after transformation. Strictly monotonic functions, in particular linear functions, obviously have this property. Below, normalization using linear and non-linear data transformations is illustrated for benefit attributes (Fig. 3.1a) and cost attributes (Fig. 3.1b), where the set of natural attribute values is mapped onto an interval [c, d] ⊂ [0, 1] by strictly monotonic functions.


Fig. 3.1 Normalization based on linear and non-linear data transformation for benefit (a) and cost (b) criteria

However, this holds only for pure benefit and cost criteria. If for some criterion the nominal value of the attribute is best, normalization is performed using a single-extremum function, as shown in Fig. 3.2. In this case, if the best solution to the problem (over all criteria) is the largest, a function with a maximum is used (Fig. 3.2a); otherwise, a function with a minimum (Fig. 3.2b). This can be, for example, a piecewise-linear function or Harrington's desirability function [18]. Whether the relations between successively ordered values are preserved across the "breakpoint" is not obvious, since the relation between values in the ascending and descending ranges is violated. For more information on target attribute normalization, see Chap. 10.
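A piecewise-linear, single-extremum mapping of the kind shown in Fig. 3.2a can be sketched as follows. This is an illustrative sketch only: the function `target_norm` and its boundary conventions are our assumptions, not the book's exact formulas.

```python
def target_norm(a, t, a_min, a_max):
    """Piecewise-linear normalization for a target (nominal-is-best) criterion.

    The target value t maps to 1, the sample extremes map to 0, and the
    mapping is linear and strictly monotonic on each side of t.
    """
    if a <= t:
        return (a - a_min) / (t - a_min) if t > a_min else 1.0
    return (a_max - a) / (a_max - t) if a_max > t else 1.0

column = [2.0, 3.5, 5.0, 6.5, 8.0]   # hypothetical natural attribute values
target = 5.0                          # nominal-is-best value
r = [target_norm(a, target, min(column), max(column)) for a in column]
```

Values are ordered within each branch, but, as noted above, the relation between values across the breakpoint is not preserved.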


Fig. 3.2 Normalization based on piecewise linear and non-linear data transformation for target criteria for LTB (a) and STB (b) cases

3.1.2 Scale Invariance of Normalized Values of Attributes

One of the important requirements for normalization is to preserve the information content of the original data: normalized values should retain the essential information about the structure of the original data, and the relative gap between the data for the same indicator should remain constant [7]. The normalization of the elements of the decision matrix must therefore be carried out so as to preserve the dispositions of the natural attribute values. This requirement is one of the basic principles of normalizing multidimensional data:

Property 1 (P.1) Preservation of the dispositions of natural and normalized values.


In the notation of natural feature values (aij) and normalized feature values (rij), the property of preserving dispositions is expressed by the formula:

$$\frac{a_{ij} - a_{kj}}{a_j^{\max} - a_j^{\min}} = \frac{r_{ij} - r_{kj}}{r_j^{\max} - r_j^{\min}}, \quad i, k = 1, \ldots, m, \; \forall j \qquad (3.1)$$

Requirement P.1 is met by uniform data scaling, which corresponds to a linear transformation (Chap. 4). The result of uniform scaling is geometrically similar to the original: a linear transformation merely rescales the image. Therefore, linear normalization methods are preferred.

However, the image is rescaled only along each separate attribute (coordinate). For multi-objective problems, if normalization is performed on individual scales, linear methods produce anisotropic scaling whenever at least one of the scaling factors differs from the others. A feature of multidimensional normalization is that the scaling of natural values differs across attributes. This entails a shift of the domains of normalized values relative to each other and may change the final rating of the alternatives.

When non-linear normalization is used, the relations between the attribute values of different alternatives change compared to the original data, so the use of non-linear methods may seem impractical. However, if most of the data is localized, the error between linear and non-linear transformations will be negligible: the normalized values produced by non-linear normalization will contain approximately the same information about the structure of the original data as those produced by linear normalization. It is also important that the data be localized in the region of priority attribute values, so that the non-linear transformation does not affect the priority of the alternatives. Therefore, despite some violation of mutual distances, non-linear normalization procedures are considered acceptable in such cases.
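The effect can be checked numerically: a linear method such as Max-Min keeps relative dispositions intact, while a non-linear (here, logarithmic) transform distorts them unless the data are tightly localized. A sketch with hypothetical sample values:

```python
import math

def max_min(col):
    """Max-Min normalization: maps a column linearly onto [0, 1]."""
    lo, hi = min(col), max(col)
    return [(a - lo) / (hi - lo) for a in col]

def disposition(v, i, k):
    # relative disposition (v_i - v_k) / (max - min), cf. Eq. (3.1)
    return (v[i] - v[k]) / (max(v) - min(v))

a = [10.0, 20.0, 40.0, 80.0]
r_lin = max_min(a)                         # linear: dispositions preserved
r_log = max_min([math.log(x) for x in a])  # non-linear: dispositions change

d_nat = disposition(a, 1, 0)       # 10/70 in the natural scale
d_lin = disposition(r_lin, 1, 0)   # identical to d_nat
d_log = disposition(r_log, 1, 0)   # differs (1/3 for this sample)
```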

3.1.3 Principle of Additive Significance of Attributes

Based on the normalized data, the attributes of the alternatives are aggregated and the performance index Qi of each alternative is calculated. The performance indicator of an alternative is formed from the contributions of its attributes for each of the criteria. Therefore, in multidimensional normalization, one of the main tasks is to obtain, after normalization, commensurate attribute values for all criteria, in order to exclude the priority of individual criteria. The essence of the multi-criteria evaluation of alternatives can be shown clearly with a simple summation of the normalized attribute values of an alternative. In the SAW aggregation method, the criteria weights serve only to set priorities. If weighting coefficients are not taken into account, then the


alternative performance indicator is formed as a contribution from the attributes of the alternative for each of the criteria:

$$Q_i = \sum_{j=1}^{n} r_{ij}, \qquad (3.2)$$

where rij are the normalized values of the decision matrix. This method is sometimes referred to as the simple additive significance method (Hwang & Yoon, 1981) [19].

If the domains of normalized values of different criteria are shifted relative to each other, the contributions of those criteria to the performance indicator of the ith alternative according to Eq. (3.2) will differ. As a result, one or more criteria take precedence over others even before the criteria weights are determined. Therefore, when assigning the criteria weights, the results of normalization must be taken into account. Alternatively, the scales of the different criteria must be transformed, by scaling and shifting the normalized values, so as to eliminate the priority of individual criteria:

Property 2 (P.2) The principle of equality of the contributions of the various criteria to the performance indicator of an alternative as a measure of the effectiveness of alternatives.

Given the content of the rows and columns of the decision matrix, this principle is called the principle of "horizontal" normalization. Its fulfillment predetermines the alignment of the domains of normalized values across all criteria. This principle underlies the methods proposed by the author for the transformation of normalized values, described in Chaps. 5–8.
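The effect of shifted domains on Eq. (3.2) can be illustrated with a small unweighted SAW computation; the matrix values below are hypothetical:

```python
def saw_scores(R):
    # Eq. (3.2): performance index Q_i of each alternative is the row sum
    # of the normalized decision matrix R (m alternatives x n criteria)
    return [sum(row) for row in R]

# Criterion 1's normalized domain is [0.90, 1.00] (clustered data under Max),
# criterion 2's domain is [0.00, 1.00] (Max-Min): the domains are shifted.
R = [[0.90, 1.00],
     [0.95, 0.00],
     [1.00, 0.50]]
Q = saw_scores(R)
# Criterion 1 adds roughly the same amount to every alternative, so the
# ranking is driven almost entirely by criterion 2, before any weighting.
```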

3.1.4 Interpretation of Normalized Values of Attributes

A good example of interpretable normalized values is the linear normalization method Max (rij = aij/ajmax). Normalized values are interpreted as fractions of the attribute's best value. In this case, aggregating such fractions, for example by summation, appears adequate and correct. But is it?

Everyone knows that weight (kg) and cost ($) cannot be summed, yet it may seem that aggregating (for example, simply summing) normalized attribute values is a perfectly correct operation, since the data are dimensionless. In fact, even when aggregating normalized attribute values, you are still adding "kilograms and currency units." Why? Because the range of normalized values differs for each attribute, and this is due to the fact that the


normalization parameters, such as the compression ratio and the offset, depend on the measurement scale and on the range of natural attribute values. As a result, the contributions of the attributes of different criteria to the performance indicator of the alternatives will differ, so "kilograms" may dominate your result despite the normalization.

Despite the possible negative consequences illustrated by this example, a consistent interpretation of the normalized values is better than no interpretation. If the domain of normalized values is the same for all attributes, then what exactly are we summing? Presumably fractional parts of the feature. Thus, the shortcoming of the first multivariate normalization approach is compensated by the second, and vice versa.

Only one multidimensional normalization method can be attributed to both approaches: the Max-Min method. For Max-Min, the range of normalized values of all attributes is [0, 1], and the normalized values of all attributes are interpreted in the same way, as fractions of the range, which, however, is not entirely intuitive.

The shared interpretation of normalized values is, in particular, the limiting factor explaining why different normalization methods are not applied to different attributes (given their independence). For example, why not apply Max normalization to one attribute and Max-Min normalization to another if the attributes are independent? Because in that case it becomes impossible to compare or aggregate values that differ in meaning, even though they are dimensionless. For multivariate normalization procedures, it is impossible to simultaneously adjust the share of an individual attribute (the compression and shift of its normalized values) and the correspondence between the scales of different features. The problem cannot be solved in principle; only a compromise solution is possible.

3.2 Linear Multivariate Normalization Methods

The most common is the linear normalization procedure. Linear normalization is a combination of two operations: shifting the natural attribute values by aj* units and scaling (stretching/compressing) them by a factor of kj:

$$r_{ij} = \frac{a_{ij} - a_j^{*}}{k_j}, \qquad (3.3)$$

where aij and rij are the natural and normalized values of the jth attribute of the ith alternative, respectively, and aj* and kj are pre-assigned numbers, which we will call characteristic scales. Table 3.1 presents the 6 linear normalization methods most commonly used in multi-criteria choice problems [1–17, 19]:


Table 3.1 Basic linear methods for the multidimensional normalization of the decision matrix

Abb.(a)    Formula, f(x)                                  Displacement, aj*     Compression, kj                  Range of rij
Max        rij = aij / aj^max                             -                     aj^max                           (0; 1]
Sum        rij = aij / Σi |aij|                           -                     Σi |aij|                         (0; 1)
Vec        rij = aij / (Σi aij^2)^(1/2)                   -                     (Σi aij^2)^(1/2)                 (0; 1)
Max-Min    rij = (aij - aj^min) / (aj^max - aj^min)       aj^min                aj^max - aj^min                  [0; 1]
dSum       rij = 1 - (aj^max - aij) / Σi (aj^max - aij)   aj^max                Σi (aj^max - aij)                (0; 1]
Z-norm     rij = (aij - āj) / sj                          āj = (1/m) Σi aij     sj, sj^2 = (1/m) Σi (aij - āj)^2 [-c; d]

(a) The short name of a normalization method is determined by the semantic meaning of the compression ratio k. The method abbreviation is also used as the name of the function that converts values in accordance with the normalization method; for example, rij = Max(aij) = aij/aj^max.

The area of normalized values is the domain of definition of the aggregation function of the alternatives' attributes. In what follows, to designate the normalized values of each attribute we will use the term attribute domain: a point set on the interval [rjmin, rjmax].

All linear normalization methods are linear transformations of one another. However, differences in the stretch-compression ratios of different normalization methods, and the different domain shifts, can lead to a change in the ranking. The characteristic scales are also individual for each jth attribute and are determined by the measurement scales of the features, the selection of alternatives (objects), and the distribution of feature values. This, too, may change the ranking of the alternatives.
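The six methods of Table 3.1 can be written directly as column transformations. A minimal sketch (the function names are ours):

```python
import math

def norm_max(col):      # Max:     r = a / max(a)
    m = max(col); return [a / m for a in col]

def norm_sum(col):      # Sum:     r = a / sum(|a|)
    s = sum(abs(a) for a in col); return [a / s for a in col]

def norm_vec(col):      # Vec:     r = a / sqrt(sum(a^2))
    k = math.sqrt(sum(a * a for a in col)); return [a / k for a in col]

def norm_max_min(col):  # Max-Min: r = (a - min) / (max - min)
    lo, hi = min(col), max(col); return [(a - lo) / (hi - lo) for a in col]

def norm_dsum(col):     # dSum:    r = 1 - (max - a) / sum(max - a)
    m = max(col); k = sum(m - a for a in col)
    return [1 - (m - a) / k for a in col]

def norm_z(col):        # Z-norm:  r = (a - mean) / s  (population s)
    mu = sum(col) / len(col)
    s = math.sqrt(sum((a - mu) ** 2 for a in col) / len(col))
    return [(a - mu) / s for a in col]
```

Each function maps the natural values of one attribute onto its own domain; as discussed above, those domains generally do not coincide across methods or attributes.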

3.2.1 How Is the Shift Factor Determined?

When analyzing population-level tasks, the data are most often centered: a value aj* is determined that becomes the new zero, and the data are shifted relative to it. In standardization, the arithmetic mean is used. Keep in mind, however, that methods designed for the normal distribution cannot be applied to other types of distributions; the standardization algorithm is, generally speaking, optimal for a normal distribution. It is well recognized in statistics that the "typical representative" of a population is best represented not by the arithmetic mean but by the median. The mean works well only for a normal distribution, where it coincides with the median. In contrast to the mean value, the median is practically


Fig. 3.3 The displacement of domains of different criteria caused by the choice of a shift parameter aj*

insensitive to outliers and distribution skewness. Therefore, it is optimal to use the median as the "zero" value when centering. Other offset values may be specified as a standard or expertly defined value, as a desirable center, or as a characteristic value known from the context of the problem. When the goal is not to center the data but to fit them into a given range, the offset is the minimum data value.

The choice of the offset value during normalization is not such a "harmless" procedure. Since for different attributes the offset value is determined by the sample and depends on the measurement scales, the distribution of the data in the sample, etc., the domains of the various attributes are shifted relative to each other. Attributes whose values are closer to the largest value in a maximization problem will be given priority in aggregation based on additive methods, since they contribute more to the cumulative indicator of the alternative; similarly in the case of minimizing the integral index.

Figure 3.3 shows the normalized values for four bias options when normalizing two attributes (C3, C5) for 8 alternatives (see Table 2.1). The example uses a bias by the sample mean (a), the median (b), a set (expert) value (c), and a bias to the minimum value (d). The scaling factors for each attribute are equal in all instances of the example and are defined as the mean square value of the sample. To determine the priority of the two criteria in the simplest way, we use the sum of the normalized values shifted to the minimum value of the two samples:

$$S_j = \sum_{i=1}^{m} \left( r_{ij} - c \right), \qquad (3.4)$$

$$c = \min\left\{ \min_i r_{ip}, \; \min_i r_{iq} \right\}, \qquad (3.5)$$

where p and q are the indices of the compared criteria. The offset c is used to eliminate the compensation of positive and negative values during summation, for example, to


normalize the Z-score. Formula (3.4) differs from (3.2): the summation runs over all alternatives (index i) within one attribute. Of two attributes, the one whose normalized values all exceed the corresponding normalized values of the other criterion contributes more to the overall rating; its sum of values is also higher. This situation has an unambiguous consequence. In practice, however, some of the values may have the reverse precedence; therefore, the superiority condition is not sufficient and can serve only as a rough estimate. If a large proportion of the values of one attribute exceeds the corresponding values of the other, the sum of its values will be higher on average. To compare the contributions of the criteria, it is better to use the relative indicator δS, which eliminates the effect of scaling:

$$\delta S = \frac{S_p - S_q}{\min\left( S_p, S_q \right)} \cdot 100\%, \qquad (3.6)$$

For the example in Fig. 3.3 with p = 3, q = 5, criterion C3 takes precedence over criterion C5 when the sample median is chosen as the bias (Fig. 3.3b); the relative priority indicator is δS = 16.1%. When an expert value is chosen as the bias (Fig. 3.3c), and when the bias is to the minimum value (Fig. 3.3d), criterion C5 instead takes priority over criterion C3.

Some superiority in the value of δS does not yet imply a determining contribution of the criterion to the ranking. This is due to conflicting criteria: for one or more attributes the values may be "high," while for another criterion they are "low." However, for a significant value of δS, a priority very likely exists, and this means the dominance of that criterion's contribution to the ranking. In the above example, the highest value of the relative priority indicator is δS = 115.4%, i.e., the sum of the values of the fifth attribute is more than twice the sum of the values of the third attribute. Conclusion: the normalization method must be changed so that the attribute values of the various criteria are consistent.
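Equations (3.4)–(3.6) translate into a few lines of code. The two normalized columns below are hypothetical, not the values behind Fig. 3.3:

```python
def delta_S(rp, rq):
    """Relative priority indicator deltaS, Eqs. (3.4)-(3.6)."""
    c = min(min(rp), min(rq))               # common offset, Eq. (3.5)
    Sp = sum(r - c for r in rp)             # Eq. (3.4) for criterion p
    Sq = sum(r - c for r in rq)             # Eq. (3.4) for criterion q
    return (Sp - Sq) / min(Sp, Sq) * 100.0  # Eq. (3.6), in percent

r3 = [0.55, 0.60, 0.70, 0.80]  # normalized values of a criterion "C3"
r5 = [0.10, 0.30, 0.50, 0.90]  # normalized values of a criterion "C5"
dS = delta_S(r3, r5)
# positive dS: C3's domain sits above C5's and would dominate an
# unweighted additive aggregation
```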

3.2.2 How Is Scaling Determined?

Obtaining dimensionless values in linear normalization is performed by dividing the dimensional values by a value of the same dimension that represents a characteristic value of the attribute over all alternatives. For this, either a characteristic of the feature in one of several metrics is used:

$$k_j^{(1)} = \sum_{i=1}^{m} a_{ij}, \qquad L_1 \text{ (City Block, Sum)}, \qquad (3.7)$$


Fig. 3.4 The displacement of domains of different criteria caused by the choice of scaling factor kj

$$k_j^{(2)} = \sqrt{\sum_{i=1}^{m} a_{ij}^2}, \qquad L_2 \text{ (Euclidean, Vec)}, \qquad (3.8)$$

$$k_j^{(3)} = a_j^{\max}, \qquad L_\infty \text{ (Chebyshev, Max)}, \qquad (3.9)$$

or statistical characteristics of the sample are used:

$$k_j^{(1)} = s_j, \quad s_j^2 = \frac{1}{m} \sum_{i=1}^{m} \left( a_{ij} - \bar{a}_j \right)^2, \qquad \text{the sample standard deviation}, \qquad (3.10)$$

$$k_j^{(2)} = \mathrm{rng}(a_j) = \max_i \left( a_{ij} \right) - \min_i \left( a_{ij} \right) = a_j^{\max} - a_j^{\min}, \qquad \text{the range of the } j\text{th attribute}, \qquad (3.11)$$

$$k_j^{(3)} = \mathrm{IQR}_j, \qquad \text{the interquartile range of the } j\text{th attribute}. \qquad (3.12)$$

The interquartile range IQR is the difference between the 75th and 25th percentiles of the data, i.e., the interval containing the "central" 50% of the values in the set.

For any choice of k, scaling can give priority to individual features in the presence of skewness in their distribution, outliers in the data, or "fat" tails in the distribution. The distances between the normalized values of different attributes depend on the compression ratios, which leads to a situation where numerically disparate values of different attributes must be aggregated.

Figure 3.4 shows the normalized values produced by the Max, Sum, and Vec normalizations for two attributes. This example applies the normalization methods without data bias (aj* = 0). As in the previous example, the same two criteria with the same attribute values are considered. To determine the priority of the two criteria in the simplest way, we again use the sum of the normalized values reduced to the minimum value of the two samples (Eq. (3.4)). In the example of Fig. 3.4, criterion C3 takes precedence over criterion


C5 under Max normalization (Fig. 3.4a); the relative priority indicator is δS = 9.8%. For the Sum normalization (Fig. 3.4b) and the Vec normalization (Fig. 3.4c), on the contrary, criterion C5 takes precedence over criterion C3, with relative priority indicators δS = 98.2% and 80.2%, respectively. A possible solution is to recommend, of the three options, the Max normalization method, which has the lowest priority of one criterion over the other.
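The three metric-based scaling factors of Eqs. (3.7)–(3.9) produce domains of different width and position for the same data. A sketch with hypothetical values:

```python
import math

a = [20.0, 25.0, 30.0, 35.0]  # hypothetical natural values of one attribute

k_sum = sum(a)                            # L1, City Block (Sum), Eq. (3.7)
k_vec = math.sqrt(sum(x * x for x in a))  # L2, Euclidean (Vec), Eq. (3.8)
k_max = max(a)                            # L_inf, Chebyshev (Max), Eq. (3.9)

domains = {}
for name, k in (("Sum", k_sum), ("Vec", k_vec), ("Max", k_max)):
    r = [x / k for x in a]
    domains[name] = (min(r), max(r))
# each choice of k_j yields a different domain [r_min, r_max], shifting
# the attribute's contribution to any additive score
```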

3.2.3 Disadvantages of Data Standardization

The disadvantages of the mean in standardization (aj* = mean(aij)), due to its high sensitivity to outliers and distribution skewness, were discussed above. This can be one factor in the priority of individual criteria. What are the downsides of scaling? For normally distributed features with the same initial ranges but different variances, standardization gives priority to the feature with the lower variance. The standard deviation does not meet the requirement of equal influence of features (the size of the interval), and the presence of outliers can significantly distort the "true" value of the standard deviation.

In MCDM tasks, the choice is made over a limited selection of objects (alternatives). This means that the true distribution of a feature cannot be estimated using statistical hypotheses. Even if the feature has a normal distribution, the estimate of the mathematical expectation through the mean of a limited sample is not consistent.

One option for standardization is to use the interquartile range, which is robust to outliers and does not depend on the "normality" of the distribution or the presence/absence of asymmetry. But it has its own serious drawback: if the distribution of a feature has a significant "tail," then normalization by the interquartile range will add "significance" to this feature in comparison with the rest. The use of the range is insensitive to the distribution but highly dependent on the presence of outliers in the data.

In accordance with the above, under certain properties of a multivariate dataset, standardization based on the median and interquartile range (mIQR) can reasonably be used:

$$r_{ij} = \frac{a_{ij} - \mathrm{median}_i\, a_{ij}}{\mathrm{IQR}_j}. \qquad (3.13)$$
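A minimal sketch of Eq. (3.13) using the Python standard library; note that `statistics.quantiles` uses the "exclusive" quartile convention by default, so the numerical IQR depends on that convention:

```python
import statistics

def miqr(col):
    """Median/IQR standardization, Eq. (3.13): r = (a - median) / IQR."""
    med = statistics.median(col)
    q1, _, q3 = statistics.quantiles(col, n=4)  # quartiles Q1, Q2, Q3
    return [(a - med) / (q3 - q1) for a in col]

col = [10.0, 11.0, 12.0, 13.0, 14.0, 100.0]  # bulk of values plus an outlier
r = miqr(col)
# the median of the standardized values is 0, and the single outlier does
# not move the centre of the bulk, unlike a mean-based Z-score
```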

Figure 3.5 shows the normalized values produced by the Z-score and mIQR standardizations and by the Max-Min normalization method, which aligns the domains of normalized values ([0, 1]), when two attributes are normalized. This example applies normalization methods with a data offset (aj* ≠ 0). As in the previous examples, the same two criteria with the same attribute values are considered.


Fig. 3.5 The displacement of domains of different criteria caused by the joint choice of the shift factor aj* and the scaling factor kj

To determine the priority of the two criteria in the simplest way, we again use the sum of the normalized values reduced to the minimum value of the two samples (Eq. (3.4)). A possible solution is to recommend, of the three options, the Max-Min normalization method, for which neither criterion has priority over the other. Note that aligning the domains of normalized values (Fig. 3.5c) does not rule out the appearance of a priority in the sense of formula (3.4). Thus, there is no general solution, and in each specific situation an analysis of the distribution of features based on the limited sample is required. The main linear methods are described in more detail in Chap. 4.

3.3 Asymmetry in the Distribution of Features

As a rule, 5–15 alternatives with 5–20 features participate in MCDM tasks. With a large number of features, the problem of poor distinguishability of alternatives arises; in this case, an attribute-grouping approach is applied, for example, into groups such as technical, economic, and environmental.

The spread of attribute values is typically 5–15% of the average value. If the criterion is of a technical or technological nature, the spread is even smaller, owing to the high competition in the characteristics of man-made objects and the constant drive to improve them through technological progress and innovation. For natural objects, the range of values may be higher. However, alternatives with clearly weak characteristics are usually excluded from the competitive list during the preliminary analysis.

On the set of alternatives, each attribute is a random variable. In practical tasks, the set of random factors that determine an indicator is usually large, with no single determining factor (such a factor is usually suppressed purposefully). Therefore, the set of attribute values:


$$a_{j_0} = \left( a_{1 j_0}, a_{2 j_0}, \ldots, a_{m j_0} \right), \quad \forall j_0, \qquad (3.14)$$

can be regarded as a normally distributed random variable. In other cases, for example, if the aij represent formal technical measurements of an attribute, the set of attribute values can be regarded as a uniformly distributed random variable. In real decision-making problems, the values of an individual attribute can obey a wide variety of distribution laws, sometimes very far from the theoretical ones, normal or uniform. A significant number of distributions are not symmetrical and are skewed, sometimes considerably, such as the family of gamma distributions.

Since MCDM problems are considered on a discrete set of attributes, the skew may be due to the peculiarities of the choice of alternatives. For example, thanks to a new technology, one object in the sample may score very high on one or more criteria. Thus, one reason for the asymmetry of the data is natural: a skew in the distribution of the feature.

Another main reason for asymmetry is the presence of atypical feature values. Atypical values of the features of various alternatives represent deviations and inhomogeneities in the sample associated with certain, generally unknown, causes. Atypical feature values are, first of all, relatively rare outliers in the data, or anomalous and missing values. If the original data contain relatively rare outliers that are much larger than the typical spread, it is these outliers that will determine the normalization scale. As a result, the bulk of the values of the normalized variable will be concentrated near zero, the contribution of the attribute to the alternatives' performance indicator will change when the partial indicators are aggregated, and the ranking of alternatives may change. The presence of even a small number of outliers in the samples can greatly affect the result of the study.
For example, the method of least squares is subject to distortion on certain distributions, and the values obtained as a result of the study may cease to carry any meaning. In addition to directly "defective" observations, there may also be observations that follow a different distribution. For this reason, the sample may contain inhomogeneities in the form of concentrations of values near different points of the numerical axis: clusters of data and discrepancies with the ideal.

Another reason for asymmetry lies in the specifics of the sample. In decision-making problems (and not only there), the alternatives chosen for analysis represent an available set, whose attributes can take values that do not reflect the entire set of possible alternatives. Therefore, the distribution of the observed features may differ from the distribution of the features in the entire population; in such a situation, the provisions of sampling theory cannot be used. The particular choice of available alternatives can itself cause the distribution to be skewed, and such datasets are likewise characterized by an asymmetric distribution of values.

For brevity, let us denote these different situations by the term "data skewness." It is necessary to establish how strong the influence of "asymmetry" is on the result of


solving the problem, and whether the "asymmetry" should be eliminated. This question relates to the specific task and is not formalized. In the absence of truth criteria, it is necessary to solve the ranking problem using the basic normalizations for both the original and the transformed data and to compare the results. If the results differ, the final decision remains the prerogative of the decision maker.

To eliminate the influence of asymmetry, various approaches are used to reduce the influence of "bad" observations (outliers) or to exclude them completely. This process defines one of the important areas of statistics: the development of robust methods and robust estimates. The main task of robust methods is to distinguish a "bad" observation from a "good" one and to offer data processing methods that are resistant to atypical values. In decision-making problems, the amount of available data is small, which limits the use of rigorous mathematical or statistical screening methods. But even the simplest approach, a subjective one based on the researcher's intuition, can bring significant benefit. If a preliminary analysis of the data shows the presence of anomalous (atypical) values, data processing must be performed. The data pre-processing stage includes procedures for identifying "outliers," filtering out anomalous values, and recovering missing values. Many approaches exist for limiting, or eliminating altogether, the influence of inhomogeneities. Among them, there are three main directions [20]:
• grouping the data without deleting individual observations (to reduce the possibility of the sample being damaged by individual outliers); after that, the classical methods of statistics can be used with a sufficient degree of confidence,
• tracking outliers directly during the analysis, for example, in the process of determining the parameters of the distribution law,
• a functional transformation of the data, based on a hypothesis about the distribution of the feature.

In the case of natural causes, the asymmetry can be eliminated only by a non-linear data transformation, for example, by taking logarithms [21]; linear transformations during normalization cannot eliminate asymmetry. When a non-linear transformation is used, the distribution of the normalized values within the domain changes in comparison with the distribution of the natural values. This, in turn, changes the value of the alternatives' efficiency indicator when the partial indicators are aggregated and may change the ranking of the alternatives. The elimination of asymmetry by non-linear normalization procedures is discussed in Chap. 8.

In the case of multivariate normalization, the distributions of the attributes are independent and may differ significantly. This entails the need to use different transformations for different features and, as a consequence, the need to harmonize the various scales with each other. On the one hand, multidimensional data transformation has a positive effect, since it partially eliminates the "asymmetry" on the individual scales. On the other hand, the various scales must then be harmonized with each other. In

56

3 Normalization and MCDM Rank Model

both cases, the consequences in the form of the result of solving the choice problem may differ, in the absence of truth criteria.
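As a sketch of this remedy, the following snippet applies the sample skewness (3.15) to an invented right-skewed data vector before and after taking logarithms; the data values are purely illustrative.

```python
# Sketch: a logarithmic transform reduces right-skew (sample skewness, Eq. 3.15)
import math

def sample_skewness(x):
    m = len(x)
    mean = sum(x) / m
    m2 = sum((v - mean) ** 2 for v in x) / m   # second central moment
    m3 = sum((v - mean) ** 3 for v in x) / m   # third central moment
    return m3 / m2 ** 1.5

data = [1, 2, 2, 3, 3, 4, 5, 8, 20, 55]        # invented, strongly right-skewed
logged = [math.log(v) for v in data]           # non-linear (log) transform

skew_raw = sample_skewness(data)               # strongly positive
skew_log = sample_skewness(logged)             # much closer to zero
```

A linear rescaling of `data` would leave `skew_raw` unchanged, which is exactly why a non-linear transform is needed here.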

3.3.1 Measures of Asymmetry

To determine the type of a non-linear transformation function for the data (decision matrix) in the class of strictly monotonic functions, we will use the criterion of minimizing an asymmetry index. To do this, we use three measures of the skewness of a discrete dataset: the sample skewness relative to the mean, the nonparametric skew, and the medcouple. For each attribute j, the sample skewness of the random variable X_j is defined as [22]:

$$\mu_{3j} = \mathrm{Skew}(x_{ij}) = \frac{m_{3j}(X_j)}{\left(m_{2j}(X_j)\right)^{3/2}}, \qquad (3.15)$$

where

$$m_{kj}(x_{ij}) = \frac{1}{m}\sum_{i=1}^{m}\left(x_{ij} - \bar{x}_j\right)^{k}, \qquad \bar{x}_j = \frac{1}{m}\sum_{i=1}^{m} x_{ij}. \qquad (3.16)$$
The skewness value can be positive, zero, negative, or undefined. For a unimodal distribution, a negative skew usually indicates that the “mass” is concentrated to the right of the mean and the tail is on the left side of the distribution; a positive skew indicates that the tail is on the right. In cases where one tail is long and the other is thick, i.e., the “mass” to the left of the mean and to the right are the same, the skew does not follow a simple rule. A skewness of zero means that the tails on either side of the mean balance out as a whole. This is true for a symmetric distribution, but can also be true for an asymmetric distribution where one tail is long and thin and the other is short but thick. In addition to the skewness, as a measure of asymmetry we will use the nonparametric skew, defined as follows [22]:

$$Sk_j = \frac{\bar{x}_j - m_j(x)}{s_j}, \qquad (3.17)$$
where x̄_j is the mean, m_j(x) is the median, and s_j is the standard deviation of the jth feature. The calculation of the nonparametric skew does not require knowledge of the shape of the underlying distribution. It has several desirable properties: it is zero for any symmetric distribution; it is not affected by a shift or change of scale; and it detects left and right skew equally well. Absolute values |Sk| ≥ 0.2 indicate a noticeable asymmetry.

We will also use the MedCouple (MC) introduced by Brys et al. [23], defined for each feature j of a one-dimensional sample {x_{1j}, ..., x_{mj}} from a continuous unimodal distribution as follows:

$$MC_j = \operatorname*{med}_{x_{ij} \le m_j(x) \le x_{kj}} h_j\left(x_{ij}, x_{kj}\right), \quad \forall j = 1, \ldots, n, \qquad (3.18)$$

where m_j(x) is the sample median of the one-dimensional sample {x_{1j}, ..., x_{mj}}, and for all x_{ij} ≠ x_{kj} the kernel function h is given by:

$$h_j(x_{ij}, x_{kj}) = \frac{\left(x_{kj} - m_j(x)\right) - \left(m_j(x) - x_{ij}\right)}{x_{kj} - x_{ij}}, \quad i, k = 1, \ldots, m, \ \forall j = 1, \ldots, n. \qquad (3.19)$$
All three measures are invariant under linear transformations, are equal to zero for a symmetric distribution of X, and are odd functions under inversion of the distribution.
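The three measures (3.15)–(3.19) can be sketched for a single attribute column as follows; the medcouple here is a naive O(m²) version that skips ties with the median (the full definition assigns a special sign kernel to such ties), so it is a simplification, not the robust reference implementation.

```python
# Three asymmetry measures for one attribute column (Eqs. 3.15-3.19): a sketch
import statistics

def sample_skewness(x):                      # Eq. (3.15)-(3.16)
    m = len(x)
    mean = sum(x) / m
    m2 = sum((v - mean) ** 2 for v in x) / m
    m3 = sum((v - mean) ** 3 for v in x) / m
    return m3 / m2 ** 1.5

def nonparametric_skew(x):                   # Eq. (3.17): (mean - median) / std
    m = len(x)
    mean = sum(x) / m
    s = (sum((v - mean) ** 2 for v in x) / m) ** 0.5
    return (mean - statistics.median(x)) / s

def medcouple(x):                            # Eq. (3.18)-(3.19), naive O(m^2)
    med = statistics.median(x)
    lower = [v for v in x if v <= med]
    upper = [v for v in x if v >= med]
    kernels = [((xk - med) - (med - xi)) / (xk - xi)
               for xi in lower for xk in upper
               if xi != xk]                  # ties with the median are skipped
    return statistics.median(kernels)
```

For a symmetric sample all three return zero; for a right-skewed sample all three are positive.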

3.4 The Outlier Detection

Outliers in the input data are values that lie far outside the range of the other observations, or differ sharply from them. Given the significant impact of outliers on the final result of multidimensional data normalization, it is necessary to identify them at the data pre-processing stage. The presence of an outlier increases the range of the feature and, for the main normalization methods, increases the compression ratio, which reduces the effective range of the feature's values after normalization. As a result, the influence of this feature decreases compared to features without outliers. An outlier can indicate an anomaly in the distribution of the data or a measurement error, so outliers are often excluded from the dataset. Should outliers be removed, or included in post-processing after some transformation? The cause of the outlier is the main factor in the decision to eliminate it. Generally, outliers that occur due to error (in measurements, records, and so on) are excluded. On the other hand, outliers that reflect not errors but new information or a trend are usually left in the dataset. A solution to the problem of the influence of outliers when using the range is to replace the range with an interval in which the “non-outliers” are located, and then to scale along this interval. For this, a measure of asymmetry with respect to the median is used. In contrast to the mean, the median is practically insensitive to outliers and to distribution skewness; therefore, it is optimal to use it as the “zero” value when centering. The outlier identification technique is based on the interquartile method: data that are more than 1.5 interquartile ranges (IQR) below the first quartile or above the third quartile are treated as outliers:


$$IQR_j = Q_{3j} - Q_{1j}, \quad \forall j = 1, \ldots, n. \qquad (3.20)$$

The interquartile range IQR is the difference between the 75th and 25th percentiles of the data, i.e., the interval that contains the “central” 50% of the data in the set. In some cases (long tails, large sample size) the 3·IQR interval is used. A significant problem is that this method is symmetric: the resulting “confidence interval” (1.5·IQR) is the same for both small and large values of the attribute. If the distribution is not symmetric, many outlier anomalies on the “short” side will simply be hidden by this interval. A calculation of the boundaries of the “confidence interval” that takes the asymmetry of the distribution into account was proposed in [24]. The idea is to compute boundaries that account for the skewness, but reduce to the usual 1.5·IQR in the symmetric case. A suitable formula for the boundaries was sought so that the share of observations flagged as outliers would not exceed that of the normal distribution under the 1.5·IQR rule, approximately 0.7%. Finally, the following result is obtained. When MC ≥ 0, all observations outside the interval

$$\left[\,Q_1 - 1.5\,e^{-4MC}\cdot IQR,\ \ Q_3 + 1.5\,e^{3MC}\cdot IQR\,\right] \qquad (3.21)$$

are flagged as potential outliers. For MC < 0, the interval becomes

$$\left[\,Q_1 - 1.5\,e^{-3MC}\cdot IQR,\ \ Q_3 + 1.5\,e^{4MC}\cdot IQR\,\right]. \qquad (3.22)$$
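The adjusted fences (3.21)–(3.22) can be sketched as follows. The quartile convention (`method="inclusive"`) is an assumption, since the book does not fix one, and the medcouple is the naive O(m²) version; applied to the test vector from Figs. 3.6 and 3.7 the rule flags two observations.

```python
# Adjusted-boxplot outlier fences per Eqs. (3.21)-(3.22): a sketch
import math
import statistics

def medcouple(x):
    med = statistics.median(x)
    lower = [v for v in x if v <= med]
    upper = [v for v in x if v >= med]
    kernels = [((xk - med) - (med - xi)) / (xk - xi)
               for xi in lower for xk in upper if xi != xk]
    return statistics.median(kernels)

def adjusted_fences(x):
    q1, _, q3 = statistics.quantiles(x, n=4, method="inclusive")
    iqr = q3 - q1
    mc = medcouple(x)
    if mc >= 0:                                  # Eq. (3.21)
        lo = q1 - 1.5 * math.exp(-4 * mc) * iqr
        hi = q3 + 1.5 * math.exp(3 * mc) * iqr
    else:                                        # Eq. (3.22)
        lo = q1 - 1.5 * math.exp(-3 * mc) * iqr
        hi = q3 + 1.5 * math.exp(4 * mc) * iqr
    return lo, hi

X = [18, 42, 45, 45, 56, 58, 60, 89]
lo, hi = adjusted_fences(X)
outliers = [v for v in X if v < lo or v > hi]
```

For MC = 0 the fences reduce to the classical Q1 − 1.5·IQR and Q3 + 1.5·IQR.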

For the symmetric case, according to formulas (3.21) and (3.22), the interval is 1.5·IQR. In the case of long tails (as, for example, with an exponential distribution), too much data falls into such “outliers,” sometimes more than 7%, which requires the selective use of other coefficients, for example, ±3·IQR.

In MCDM problems, if a value is recognized as an outlier, then deleting the observation means excluding the corresponding alternative from the problem. Therefore, if a value is recognized as an outlier but the observation (alternative) should not be removed, a possible solution is to perform normalization on the data that fall within the “confidence interval.” Only values without outliers need to fall in the required range; that is, only “normal” data land in the specified range, and the outliers themselves are not removed. Without “manual” adjustment, the following outlier treatment solutions are possible:

1. outliers are taken to the boundary: these values are equated to the nearest boundary of the desired range, or
2. outliers are taken beyond the boundary: they are normalized with scale factors determined from the truncated data (without outliers). In this case, the range of values will be wider than with normalization without identifying outliers, and the proportions of the data are preserved.

Fig. 3.6 Data normalization (Vec method) with outlier identification (IQRa-method) and their subsequent processing

Figures 3.6 and 3.7 show an example of test data normalization using the Vec and Max-Min methods with outlier identification and subsequent processing. The data vector is X = (18, 42, 45, 45, 56, 58, 60, 89). According to the identification result (Figs. 3.6 and 3.7) obtained by the method described above, two observations in the data “drop out” of the general trend: potential “outliers” (blue dots). Under linear normalization, an “outlier” lowers the values of the alternatives for the given attribute. This may, in particular, cause other attributes to be prioritized in the overall contribution during aggregation. The conversion of outlier values is done in two different ways: IQRa(1) and IQRa(2). In the first variant, the outlier values are taken to the boundary. At the same time, the values of the other alternatives increase: the implicit “pressure” of the outlier on the values of the other alternatives has been removed. The asymmetry (μ3 and MS) decreased. However, the data have been changed, and a justification for this method is required. The second option does not change the value of the outlier relative to the remaining data, since the linear transformation preserves the proportions. Data compression is reduced, and the range of normalized values increases; in particular, the mean and the median increase.
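The two treatments can be sketched as follows; Max-Min scaling is used for concreteness, and the fence values are illustrative (roughly those produced by the adjusted-boxplot rule for this data), not prescribed by the text.

```python
# Two treatments of flagged outliers before Max-Min normalization: a sketch.
# "IQRa(1)"/"IQRa(2)" follow the variant names used in the text.
X = [18, 42, 45, 45, 56, 58, 60, 89]
lo_fence, hi_fence = 28.1, 84.9           # assumed (illustrative) fences

inliers = [x for x in X if lo_fence <= x <= hi_fence]
lo, hi = min(inliers), max(inliers)       # scale factors from truncated data

# IQRa(1): clip outliers to the nearest boundary, then normalize to [0, 1]
clipped = [min(max(x, lo), hi) for x in X]
r1 = [(x - lo) / (hi - lo) for x in clipped]

# IQRa(2): keep the outlier values and normalize with the truncated-data
# scales; normalized outliers land outside [0, 1], proportions are preserved
r2 = [(x - lo) / (hi - lo) for x in X]
```

In `r1` every value lies in [0, 1] and the outliers sit on the boundaries; in `r2` the outliers fall below 0 and above 1 while the inliers keep their mutual proportions.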

Fig. 3.7 Data normalization (Max-Min-method) with outlier identification (IQRa-method) and their subsequent processing

Thus, the procedure for identifying and processing outliers yields a clearly useful result when comparing objects with mixed features.

3.5 Non-linear Normalization: General Principles

Non-linear normalization methods have a specific application and are used when it is necessary to pre-process the initial data in the case of significant asymmetry (skewness), or when it becomes necessary to strengthen or weaken attribute values. The general approach to defining a non-linear transformation function, common to all attributes, is to use successive mappings:

$$a_{ij} \in [A_j, B_j] \ \xrightarrow{\text{Max-Min}}\ v_{ij} \in [0, 1] \ \xrightarrow{f(v)}\ r_{ij} \in [0, 1], \qquad (3.23)$$

or

$$a_{ij} \in [A_j, B_j] \ \xrightarrow{\text{Max-Min}}\ v_{ij} \in [0, 1] \ \xrightarrow{2cv - c}\ u_{ij} \in [-c, c] \ \xrightarrow{f(u)}\ r_{ij} \in [0, 1]. \qquad (3.24)$$
At the first step, a linear transformation is necessarily applied in order to preserve the proportions between the natural and normalized attribute values; in effect, a linear normalization is performed. The transformation function f(x) of the second step is strictly monotonic for STB and LTB problems and one-extremal for NTB problems. If this function is the same for all criteria, then its domain of definition must also be the same; therefore, it is advisable to use the Max-Min normalization method with the range [0, 1].

Fig. 3.8 Non-linear transformation of normalized values using strictly monotonic functions on the interval [0, 1]

Scheme (3.23) is applied when f(x) is defined on [0, 1], for example, f(x) = x^2 or f(x) = x^0.5 (Fig. 3.8). In the above example, the function f(x) = x^2 provides a “weakening” of the data on the entire interval [0, 1]: the compression of data close to 0 is stronger than near 1. Data transformation using the function f(x) = x^0.5, on the contrary, provides an “amplification” of the data over the entire interval [0, 1]. The compression in Fig. 3.8 is illustrated by a diagram in the form of a spring. Scheme (3.24) is used when f(x) is defined on an interval symmetric about zero, for example, the logistic function or the error function (Fig. 3.9). The exponential function quickly converges to its saturation values of 0 and 1; therefore, the functions shown in the example provide data compression close to 1 and 0, i.e., they strengthen attribute values close to 1 and weaken values close to 0. The values at the ends of the interval [-c, c] can be redefined if necessary: f(-c) = 0, f(c) = 1. Applying a symmetric function, 1 - f(x) = f(-x), provides the same compression at both ends of the range of values. The degree of compression is determined by the parameters of the function, such as the rate of logistic growth or the steepness of the curve. The compression in Fig. 3.9 is illustrated by a diagram in the form of a spring compressed at the ends of the interval.
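Schemes (3.23) and (3.24) can be sketched as follows; the choice c = 3 for the logistic example is arbitrary, made here only so that the saturation at the ends of the interval is visible.

```python
# Sketch of schemes (3.23) and (3.24): Max-Min to [0, 1], then a strictly
# monotonic second-step transform; c = 3 is an arbitrary illustrative choice.
import math

def maxmin(a):
    lo, hi = min(a), max(a)
    return [(x - lo) / (hi - lo) for x in a]

def weaken(v):          # f(v) = v^2: stronger compression near 0
    return v ** 2

def amplify(v):         # f(v) = v^0.5: stretches values near 0
    return v ** 0.5

c = 3.0
def logistic(v):        # scheme (3.24): u = 2cv - c in [-c, c], then logistic
    u = 2 * c * v - c
    return 1.0 / (1.0 + math.exp(-u))

a = [18, 42, 45, 45, 56, 58, 60, 89]
v = maxmin(a)
r_weak = [weaken(x) for x in v]
r_amp = [amplify(x) for x in v]
r_log = [logistic(x) for x in v]
```

All three transforms are strictly increasing, so the ordering of the alternatives on each attribute is preserved; only the spacing (and hence the aggregated ratings) changes.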

Fig. 3.9 Non-linear transformation of normalized values using the error function and the logistic function on a symmetrical, relative to zero, interval

It should be noted that any strictly monotonic function can be used for a non-linear transformation, since the domain of definition and the range of values are easily scaled using a linear transformation. The main non-linear methods of normalization are described in detail in the ninth chapter.

3.6 Target Inversion in Multivariate Normalization

For a situation where different features have opposing goals (STB and LTB goals), their coordination is required. While in problems of vector (multi-criteria) optimization (MODM) it is enough to change the sign of the objective function [25], in MCDM problems the values themselves must also be inverted. If the aggregation of particular features is performed by an additive method, then changing the values is not a safe operation, since it changes the integral rating of the alternatives. MCDM traditionally inverts the values of cost criteria at the normalization stage by using a strictly decreasing function. Given that normalization precedes feature aggregation, order inversion is possible in three ways:

1. invert the natural values of the cost attributes and then normalize all attributes using the same algorithm,
2. normalize all attributes using the same algorithm and then invert the normalized cost attribute values,

3. normalize the benefit attributes using a strictly increasing function and the cost attributes using a strictly decreasing function (Fig. 3.1), which changes the ordering of the data.

The first and second approaches are not equivalent, and in the third case the decreasing transformation function must be chosen correctly, so that the interpretation of the normalized values of the cost attributes corresponds to that of the profit attributes. For the situation where a nominal value is better (NTB), all three options remain available, with the difference that a one-extremal function with a maximum is used when the target is a larger value, and a one-extremal function with a minimum when the target is a smaller value. For more information on inverting normalized values, see Chap. 5.
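A minimal sketch of the non-equivalence of the first and second routes, using the Max method with a reciprocal inversion 1/a of a cost attribute (the data vector is invented): the two routes produce the same ordering here, but different normalized values, and therefore different contributions to an additive rating.

```python
# Non-equivalence of inversion routes 1 and 2 for a cost attribute: a sketch
a = [20.0, 40.0, 80.0]          # invented cost-attribute values (smaller = better)

# Route 1: invert the natural values (1/a), then normalize with Max
inv_nat = [1.0 / x for x in a]
r1 = [x / max(inv_nat) for x in inv_nat]

# Route 2: normalize with Max, then invert the normalized values as 1 - r
r = [x / max(a) for x in a]
r2 = [1.0 - x for x in r]
```

Both `r1` and `r2` rank the cheapest alternative first, yet the value profiles differ (`r1` is [1.0, 0.5, 0.25], `r2` is [0.75, 0.5, 0.0]), so an additive aggregation over several criteria can produce different final ratings.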

3.7 Isotropy of Scales of Normalized Values

A feature of multidimensional normalization is that attributes are measured in a wide variety of measurement scales: nominal, ordinal, and metric. Attributes measured in metric scales have a wide variety of units, scales, reference points, and variation intervals. Therefore, depending on the method used, the normalized values may also have different reference points, variation intervals, and scales. Let us define the attribute domain as the area of attribute values (natural or normalized) of all alternatives with respect to the jth criterion. The location of the domains in the interval [0, 1] can vary significantly between normalization methods, as can the density of values within a domain.

Two main approaches are used when choosing scales for normalized values. The first approach is to bring the data to one common scale. This means that the range of normalized values [c, d] is the same for all attributes. In this case, the normalization is “isotropic”: the coverage area of the multidimensional cloud of normalized values is an n-dimensional cube. The disadvantage of this approach, however, is the lack of a meaningful interpretation of the normalized values of the various attributes. The reduction of data to one common scale is presented in more detail in the seventh chapter.

The second approach is to reduce the data to a conditionally common scale. This means that the area of normalized values [c_j, d_j] differs between attributes, but the scales are interpreted in the same way. The choice is determined by the normalization method and depends on the range of the data and the distribution of values in the domain. In this case, the normalization is not “isotropic”; that is, it compresses the data cloud more strongly in some directions and less in others. Despite some violation of the data structure (mutual distances), this approach is considered generally accepted.
Figure 3.10 shows anisotropic normalization by the Max, Sum, Vec, dSum, and MS(Max) methods, and isotropic normalization by the Max-Min method, for a three-dimensional problem (data from Table 2.1: the second, fourth, and fifth criteria).

Fig. 3.10 Isotropic (Max-Min) and anisotropic normalization for a problem with three criteria

Although the differences in the range of normalized values may not be significant, in some cases the subsequent result of processing the normalized data may change significantly (sensitivity of the solution to the normalization method). In the above example, for the Max, Sum, and Vec methods the alternatives of the first and second ranks (WSM, TOPSIS, and WASPAS aggregation, equal criteria weights) have numbers 7 and 3, while for the dSum, Max-Min, and MS(Max) normalization methods they are 8 and 6, respectively. In the case of isotropic normalization, the area of normalized values is the same, which makes it possible to eliminate the priority of individual features when aggregating normalized values.
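The contrast between the two approaches can be sketched by collecting the domain ranges of two invented attribute columns under four of the basic linear methods; only Max-Min yields the isotropic range [0, 1] for every attribute.

```python
# Domain ranges after four basic normalizations: isotropy check (a sketch)
def max_norm(a):
    return [x / max(a) for x in a]

def sum_norm(a):
    return [x / sum(a) for x in a]

def vec_norm(a):
    k = sum(x * x for x in a) ** 0.5
    return [x / k for x in a]

def maxmin_norm(a):
    lo, hi = min(a), max(a)
    return [(x - lo) / (hi - lo) for x in a]

cols = [[3, 6, 9], [100, 400, 500]]   # two invented attributes, different scales
domains = {
    norm.__name__: [(min(norm(c)), max(norm(c))) for c in cols]
    for norm in (max_norm, sum_norm, vec_norm, maxmin_norm)
}
# Max-Min maps every attribute onto exactly [0, 1] (isotropic); the other
# methods give attribute-dependent (anisotropic) domains.
```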

3.8 Impact of the Choice of Normalization Method on the Rating

In accordance with the multi-criteria decision-making model (2.1), normalization explicitly affects the rating of alternatives. How sensitive the final ratings are to the choice of normalization method has been discussed in many studies [1–13]. However, this issue has not yet been finally resolved. This is due to the complexity of the task: according to the rank model (2.1), the rating is determined by the decision matrix and the design of the decision-making model, i.e., the choice of the aggregation method, the method of estimating the criteria weights, the normalization method, and other model parameters. This influence is complex, and for different tasks the influence of the design on the solution is different.

The first example (Fig. 3.11) demonstrates a case in which the rating is independent of the choice of normalization method. The heading of each variant (subplot) includes the normalization method and the numbers of the alternatives of ranks I, II, and III, respectively. For this example, a decision matrix was generated with a range similar to that of the decision matrix in Table 2.1. The relative difference in the performance indicators for all normalization methods is high, which is sufficient for distinguishing the alternatives (Table 3.2).

The following example demonstrates a strong dependence of the rating on the choice of normalization method. If several alternatives have some of their attributes “strong” and approximately as many “weak,” then the performance indicators of such alternatives differ only slightly, and the alternatives are hardly distinguishable. Under such conditions, the influence of the normalization method on the result of attribute aggregation and on the ranking of alternatives manifests itself clearly, and the solution becomes sensitive to errors in estimating the initial attribute values. Figure 3.12 illustrates the relative position of the domains of normalized values for computer-generated decision matrices (with a range of values similar to that of the decision matrix in Table 3.1).
For this example, the rank-1 alternatives are different for each of the five basic linear normalization methods. Figure 3.12 shows a situation in which some of the attributes are “strong” and approximately as many are “weak”; the performance indicators of such alternatives differ only slightly, and therefore the alternatives are hardly distinguishable. For the presented example (SAW), the values of the performance indicators are given in Table 3.3. The relative difference in the performance indicators for some normalization methods does not exceed 1–3%. In such a situation, ranking alternatives by absolute value is questionable. Similar results are obtained when other aggregation methods (WPM, TOPSIS, GRA, etc.) are used. Given the limited set of methods, it is not difficult to perform the decision analysis for various normalization methods. If the results remain unchanged, they are called reliable; otherwise, they are sensitive. In the latter case, a sensitivity analysis is required, and/or several alternatives should be proposed for the final rating. A detailed sensitivity analysis is presented in Chaps. 12–14.
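Such sensitivity can be reproduced with a tiny synthetic example (the matrix below is invented, not the book's data): under Max normalization the SAW winner is one alternative, under Max-Min another, with equal criteria weights in both cases.

```python
# Sensitivity of the SAW rank-1 alternative to the normalization method: a sketch
def max_norm(col):
    return [x / max(col) for x in col]

def maxmin_norm(col):
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

# 3 alternatives x 2 benefit criteria (invented for illustration)
A = [[100, 5],
     [101, 1],
     [ 90, 10]]

def saw_best(A, norm):
    cols = [norm(list(c)) for c in zip(*A)]          # column-wise normalization
    scores = [sum(col[i] for col in cols) for i in range(len(A))]
    return scores.index(max(scores))                 # index of rank-1 alternative

best_under_max = saw_best(A, max_norm)
best_under_maxmin = saw_best(A, maxmin_norm)
```

The first criterion's values are tightly clustered, so Max barely separates them while Max-Min stretches them over [0, 1]; this is enough to move the rank-1 alternative.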


Fig. 3.11 An illustration of the mutual arrangement of domains of normalized values and local priorities of alternatives of ranks I–III. A decision matrix for which the rank-I alternatives are the same for the 5 basic linear normalization methods. SAW method of aggregation

Table 3.2 Performance indicators of the alternatives of ranks I–III (SAW method of aggregation)

Normalization   Ranks I–III   Q1      Q2      Q3      dQ1     dQ2     dQ3 (relative change, %)
Max             7, 8, 2       0.9431  0.9266  0.9126  13.53   11.44    4.83
Sum             7, 8, 2       0.1415  0.1390  0.1365  11.33   11.70    5.30
Vec             7, 8, 2       0.3963  0.3893  0.3822  11.51   11.68    5.35
Max-Min         7, 5, 8       0.7404  0.6751  0.6620  21.64    4.33    8.83
dSum            7, 5, 8       0.9260  0.9237  0.9073   3.17   23.57    1.53


Fig. 3.12 Decision matrix for which the rank-I alternatives are different for the 5 basic linear normalization methods. An illustration of the mutual arrangement of domains of normalized values and local priorities of alternatives of ranks I–III. SAW method of aggregation

Table 3.3 Performance indicators of the alternatives of ranks I–III (SAW method of aggregation)

Normalization   Ranks I–III   Q1      Q2      Q3      dQ1     dQ2     dQ3 (relative change, %)
Max             1, 6, 2       0.9030  0.9004  0.8972   3.86    3.46    1.11
Sum             2, 6, 1       0.1348  0.1348  0.1344   0.35    3.21    3.18
Vec             6, 2, 1       0.3754  0.3752  0.3746   0.35    1.27    3.92
Max-Min         3, 1, 5       0.6110  0.6057  0.5981   3.30    4.66   13.59
dSum            5, 1, 3       0.9084  0.9026  0.9006  13.08    4.48   10.08

3.9 Conclusions

In the vast majority of studies on multivariate analysis, the features of the different types of data normalization and the reasons for their use are either not considered at all or are mentioned only in passing, without disclosing their essence. There is a “blind” use of individual normalization methods. Upon closer examination, it turns out that some attributes were inadvertently placed in a privileged position and influenced the result much more strongly. The basic principles of multivariate data normalization presented in this chapter, and the problems that arise when using individual methods, serve as a guide to the analysis and application of adequate normalization methods.

References

1. Zavadskas, E. K., Ustinovichius, L., Turskis, Z., Peldschus, F., & Messing, D. (2002). LEVI 3.0 – Multiple criteria evaluation program for construction solutions. Journal of Civil Engineering and Management, 8(3), 184–191.
2. Milani, A. S., Shanian, R., Madoliat, R., & Nemes, J. A. (2005). The effect of normalization norms in multiple attribute decision making models: A case study in gear material selection. Structural and Multidisciplinary Optimization, 29(4), 312–318.
3. Peldschus, F. (2007). The effectiveness of assessment in multiple criteria decisions. International Journal of Management and Decision Making, 8(5–6), 519–526.
4. Migilinskas, D., & Ustinovichius, L. (2007). Normalisation in the selection of construction alternatives. International Journal of Management and Decision Making, 8(5–6), 623–639.
5. Zavadskas, E. K., Kaklauskas, A., Turskis, Z., & Tamošaitien, J. (2008). Selection of the effective dwelling house walls by applying attributes values determined at intervals. Journal of Civil Engineering and Management, 14, 85–93.
6. Ginevičius, R. (2008). Normalization of quantities of various dimensions. Journal of Business Economics and Management, 9(1), 79–86.
7. Liping, Y., Yuntao, P., & Yishan, W. (2009). Research on data normalization methods in multi-attribute evaluation. Proc. International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 2009, 1–5.
8. Chakraborty, S., & Yeh, C. H. (2009). A simulation comparison of normalization procedures for TOPSIS. Proc. of CIE 2009 International Conference on Computers and Industrial Engineering, Troyes, 2009, 1815–1820.
9. Stanujkič, D., Đordevič, B., & Đordevič, M. (2013). Comparative analysis of some prominent MCDM methods: A case of ranking Serbian banks. Serbian Journal of Management, 8(2), 213–241.
10. Chatterjee, P., & Chakraborty, S. (2014). Investigating the effect of normalization norms in flexible manufacturing system selection using multi-criteria decision-making method. Journal of Engineering Science and Technology Review, 7(3), 141–150.
11. Çelen, A. (2014). Comparative analysis of normalization procedures in TOPSIS method: With an application to Turkish deposit banking market. Informatica, 25(2), 185–208.
12. Aouadni, S., Rebai, A., & Turskis, Z. (2017). The Meaningful Mixed Data TOPSIS (TOPSIS-MMD) method and its application in supplier selection. Studies in Informatics and Control, 26(3), 353–363. https://doi.org/10.24846/v26i3y201711
13. Vafaei, N., Ribeiro, R. A., & Camarinha-Matos, L. M. (2018). Data normalization techniques in decision making: Case study with TOPSIS method. International Journal of Information and Decision Sciences, 10(1), 19–38.
14. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342.
15. Sałabun, W., Wątróbski, J., & Shekhovtsov, A. (2020). Are MCDA methods benchmarkable? A comparative study of TOPSIS, VIKOR, COPRAS, and PROMETHEE II methods. Symmetry, 12, 1549. https://doi.org/10.3390/sym12091549
16. Zeng, Q. L., Li, D. D., & Yang, Y. B. (2013). VIKOR method with enhanced accuracy for multiple criteria decision making in healthcare management. Journal of Medical Systems, 37, 1–9.
17. Aytekin, A. (2021). Comparative analysis of normalization techniques in the context of MCDM problems. Decision Making: Applications in Management and Engineering, 4(2), 1–25. https://doi.org/10.31181/dmame210402001a
18. Harrington, J. (1965). The desirability function. Industrial Quality Control, 21(10), 494–498.
19. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey. Springer.
20. Maronna, R. A., Martin, R. D., & Yohai, V. J. (2006). Robust statistics: Theory and methods (2nd ed.). Wiley.
21. Zavadskas, E. K., & Turskis, Z. (2008). A new logarithmic normalization method in games theory. Informatica, 19(2), 303–314.
22. Skewness. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Skewness
23. Brys, G., Hubert, M., & Rousseeuw, P. J. (2005). A robustification of independent component analysis. Journal of Chemometrics, 19, 364–375.
24. Hubert, M., & Vandervieren, E. (2008). An adjusted boxplot for skewed distributions. Computational Statistics & Data Analysis, 52(12), 5186–5201.
25. Hwang, C. L., & Masud, A. S. M. (1979). Multiple objective decision making methods and applications: A state of the art survey. Lecture Notes in Economics and Mathematical Systems. Springer.

Chapter 4
Linear Methods for Multivariate Normalization

Abstract This chapter introduces the basic linear methods of multivariate normalization and defines some important invariant properties of linear normalization methods: disposition invariance of natural and normalized values, rating invariance for isotropic data scaling, re-normalization invariance, and skewness invariance. A meaningful interpretation of linear normalized scales is given. Using the invariant properties of linear normalization methods often eliminates simple problems and avoids obvious errors when solving MCDM problems.

Keywords Multivariate normalization · Linear methods · Re-normalization · Invariant properties · Interpretation of normalization scales

4.1 Basic Linear Methods for Multivariate Normalization

Linear normalization is a combination of two operations: shifting the values (shift) by a_j^* units and compressing the natural values of the attributes by k_j times:

$$r_{ij} = \frac{a_{ij} - a_j^{*}}{k_j}, \qquad (4.1)$$

where a_{ij} and r_{ij} are the natural and normalized values of the jth attribute of the ith alternative, respectively, and a_j^* and k_j are pre-assigned numbers, which we will call characteristic scales. In the case a_j^* = 0, the transformation (4.1) is a linear transformation without a shift; otherwise, with a shift. If k_j > 1, the natural values of the attributes are compressed by k_j times; if 0 < k_j < 1, they are stretched by 1/k_j times. The stretch-compression operation during normalization changes the measurement scale, and the offset shifts the attribute measurement scale to a new reference point (scale relativity). The successive execution of the compression and displacement operations leads to a loss of information about the initial data by one or two degrees of freedom, respectively (in order to restore the original data from the normalized data, it is necessary to know the compression factor and the offset value).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_4

Table 4.1 presents the 8 main linear normalization methods most commonly used in multi-criteria selection problems [1–11]. The normalized area is the domain of definition for the aggregation function of the attributes. In what follows, to designate the normalized values of each attribute, we will use the term attribute domain, which is a point set on the segment [r_j^min; r_j^max]. Figures 4.1 and 4.2 illustrate the linear normalization of one of the attributes for various linear normalization methods. The example uses the values of the second attribute of the decision matrix presented in Table 4.2. The decision matrix has dimensions [8 × 5]: 8 alternatives and 5 criteria, each alternative being defined by a set of 5 attributes in the context of the selected criteria. The third and fifth features are cost attributes, meaning that smaller values are preferred when choosing an alternative.

The location of the domains on the interval [0, 1] and the density of values in a domain can vary significantly between normalization methods. Despite this, the linear normalization methods are linear transformations of each other. This is achieved by applying a shift of the data to the fixed point 0, scaling, and then shifting to a new reference point. Let r_{ij}^{(1)} and r_{ij}^{(2)} be two different sets of normalized values obtained by different linear methods:

$$r_{ij}^{(1)} = \frac{a_{ij} - a_j^{*(1)}}{k_j^{(1)}}, \qquad (4.10)$$

$$r_{ij}^{(2)} = \frac{a_{ij} - a_j^{*(2)}}{k_j^{(2)}}, \qquad (4.11)$$

where a_j^{*(1)}, k_j^{(1)} and a_j^{*(2)}, k_j^{(2)} are the characteristic scales of the first and second transformations (constants that differ between attributes). Then the linear transformation of the normalized values r_{ij}^{(1)} into r_{ij}^{(2)} is as follows:

$$r_{ij}^{(2)} = \frac{k_j^{(1)}\, r_{ij}^{(1)} + a_j^{*(1)} - a_j^{*(2)}}{k_j^{(2)}} = \frac{k_j^{(1)}}{k_j^{(2)}}\, r_{ij}^{(1)} + \frac{a_j^{*(1)} - a_j^{*(2)}}{k_j^{(2)}} = k_j\, r_{ij}^{(1)} + b_j. \qquad (4.12)$$

All methods of linear normalization according to (4.12) are linear transformations of each other. However, in MCDM problems we deal with several criteria, and each criterion has its own measurement scale. Therefore, for multiple criteria, it is not possible to convert from one normalization method to another using the same transformation.
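The conversion (4.12) between two linear methods is easy to verify numerically. A minimal sketch, using the second attribute of Table 4.2 and the Max and Max-Min methods (characteristic scales taken from Table 4.1):

```python
# Illustration of Eq. (4.12): values normalized by one linear method can be
# mapped onto another by a per-attribute linear transform r2 = k*r1 + b.
a = [85, 83, 71, 76, 74, 80, 71, 81]  # second attribute of Table 4.2

# Method 1: Max (shift a* = 0, compression k = max a)
s1, k1 = 0.0, max(a)
r1 = [(x - s1) / k1 for x in a]

# Method 2: Max-Min (shift a* = min a, compression k = max a - min a)
s2, k2 = min(a), max(a) - min(a)
r2 = [(x - s2) / k2 for x in a]

# Eq. (4.12): k = k1/k2, b = (a*_1 - a*_2)/k2
k, b = k1 / k2, (s1 - s2) / k2
r2_from_r1 = [k * v + b for v in r1]

assert all(abs(u - v) < 1e-12 for u, v in zip(r2, r2_from_r1))
```

The coefficients k and b are specific to this attribute; a different column of the matrix would need its own pair, which is exactly why a single common transformation does not exist for several criteria.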

Table 4.1 Basic linear methods for the multidimensional normalization of the decision matrix

| # | Method^a | Formula $r_{ij} = f(a_{ij})$ | Compression $k_j$ | Displacement $a_j^*$ | Range of $r_{ij}$ |
|---|---|---|---|---|---|
| (4.2) | Max | $r_{ij} = \dfrac{a_{ij}}{a_j^{\max}}$ | $a_j^{\max}$ | 0 | (0; 1] |
| (4.3) | Sum | $r_{ij} = \dfrac{a_{ij}}{\sum_{i=1}^{m} a_{ij}}$ $\big(\sum_{i=1}^{m} a_{ij} \ne 0\big)$ | $\sum_{i=1}^{m} a_{ij}$ | 0 | (0; 1) |
| (4.4) | Vec | $r_{ij} = \dfrac{a_{ij}}{\big(\sum_{i=1}^{m} a_{ij}^2\big)^{0.5}}$ | $\big(\sum_{i=1}^{m} a_{ij}^2\big)^{0.5}$ | 0 | (0; 1) |
| (4.5) | Max-Min | $r_{ij} = \dfrac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}}$ | $a_j^{\max} - a_j^{\min}$ | $a_j^{\min}$ | [0; 1] |
| (4.6) | dSum^b | $r_{ij} = 1 - \dfrac{a_j^{\max} - a_{ij}}{\sum_{i=1}^{m}\big(a_j^{\max} - a_{ij}\big)}$ | $\sum_{i=1}^{m}\big(a_j^{\max} - a_{ij}\big)$ | — | (0; 1] |
| (4.7) | Z-norm | $r_{ij} = \dfrac{a_{ij} - \bar{a}_j}{s_j}$, $\ \bar{a}_j = \frac{1}{m}\sum_{i=1}^{m} a_{ij}$, $\ s_j = \Big(\frac{1}{m}\sum_{i=1}^{m}\big(a_{ij} - \bar{a}_j\big)^2\Big)^{0.5}$ | $s_j$ | $\bar{a}_j$ | (−c; d] |
| (4.8) | mIQR^c | $r_{ij} = \dfrac{a_{ij} - md_j}{IQR_j}$, $\ md_j = \mathrm{median}_i(a_{ij})$ | $IQR_j$ | $md_j$ | (−c; d] |
| (4.9) | mMAD^d | $r_{ij} = \dfrac{a_{ij} - md_j}{s_j}$, $\ md_j = \mathrm{median}_i(a_{ij})$, $\ s_j = \Big(\frac{1}{m}\sum_{i=1}^{m}\big(a_{ij} - md_j\big)^2\Big)^{0.5}$ | $s_j$ | $md_j$ | (−c; d] |

Methods (4.2)–(4.4) are methods without displacement (a_j* = 0); methods (4.5)–(4.9) are methods with displacement (a_j* ≠ 0).

^a The abbreviation of a normalization method is determined by the semantic value of the compression factor. The abbreviation is also used as the name of the function that transforms natural values according to the normalization formula, for example, r_ij = Max(a_ij) = a_ij/a_j^max.
^b The dSum normalization method [10] is an example of a multi-step procedure, implemented as a combination of the Max-Min and Sum normalization methods and a double inversion (Inv) of the normalized values (see Chap. 5).
^c IQR — interquartile range.
^d mMAD — Median Absolute Deviation.
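The formulas of Table 4.1 can be sketched as one column-wise function (a minimal illustration; mIQR is omitted here because IQR conventions for small samples vary):

```python
# Column-wise sketch of the linear normalization methods of Table 4.1
# (one attribute = one list of m natural values).
def norm(col, method):
    m = len(col)
    mean = sum(col) / m
    srt = sorted(col)
    md = srt[(m - 1) // 2] if m % 2 else (srt[m // 2 - 1] + srt[m // 2]) / 2
    if method == "Max":                                    # (4.2)
        return [x / max(col) for x in col]
    if method == "Sum":                                    # (4.3)
        return [x / sum(col) for x in col]
    if method == "Vec":                                    # (4.4)
        return [x / sum(v * v for v in col) ** 0.5 for x in col]
    if method == "Max-Min":                                # (4.5)
        return [(x - min(col)) / (max(col) - min(col)) for x in col]
    if method == "dSum":                                   # (4.6)
        s = sum(max(col) - x for x in col)
        return [1 - (max(col) - x) / s for x in col]
    if method == "Z":                                      # (4.7)
        s = (sum((x - mean) ** 2 for x in col) / m) ** 0.5
        return [(x - mean) / s for x in col]
    if method == "mMAD":                                   # (4.9)
        s = (sum((x - md) ** 2 for x in col) / m) ** 0.5
        return [(x - md) / s for x in col]
    raise ValueError(method)

c2 = [85, 83, 71, 76, 74, 80, 71, 81]  # second attribute of Table 4.2
r = norm(c2, "Max-Min")
```

Each call normalizes a single attribute on its own scale, which is the source of the domain displacements discussed below.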

4.1 Basic Linear Methods for Multivariate Normalization


Fig. 4.1 Domains of normalized values for various linear normalization methods without displacement (aj* = 0). Initial data according to Table 4.2, second attribute

The result of uniform scaling is similar (in a geometric sense) to the original. However, this is only true in the one-dimensional case. The stretching-compression coefficients differ between attributes, and so do the displacement coefficients. Therefore, in the multidimensional case, non-uniform (anisotropic) scaling occurs whenever at least one of the scaling factors differs from the others. A special case is directional scaling, or stretching in one direction. The consequence of non-uniform scaling is the displacement of the domains of different attributes relative to each other.

Figure 4.3 shows the normalized values and the relative position of the domains of five different attributes for the six main linear normalization methods. The decision matrix (initial data) is given in Table 4.2. For the cost criteria, the normalized values were inverted using the reverse sorting algorithm detailed in Chap. 5.

For the Max normalization method, the "upper" values of all attributes are the same and equal to 1. The "lower" values of the various attributes differ in some cases by more than a factor of 2, for example, for the second and fifth attributes in Fig. 4.3. A similarly strong bias and a significant difference in the ranges of the domains of the various attributes also occur for the Sum and Vec normalization methods.

There is no domain offset for the Max-Min normalization method. However, at least one of the values of each attribute is 0. This means that, for some alternative, there will be no contribution to the integral indicator from this attribute, which leads to the loss of the contributions of "weak" attributes to the overall performance of the alternative. A simple example illustrates the problem:

$$a_{i1} = \{6, 5, 5, 4, 4, 4\} \xrightarrow{\text{Max-Min}} r_{i1} = \{1, 0.5, 0.5, 0, 0, 0\}.$$

After normalization by the Max-Min method, the contribution to the efficiency indicator on the first criterion is 1 for the first alternative, 0.5 for the second and third, and 0 for the remaining alternatives (no contribution). While the natural attribute values of the alternatives differ on average by 20%, the principle of preserving the proportions of contributions (the principle of "horizontal" normalization) has not


Fig. 4.2 Domains of normalized values for various linear normalization methods with displacement (aj* ≠ 0). Initial data according to Table 4.2, second attribute

Table 4.2 Decision matrix D0 [8 × 5], benefit (+)/cost (−)

| Alternatives | C1 (+) | C2 (+) | C3 (−) | C4 (+) | C5 (−) |
|---|---|---|---|---|---|
| A1 | 6500 | 85 | 667 | 140 | 1750 |
| A2 | 5800 | 83 | 564 | 145 | 2680 |
| A3 | 4500 | 71 | 478 | 150 | 1056 |
| A4 | 5600 | 76 | 620 | 135 | 1230 |
| A5 | 4200 | 74 | 448 | 160 | 1480 |
| A6 | 5900 | 80 | 610 | 163 | 1650 |
| A7 | 4500 | 71 | 478 | 150 | 1056 |
| A8 | 6000 | 81 | 580 | 178 | 2065 |


Fig. 4.3 Normalized values and relative position of domains of five different attributes relative to each other for basic linear normalization methods. Initial data according to Table 4.2

been fulfilled. The situation when the values for all attributes except one are the same looks like this:

$$a_{i1} = \{5, 4, 4, 4, 4, 4\} \xrightarrow{\text{Max-Min}} r_{i1} = \{1, 0, 0, 0, 0, 0\},$$

$$a_{i1} = \{5, 4, 4, 4, 4, 4\} \xrightarrow{\text{Max}} r_{i1} = \{1, 0.8, 0.8, 0.8, 0.8, 0.8\},$$

which gives the first alternative a significant priority under Max-Min normalization, in contrast to, for example, the Max normalization method. Handling zero and negative values is one of the necessary requirements for a normalization method under a number of aggregation methods. For example, for aggregation methods such as WPM, WASPAS, and COPRAS, the input data area is the interval (0, 1]; therefore, the Max-Min normalization method is excluded for these methods.

For the dSum normalization method, the "upper" values of all attributes are the same and equal to 1, as for the Max normalization method. However, the difference in "lower" values across attributes is not as significant as for Max. In this context, dSum is more efficient than the Max method (at least in this example). The range and position of the domain of the fifth attribute is indicative: whereas for the Max, Sum, and Vec normalization methods the range of the fifth attribute's domain is the largest and differs significantly, for the dSum normalization method it is comparable to (and even smaller than) the ranges of the domains of the other attributes.

For the Z-score normalization method, domains are aligned on average and the variances of the various attributes are the same. However, the Z-score produces negative values, which is inadmissible for some attribute aggregation methods (such as the WPM model). Handling negative values is one of the necessary requirements for a normalization method. In Fig. 4.3, the results of normalization for the Z-score method


are given both in the scale [0, 1] and in their own measurement scale, which corresponds to a linear transformation of the scales. The solution to this problem is presented below in Sect. 4.7 using a transformation of the normalized values.

Z-score normalization and standard Z-scores should not be confused. In multi-criteria decision problems, the population mean and the population standard deviation are unknown. The standard score can be calculated using the sample mean and sample standard deviation as estimates of the population values. However, when the distribution law of the attribute is unknown, and when only a small number of alternatives is available for observation, the information content of Z-scores is relative. Comparing and aggregating the Z-scores of two or more alternatives should be done only with careful additional justification of such an operation.

For the mIQR normalization method, the domains are biased toward the median of the attribute. mIQR normalization, like the Z-score, produces negative values. As with the Z-score, the normalized values can easily be transformed into the interval [0, 1]; in Fig. 4.3, the results of normalization for the mIQR method are given both in the scale [0, 1] and in their own measurement scale, which corresponds to a linear transformation of the scales. A peculiarity of mIQR normalization is the displacement of the median when inverting the target or inverting the data. In Fig. 4.3, the median values of each attribute are shown in the graph: for profit criteria the median values are 0, while for cost criteria these values are displaced (the third and fifth criteria in the example).

The above analysis, based on a graphical representation of the normalized values (Fig. 4.3) for all attributes under various normalization methods, allows us to estimate the domain displacement and to preliminarily evaluate the priority of individual criteria.
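The "lost contribution" effect of the Max-Min method can be reproduced in a few lines (a minimal sketch using the six-value example from the text):

```python
# After Max-Min normalization, the weakest alternatives contribute 0,
# while Max normalization preserves the proportions of contributions.
a = [6, 5, 5, 4, 4, 4]  # attribute values of six alternatives (example from the text)

r_maxmin = [(x - min(a)) / (max(a) - min(a)) for x in a]
r_max = [x / max(a) for x in a]

print(r_maxmin)  # [1.0, 0.5, 0.5, 0.0, 0.0, 0.0] -- three alternatives contribute nothing
print(r_max)     # all six alternatives keep a nonzero share
```

The natural values differ by at most 50%, yet Max-Min turns half of them into exact zeros, which is the distortion of "horizontal" proportions described above.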
Figures 4.4, 4.5, and 4.6 present similar illustrations for three applied problems of multi-criteria choice. Visually, priority belongs to those criteria whose normalized values are located higher and with greater density near the top point; for the Max normalization method, for example, these are criteria 2, 6, and 10 (Fig. 4.4).

A generalization of the main linear normalization methods above is a linear transformation with an arbitrary choice of the range and location of the domain on the interval [0, 1], determined by the following formula [14]:

$$r_{ij} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}} \cdot \big(Z_j - I_j\big) + I_j, \quad \forall i = 1, \dots, m; \ j = 1, \dots, n, \qquad (4.13)$$

where I_j and Z_j are the lower and upper boundaries of the domain of the jth attribute, respectively (0 ≤ I_j ≤ Z_j ≤ 1). However, with an arbitrary choice of the range and location of the domain, a meaningful interpretation of the normalized scales is lost.
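Formula (4.13) is a Max-Min scaling into an arbitrary target interval; a minimal sketch (the target interval [0.2, 0.9] is an arbitrary illustration, not from the text):

```python
# Generalized linear normalization (4.13): map attribute values linearly
# onto a chosen sub-interval [I_j, Z_j] of [0, 1].
def norm_interval(col, low, high):
    """Map attribute values linearly onto [low, high], 0 <= low <= high <= 1."""
    a_min, a_max = min(col), max(col)
    return [(x - a_min) / (a_max - a_min) * (high - low) + low for x in col]

c2 = [85, 83, 71, 76, 74, 80, 71, 81]  # second attribute of Table 4.2
r = norm_interval(c2, 0.2, 0.9)        # illustrative choice of I_j = 0.2, Z_j = 0.9
```

With low = 0 and high = 1 this reduces to the Max-Min method (4.5).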


Fig. 4.4 Normalized values and relative position of domains of 11 different attributes relative to each other for the problem of choosing the location of the logistics flows, D[8 × 11]. The location selection of tri-modal LC and logistical flows [12]

Fig. 4.5 Normalized values and relative position of domains of 13 different attributes relative to each other for the problem of rating banks, D[5 × 13]. A case of ranking Serbian banks [13]

4.2 Scaling Factor Ratios

The coefficients of stretching-compression of the values of each jth attribute for the Max-Min, Max, Vec, and Sum normalization methods satisfy the inequalities:

$$a_j^{\max} - a_j^{\min} \;\le\; a_j^{\max} \;\le\; \Big(\sum_{i=1}^{m} a_{ij}^2\Big)^{0.5} \;\le\; \sum_{i=1}^{m} a_{ij}, \quad \text{for } a_{ij} \ge 0. \qquad (4.14)$$


Fig. 4.6 Normalized values and mutual arrangement of domains of 7 different attributes relative to each other for the problem of selecting components in the manufacture of products, D [8 × 7]. Flexible manufacturing system selection [4]

In the above series of inequalities (4.14), the compression coefficient of the dSum method can take values from the second to the fifth position, and that of the Z-score the first or second position, depending on the data distribution. A statistical experiment based on 250,000 generated samples of data uniformly distributed over the interval [0; 1] showed that in 90% and 67% of cases, respectively, inequality (4.14) takes the following form for each attribute:

$$k_Z \;\overset{90\%}{\le}\; k_{\text{Max-Min}} \;\le\; k_{\text{Max}} \;\le\; k_{\text{Vec}} \;\overset{67\%}{\le}\; k_{\text{dSum}} \;\le\; k_{\text{Sum}}, \quad \text{for } a_{ij} \ge 0. \qquad (4.15)$$

The value of the compression ratio depends on the measurement scale of each of the attributes and determines the range and density of the normalized values in the interval [0; 1] (see Figs. 4.1, 4.2). Accordingly, the highest data density in the domain is achieved for the Sum method. For offset normalization methods (dSum, Z-score, mIQR), the value of the compression factor also determines the amount of displacement of domains of different attributes relative to each other. In particular, the domain range analysis for various normalization methods based on the results presented in Fig. 4.2 shows that a high value of the compression factor for the fifth attribute made it possible to reduce the range and offset of the fifth domain in the dSum normalization method compared to the Max, Sum, and Vec normalization methods, where the difference is significant. Thus, the analysis of compression ratios during normalization is one of the components in the selection of an appropriate method for normalizing multidimensional data.
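The deterministic part of the chain (4.14) can be checked with a small Monte Carlo sketch of the kind described above (here only 1,000 samples, and only the four coefficients whose ordering is guaranteed for non-negative data):

```python
# Monte Carlo sketch of the compression-coefficient ordering (4.14):
# for non-negative data, k_MaxMin <= k_Max <= k_Vec <= k_Sum always holds.
import random

random.seed(1)
trials, ok = 1000, 0
for _ in range(trials):
    a = [random.random() for _ in range(8)]  # m = 8 alternatives, values in [0, 1]
    k_maxmin = max(a) - min(a)
    k_max = max(a)
    k_vec = sum(x * x for x in a) ** 0.5
    k_sum = sum(a)
    if k_maxmin <= k_max <= k_vec <= k_sum:
        ok += 1
```

The positions of k_dSum and k_Z within the chain, by contrast, do depend on the data distribution, which is what the 90%/67% frequencies in (4.15) quantify.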

4.3 Invariant Properties of Linear Normalization Methods

Let us define a number of properties of linear data transformation needed for the further presentation and analysis of multidimensional normalization. We write the general linear transformation in the following form:

$$r_{ij} = k_j \cdot a_{ij} + b_j, \qquad (4.16)$$

where a_ij and r_ij are the natural and normalized values of the jth attribute of the ith alternative, respectively, and k_j and b_j are predetermined transformation coefficients.

4.3.1 Invariance of the Dispositions of Alternatives

One of the main requirements for data normalization is to preserve the information content of the data after transformation. It is reasonable to require that the proportions of natural and normalized attribute values be preserved. The relative distance between the values of the jth attribute of the ith and kth alternatives, reduced to the range of the jth attribute, is defined as the disposition of the ith and kth alternatives by the jth attribute:

$$d_{ikj}(a) = \frac{a_{ij} - a_{kj}}{\mathrm{rng}_j(a)}, \qquad (4.17)$$

where the range of the jth attribute is defined as:

$$\mathrm{rng}_j(a) = \max_i a_{ij} - \min_i a_{ij} = a_j^{\max} - a_j^{\min}. \qquad (4.18)$$

It is easy to show that for all linear normalization methods the dispositions between the natural and normalized values of alternatives are preserved.

Property 1 The disposition of values is invariant under a linear transformation: d_ikj(a) = d_ikj(r),

$$\frac{a_{ij} - a_{kj}}{a_j^{\max} - a_j^{\min}} = \frac{r_{ij} - r_{kj}}{r_j^{\max} - r_j^{\min}}, \quad i, k = 1, \dots, m, \ \forall j. \qquad (4.19)$$


Proof

$$d_{ikj}(r) = \frac{\big(k_j a_{ij} + b_j\big) - \big(k_j a_{kj} + b_j\big)}{\mathrm{rng}\big(k_j a_{ij} + b_j\big)} = \frac{k_j \big(a_{ij} - a_{kj}\big)}{k_j \cdot \mathrm{rng}\big(a_{ij}\big)} = \frac{a_{ij} - a_{kj}}{\mathrm{rng}_j(a)} = d_{ikj}(a), \quad \forall i, k = 1, \dots, m. \ \square$$

Keeping the dispositions of the attribute values of alternatives after normalization means that the result of uniform scaling is similar (in a geometric sense) to the original. But this holds only along the jth attribute (coordinate). In the one-dimensional case, linear normalization methods are combinations of each other: a linear transformation scales the image while leaving the dispositions of values invariant. For multi-objective problems, linear methods produce anisotropic scaling whenever at least one of the scaling factors differs from the others; a special case is directional scaling, or stretching in one direction. Preserving the dispositions of the attributes of alternatives after normalization means that when the measurement scale and bias of each attribute are changed, the normalized values reproduce the proportions of the natural ones. This property is well explained by the graphic illustrations (Figs. 4.7 and 4.8) of natural and normalized values, made using the technique of two dependent axes (YAxisLocation) in MatLab.
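Property 1 is easy to confirm numerically for all the linear methods of Table 4.1 (a minimal sketch on the second attribute of Table 4.2):

```python
# Numerical check of Property 1: dispositions d_ikj coincide for the natural
# values and for every linear normalization of the same attribute.
def dispositions(col):
    rng = max(col) - min(col)
    return [(col[i] - col[k]) / rng for i in range(len(col)) for k in range(len(col))]

c2 = [85, 83, 71, 76, 74, 80, 71, 81]  # second attribute of Table 4.2
mean = sum(c2) / len(c2)
std = (sum((x - mean) ** 2 for x in c2) / len(c2)) ** 0.5

variants = {
    "Max": [x / max(c2) for x in c2],
    "Sum": [x / sum(c2) for x in c2],
    "Vec": [x / sum(v * v for v in c2) ** 0.5 for x in c2],
    "Max-Min": [(x - min(c2)) / (max(c2) - min(c2)) for x in c2],
    "Z-score": [(x - mean) / std for x in c2],
}

d0 = dispositions(c2)
for name, r in variants.items():
    assert all(abs(u - v) < 1e-9 for u, v in zip(d0, dispositions(r))), name
```

The same check applied across *different* attributes fails, which is exactly the anisotropic-scaling effect discussed above.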

4.3.2 Isotropy of Scaling: Invariance of Rating

Scaling with the same scale factor for each axis direction (scaling in the multidimensional case) does not affect the ranking result when the MCDM model uses a linear (SAW) or homogeneous function (TOPSIS, GRA) as the aggregation function.

Property 2 The linear transformation u_ij = k·r_ij + b does not change the ranking if a linear function (SAW) is used to aggregate the attributes:

$$\text{if } Q_p(r) > Q_q(r), \text{ then for } u_{ij} = k \cdot r_{ij} + b \;\Rightarrow\; Q_p(u) > Q_q(u), \quad \forall k > 0, \ p, q = 1, \dots, m. \qquad (4.20)$$

Proof For the SAW aggregation method:


Fig. 4.7 Correspondence of dispositions of natural (ai2) and normalized values (ri2) for linear normalization methods. Normalization of the second attribute for 8 alternatives according to Table 4.2

Fig. 4.8 Correspondence of dispositions of natural (ai2) and normalized values (ri2) for linear normalization methods. Normalization of the second attribute for seven different alternatives according to Table 4.2

$$Q_i(u) = \sum_j \omega_j u_{ij} = k \sum_j \omega_j r_{ij} + b \sum_j \omega_j = k \cdot Q_i(r) + b,$$

since Σ_j ω_j = 1. Then, if Q_p(r) > Q_q(r) ⇒ Q_p(u) = k·Q_p(r) + b > k·Q_q(r) + b = Q_q(u).

For the TOPSIS aggregation method:

$$Q_i(u) = \frac{k \cdot S_i^-}{k \cdot S_i^+ + k \cdot S_i^-} = Q_i(r),$$

so if Q_p(r) > Q_q(r) ⇒ Q_p(u) > Q_q(u). □

Property 3 For a homogeneous aggregation function (TOPSIS, GRA), the performance indicators of alternatives are invariant under a linear transformation with fixed coefficients (u_ij = k·r_ij + b):

$$Q_p(r) = Q_p(u). \qquad (4.21)$$

Proof For the TOPSIS aggregation method:

$$Q_i = \frac{S_i^-}{S_i^+ + S_i^-},$$

where S_i^+ and S_i^- are the distances in one of the Lp-metrics from the ith object to the ideal and anti-ideal objects, respectively. Since S_i(k·r_ij + b) = k·S_i(r_ij),

$$Q_i(u) = \frac{k \cdot S_i^-}{k \cdot S_i^+ + k \cdot S_i^-} = Q_i(r),$$

and thus if Q_p(r) > Q_q(r) ⇒ Q_p(u) > Q_q(u). □
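Property 2 can be checked directly on the Table 4.2 data with a SAW sketch (the weights and the transform coefficients k = 3, b = 0.7 are illustrative assumptions, not from the text):

```python
# Check of Property 2: u = k*r + b rescales SAW scores affinely and
# therefore leaves the ranking of alternatives intact.
cols = [
    [6500, 5800, 4500, 5600, 4200, 5900, 4500, 6000],  # C1 (+), Table 4.2
    [85, 83, 71, 76, 74, 80, 71, 81],                  # C2 (+)
    [140, 145, 150, 135, 160, 163, 150, 178],          # C4 (+)
]
w = [0.5, 0.3, 0.2]  # illustrative weights, sum to 1

r = [[x / max(c) for x in c] for c in cols]   # Max normalization
u = [[3.0 * x + 0.7 for x in c] for c in r]   # linear transform, k = 3, b = 0.7

def saw(mat, weights):
    m = len(mat[0])
    return [sum(weights[j] * mat[j][i] for j in range(len(weights))) for i in range(m)]

def rank(q):
    return sorted(range(len(q)), key=lambda i: -q[i])

assert rank(saw(r, w)) == rank(saw(u, w))
```

Since Σω_j = 1, the transformed scores are exactly 3·Q_i(r) + 0.7, a monotone map of the original scores.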

4.3.3 Invariants of Numerical Characteristics of the Sample

Normalization based on a linear transformation produces a scale of normalized attribute values in which the first and second moments (mean and standard deviation) and the median are scaled with the same coefficients, while the third and fourth standardized moments (skewness and kurtosis) are invariant. The advantage is that such normalized values differ only in properties other than variability, facilitating, for example, shape comparisons. For all linear normalization methods, the basic statistics are transformed as follows:

(a) Average:

$$\bar{r}_j = k_j \cdot \bar{a}_j + b_j, \qquad (4.22)$$

(b) Standard deviation:

$$s_j(r_{ij}) = k_j \cdot s_j(a_{ij}), \qquad (4.23)$$


(c) Median:

$$\mathrm{med}(r_{ij}) = k_j \cdot \mathrm{med}(a_{ij}) + b_j, \qquad (4.24)$$

(d) Skewness — a scale invariant of normalization:

$$\mu_{3j}(r_{ij}) = \mu_{3j}(a_{ij}), \qquad (4.25)$$

(e) Kurtosis — a scale invariant of normalization:

$$\mu_{4j}(r_{ij}) = \mu_{4j}(a_{ij}), \qquad (4.26)$$

(f) Correlation matrix (Pearson's linear correlation coefficients — the matrix of pairwise correlations between attributes) — a scale invariant of normalization:

$$\mathrm{corr}(r_{ij}) = \mathrm{corr}(a_{ij}), \qquad (4.27)$$

where r_ij = k_j·a_ij + b_j; X = (x_ij), i = 1, ..., m, j = 1, ..., n; X_j^T = (x_1j, x_2j, ..., x_mj) is the jth column of the matrix X, and

$$\bar{x}_j = \frac{1}{m}\sum_{i=1}^{m} x_{ij}, \quad s_j(x_{ij}) = \mathrm{std}(x_{ij}) = \sqrt{m_{2j}}, \quad m_{kj}(x_{ij}) = \frac{1}{m}\sum_{i=1}^{m}\big(x_{ij} - \bar{x}_j\big)^k,$$

$$\mu_{3j} = \mathrm{Skew}(x_{ij}) = \frac{m_{3j}(X_j)}{m_{2j}(X_j)^{3/2}}, \qquad \mu_{4j} = \mathrm{Kurt}(x_{ij}) = \frac{m_{4j}(X_j)}{m_{2j}(X_j)^{2}},$$

$$c_{jk} = \mathrm{corr}(X_j, X_k) = \frac{\sum_{i=1}^{m}\big(x_{ij} - \bar{x}_j\big)\big(x_{ik} - \bar{x}_k\big)}{\sqrt{\sum_{i=1}^{m}\big(x_{ij} - \bar{x}_j\big)^2 \sum_{i=1}^{m}\big(x_{ik} - \bar{x}_k\big)^2}}, \quad j, k = 1, \dots, n,$$

which form the [n × n] correlation matrix.

As a "skewness coefficient" of a limited sample, we will also use the medcouple indicator MC = medcouple(x), which is defined as follows [15]:

$$MC = \underset{x_i \le Q_2 \le x_j}{\mathrm{med}} \; h\big(x_i, x_j\big), \qquad (4.28)$$

$$h\big(x_i, x_j\big) = \frac{\big(x_j - Q_2\big) - \big(Q_2 - x_i\big)}{x_j - x_i}, \qquad (4.29)$$

where Q_2 is the sample median. Properties of MC:

1. MC is location and scale invariant, i.e., MC(r_ij) = MC(a_ij).
2. If we invert a distribution, the medcouple changes sign as well: MC(−X) = −MC(X).
3. If X is symmetric, then MC(X) = 0.

Property 4 The sample skewness coefficient, medcouple, kurtosis, and the matrix of pairwise correlations between attributes are invariants under the linear transformation:

$$\mu_{3j}(r_{ij}) = \mu_{3j}(a_{ij}), \qquad (4.30)$$

$$MC_j(r_{ij}) = MC_j(a_{ij}), \qquad (4.31)$$

$$\mu_{4j}(r_{ij}) = \mu_{4j}(a_{ij}), \qquad (4.32)$$

$$\mathrm{corr}(r_{ij}) = \mathrm{corr}(a_{ij}). \qquad (4.33)$$

According to Property 4, asymmetry present in the original data cannot be eliminated by a linear transformation. The presence of asymmetry in certain criteria can give the contribution of a criterion priority in the performance indicator of the alternatives. A possible way to eliminate asymmetry is to use non-linear normalization methods; however, this violates the dispositions of the attribute values.
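The invariance (4.30), (4.32) is easy to verify numerically (a minimal sketch on the third attribute of Table 4.2; the transform coefficients are arbitrary):

```python
# Check of Property 4: sample skewness and kurtosis are unchanged by any
# linear transform r = k*a + b with k > 0.
def central_moment(x, k):
    m = sum(x) / len(x)
    return sum((v - m) ** k for v in x) / len(x)

def skew(x):
    return central_moment(x, 3) / central_moment(x, 2) ** 1.5

def kurt(x):
    return central_moment(x, 4) / central_moment(x, 2) ** 2

a = [667, 564, 478, 620, 448, 610, 478, 580]  # third attribute of Table 4.2
r = [0.01 * v - 3.5 for v in a]               # arbitrary linear transform, k = 0.01, b = -3.5

assert abs(skew(a) - skew(r)) < 1e-9
assert abs(kurt(a) - kurt(r)) < 1e-9
```

Whatever asymmetry the natural values of a criterion carry is therefore carried verbatim into every linearly normalized scale.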

4.4 Re-normalization

Let us define re-normalization as the repeated normalization of already normalized attribute values of alternatives using any of the available normalization methods:

$$a_{ij} \xrightarrow{\ \text{Norm1}(a)\ } r_{ij} \xrightarrow{\ \text{Norm2}(r)\ } v_{ij}, \quad \text{or} \quad v = \text{Norm2}\big(\text{Norm1}(a)\big). \qquad (4.34)$$

Re-normalization is used in multi-step normalization methods. For example, when evaluating the weights of criteria by the entropy method [16, 17], normalization is first carried out using the Max-Min method for profit and cost criteria, and then the Sum method is applied to normalize the intensities:

$$r_{ij} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}}, \quad j \in C^+, \qquad (4.35)$$

$$r_{ij} = \frac{a_j^{\max} - a_{ij}}{a_j^{\max} - a_j^{\min}}, \quad j \in C^-, \qquad (4.36)$$

$$v_{ij} = \frac{r_{ij}}{\sum_{i=1}^{m} r_{ij}}, \quad \forall i = 1, \dots, m, \ j = 1, \dots, n; \qquad \sum_{i=1}^{m} v_{ij} = 1. \qquad (4.37)$$
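The two-step normalization (4.35)–(4.37) can be sketched as a single helper (shown here on the fifth, cost, attribute of Table 4.2):

```python
# Sketch of the entropy-method pre-normalization (4.35)-(4.37):
# Max-Min per criterion type, then Sum over alternatives.
def entropy_prenorm(col, benefit=True):
    lo, hi = min(col), max(col)
    r = [(x - lo) / (hi - lo) if benefit else (hi - x) / (hi - lo) for x in col]
    s = sum(r)
    return [x / s for x in r]  # intensities, sum to 1 per criterion

c5 = [1750, 2680, 1056, 1230, 1480, 1650, 1056, 2065]  # C5 of Table 4.2, cost
v = entropy_prenorm(c5, benefit=False)
assert abs(sum(v) - 1.0) < 1e-12
```

The resulting intensities v_ij are what the entropy weighting method then feeds into the entropy formula.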

Another example of a multi-step normalization procedure is the dSum method ((4.6) in Table 4.1), which is implemented as a combination of the Max-Min and Sum normalization methods with a double inversion (Inv) of the normalized values:

$$r = \text{dSum}(a) = \text{Inv}\big(\text{Sum}\big(\text{Inv}\big(\text{Max-Min}(a)\big)\big)\big). \qquad (4.38)$$

In detail:
Step 1. u = Max-Min(a): the Max-Min method for all attributes,
Step 2. u = 1 − u: inversion for the benefit attributes only,
Step 3. v = Sum(u): the Sum method for all attributes,
Step 4. r = 1 − v: inversion for all attributes.
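For a single benefit attribute, the four steps above reduce to the closed form of Table 4.1; a minimal sketch checks the equality on the second attribute of Table 4.2:

```python
# The multi-step dSum (4.38) vs the closed form (4.6) of Table 4.1:
# r_ij = 1 - (a_max - a_ij) / sum_i(a_max - a_ij), for one benefit attribute.
def dsum_steps(col):
    lo, hi = min(col), max(col)
    u = [(x - lo) / (hi - lo) for x in col]  # Step 1: Max-Min
    u = [1 - x for x in u]                   # Step 2: inversion (benefit attribute)
    s = sum(u)
    v = [x / s for x in u]                   # Step 3: Sum
    return [1 - x for x in v]                # Step 4: inversion

def dsum_closed(col):
    hi = max(col)
    s = sum(hi - x for x in col)
    return [1 - (hi - x) / s for x in col]

c2 = [85, 83, 71, 76, 74, 80, 71, 81]        # second attribute of Table 4.2
assert all(abs(a - b) < 1e-12 for a, b in zip(dsum_steps(c2), dsum_closed(c2)))
```

The common factor (a_j^max − a_j^min) cancels between Steps 1–2 and Step 3, which is why the composition collapses to the closed form.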

4.4.1 Invariant Re-normalization Properties for Linear Methods

Property 5 Re-normalization using any of the linear normalization methods is independent of the result of a first normalization performed using the linear methods without displacement (Max, Sum, and Vec):


$$\text{Norm2}(r) = \text{Norm2}\big(\text{Norm1}(a)\big) = \text{Norm2}(a), \qquad (4.39)$$

where Norm1 ∈ {Max, Sum, Vec} and Norm2 ∈ {Max, Sum, Vec, Max-Min, dSum, Z-score}. For example:

1. Vec(Sum(a)) = Vec(a),
2. Max-Min(Max(a)) = Max-Min(a),
3. dSum(Sum(a)) = dSum(a).

Proof

$$r_{ij} = \text{Norm1}(a) = \frac{a_{ij}}{k_j}.$$

Formula (4.39) holds because the linear transformations of Norm2 are homogeneous of degree zero with respect to stretching-compression: f(k·x) = f(x). For example, if Norm2 = dSum:

$$\text{dSum}\big(r_{ij}\big) = \text{dSum}\!\left(\frac{a_{ij}}{k_j}\right) = 1 - \frac{\dfrac{a_j^{\max}}{k_j} - \dfrac{a_{ij}}{k_j}}{\sum_{i=1}^{m}\left(\dfrac{a_j^{\max}}{k_j} - \dfrac{a_{ij}}{k_j}\right)} = 1 - \frac{a_j^{\max} - a_{ij}}{\sum_{i=1}^{m}\big(a_j^{\max} - a_{ij}\big)} = \text{dSum}\big(a_{ij}\big). \qquad (4.40)$$

□

Property 6 Re-normalization using the Max-Min, dSum, or Z-score methods is independent of the result of a first normalization performed using any of the linear methods:

$$\text{Norm2}(r) = \text{Norm2}\big(\text{Norm1}(a)\big) = \text{Norm2}(a), \qquad (4.41)$$

where Norm1 ∈ {Max, Sum, Vec, Max-Min, dSum, Z-score} and Norm2 ∈ {Max-Min, dSum, Z-score}. For example:

1. Max-Min(Vec(a)) = Max-Min(a),
2. dSum(Sum(a)) = dSum(a),
3. Z(dSum(a)) = Z(a).


Proof

$$r_{ij} = \text{Norm1}(a) = \frac{a_{ij} - a_j^{*}}{k_j}.$$

Formula (4.41) holds because the linear transformations of Norm2 are invariant with respect to stretching-compression and shift: f(k·x + b) = f(x). For example, if Norm2 = Max-Min:

$$\text{Max-Min}\big(r_{ij}\big) = \text{Max-Min}\!\left(\frac{a_{ij} - a_j^{*}}{k_j}\right) = \frac{\dfrac{a_{ij} - a_j^{*}}{k_j} - \dfrac{a_j^{\min} - a_j^{*}}{k_j}}{\dfrac{a_j^{\max} - a_j^{*}}{k_j} - \dfrac{a_j^{\min} - a_j^{*}}{k_j}} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}} = \text{Max-Min}\big(a_{ij}\big). \qquad (4.42)$$

For Norm2 = Z-score, with m(·) and s(·) denoting the sample mean and standard deviation:

$$Z\big(r_{ij}\big) = \frac{r_{ij} - m\big(r_{ij}\big)}{s\big(r_{ij}\big)} = \frac{\dfrac{a_{ij} - a_j^{*}}{k_j} - \dfrac{m\big(a_{ij}\big) - a_j^{*}}{k_j}}{\dfrac{s\big(a_{ij}\big)}{k_j}} = \frac{a_{ij} - m\big(a_{ij}\big)}{s\big(a_{ij}\big)} = Z\big(a_{ij}\big). \qquad (4.43)$$

For Norm2 = dSum:

$$\text{dSum}\big(r_{ij}\big) = 1 - \frac{r_j^{\max} - r_{ij}}{\sum_{i=1}^{m}\big(r_j^{\max} - r_{ij}\big)} = 1 - \frac{\dfrac{a_j^{\max} - a_j^{*}}{k_j} - \dfrac{a_{ij} - a_j^{*}}{k_j}}{\sum_{i=1}^{m}\left(\dfrac{a_j^{\max} - a_j^{*}}{k_j} - \dfrac{a_{ij} - a_j^{*}}{k_j}\right)} = 1 - \frac{a_j^{\max} - a_{ij}}{\sum_{i=1}^{m}\big(a_j^{\max} - a_{ij}\big)} = \text{dSum}\big(a_{ij}\big). \qquad (4.44)$$

□

Note: The basic VIKOR method [18] uses the Max-Min normalization method. Therefore, according to Property 6 (4.41), VIKOR is not sensitive to the choice of the linear normalization method: Max-Min(Norm(a)) = Max-Min(a).
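Properties 5 and 6 are straightforward to confirm numerically (a minimal sketch on the first attribute of Table 4.2):

```python
# Numerical check of Properties 5 and 6: Max-Min or dSum applied after
# another linear normalization equals the same method applied directly.
def maxmin(col):
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

def vec(col):
    n = sum(x * x for x in col) ** 0.5
    return [x / n for x in col]

def dsum(col):
    hi = max(col)
    s = sum(hi - x for x in col)
    return [1 - (hi - x) / s for x in col]

c1 = [6500, 5800, 4500, 5600, 4200, 5900, 4500, 6000]  # first attribute of Table 4.2

def close(u, v):
    return all(abs(a - b) < 1e-12 for a, b in zip(u, v))

assert close(maxmin(vec(c1)), maxmin(c1))  # Max-Min(Vec(a)) = Max-Min(a)
assert close(dsum(vec(c1)), dsum(c1))      # dSum(Vec(a)) = dSum(a)
assert close(dsum(maxmin(c1)), dsum(c1))   # dSum(Max-Min(a)) = dSum(a)
```

This is also a quick way to see the VIKOR note in practice: any pipeline ending in Max-Min yields the same normalized matrix regardless of the preceding linear step.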

4.5 Meaningful Interpretation of Linear Scales

The main task of normalization is to reduce the natural values of attributes to dimensionless scales for the subsequent aggregation of the attributes into a performance indicator of the alternatives. In most problems the criteria (attributes) are assumed to be independent; otherwise it is necessary to harmonize the measurement scales and, as a consequence, the normalized scales. A natural question therefore arises: is it possible to use different normalizations for different criteria? For example, to apply the Max method to some attributes and the Vec method to others.

A feature of multivariate normalization is that the attribute values of the alternatives are normalized for each criterion separately, i.e., the shift coefficients a_j* and the stretching-compression coefficients k_j differ for each criterion. This is because the attributes of objects and the ranges of their values can be very different from each other. Therefore, each attribute has its own normalization scale, and the normalized values depend both on the measurement scale and on the range of the natural values of the attributes. On the other hand, the normalized values also depend on the normalization method, since the characteristic scales differ between methods. The share of the sum in the Sum normalization method and the share of the maximum in the Max normalization method are two different shares.

The meaningful interpretation of the normalized attribute values for the main linear normalization methods is as follows:

Max: the proportion of the attribute of the ith alternative relative to the highest value of the attribute, or the degree of approximation to the best value of 1;

Sum: the share of the attribute of the ith alternative in the total result (sum), or the contrast of the ith alternative by the jth criterion, with ∑rij = 1;

Vec: the proportion of the attribute of the ith alternative relative to the diameter of the m-dimensional rectangle constructed from the values of all alternatives, or the projective angle whose equilibrium value is 1/√m;

Max-Min: the proportion equal to the ratio of the deviation of the attribute of the ith alternative from the smallest value to the range of values of all alternatives by the jth criterion;

dSum: the inverted contrast of the maximizing attribute values of the ith alternative by the jth criterion;

Z-score: the standardized deviation of the attribute of the ith alternative from the average of all alternatives, expressed in units of the standard deviation;

mIQR: the standardized deviation of the attribute of the ith alternative from the median value of all alternatives, expressed in units of the interquartile range (a robust scale).

Thus, the normalized attribute values of the alternatives represent the share of the attribute in their (different) scales and depend on the normalization method. These shares may in some cases differ significantly, and it is possible that the contribution of one attribute will dominate when the particular characteristics are aggregated into


an indicator of the effectiveness of an alternative. Therefore, one reason to apply the same normalization method to all attributes is to interpret the normalized values of the different attributes in the same way, so that values of the same order are subsequently aggregated, rather than fractions of different quantities. The main normalization methods have a quite definite agreement with the geometry of the value space, or the multidimensional cloud of initial data. However, the measurement scales and the geometry of the value space are not coordinated in any way. Compression-stretching and shifting of the space of individual dimensions is obviously not prohibited, since the attributes are independent. But in this case it is necessary to justify the transformations and to harmonize the normalized values of the various attributes with each other, in order to avoid unpredictable results and consequences. For non-linear normalization methods, as a rule, the values of all attributes are transformed into the interval [0, 1], with the scale of normalized values interpreted as a scale of preferences or a scale of desirability.

4.6 Some Features of Individual Linear Normalization Methods

4.6.1 Max Method of Normalization

The Max normalization method equalizes the maximum values of all attributes (= 1). For the Max method, only the lower boundary of the domains of the various attributes is shifted. As a result, when choosing the best solution (maximizing the integral index), the shift of the lower levels has little effect on the result for the alternative of rank 1. However, as the examples in Sect. 4.5 show, when alternatives compete, a rank inversion is possible due to a displacement of the lower boundary of the domains as well. The Max normalization method has the most understandable interpretation of normalized values: the fraction of the best value for each attribute.

4.6.2 The Displacement of Normalized Values in Domains for the Sum and Vec Methods

The Sum and Vec normalization methods should not be used for multivariate normalization, or should be used only after additional bias analysis, because these methods produce a potentially large displacement of the domains of different attributes relative to each other. Sum and Vec are good one-dimensional normalization methods, with interpretations in terms of contribution intensity and projective angles.

4.6.3 Loss of Contribution to the Performance Indicator in the Max-Min Method

The Max-Min normalization method has no bias in the domains of the normalized values of the various attributes (isotropic normalization), but the presence of zero values excludes its use with some attribute aggregation methods (WPM, WASPAS, COPRAS). There is no domain displacement for the Max-Min normalization method; however, at least one of the values of each attribute is 0. This means that, for some alternative, there is no contribution to the integral indicator from this attribute, which leads to the loss of the contributions of "weak" attributes to the overall performance of the alternative. The problem of handling zero values is solved in Chap. 7 using the IZ transformation of normalized values.

4.6.4 dSum Method of Normalization

For the dSum normalization method, the "upper" values of all attributes are the same and equal to 1, as for the Max normalization method. However, the difference in "lower" values across attributes is not as significant as for Max. In this context, dSum is more efficient than the Max method, but the interpretation of the dSum method is not entirely clear.

4.6.5 Z-score Method of Normalization

For the Z-score normalization method, the domains are aligned on average and the variances of the various attributes are the same. Z-score normalization and standard Z-scores should not be confused. The Z-score is sensitive to outliers and atypical attribute values. In multi-criteria decision problems, the population mean and the population standard deviation are unknown; the standard score is calculated using the sample mean and sample standard deviation as estimates of the population values. However, when the distribution law of the attribute is unknown, and when only a small number of alternatives is available for observation, the information content of Z-scores is relative. Comparing and aggregating the Z-scores of two or more alternatives should be done only with careful additional justification of such an operation.

In the presence of multidirectional criteria, the inversion of the goal must be performed on the natural values of the attributes, i.e., before the Z-score conversion; otherwise, the average values of the benefit and cost attributes will be biased. This problem is solved in Chap. 5 using a universal data inversion algorithm. The problem of handling negative values for the Z-score is solved in Chap. 7 using a transformation of the normalized values.

4.6.6

mIQR Method of Normalization

The mIQR normalization method is a robust normalization method that is weakly sensitive to outliers, which makes it preferable to Z-score for decision-making problems (small samples with an unknown feature distribution law). The domains of normalized feature values are shifted to the median of the attribute, and “scores” are expressed in units of the IQR interquartile range. IQR does not depend on the “normality” of the distribution, the presence/absence of asymmetry. However, IQR has its own serious drawback—if the distribution of a feature has a significant “tail,” then after normalization using the interquartile interval, it will add “significance” to this feature in comparison with the rest. In the presence of multidirectional criteria, the inversion of the goal must be performed for the natural values of the attributes, i.e. before the mIQR transformation. Otherwise, the median values of the benefit and cost attributes will be biased. This problem is solved in Chap. 5 using a universal data inversion ReS-algorithm.
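The robustness argument can be sketched numerically (the data and the simple quartile convention below are illustrative assumptions, not the book's code): a gross outlier collapses the Z-scores of the typical values, while the mIQR scores keep their spread:

```python
from statistics import mean, stdev

# mIQR normalization: center on the median, scale by the interquartile range.
# The quartile estimates below are a deliberately simple convention.
def miqr(col):
    s = sorted(col)
    n = len(s)
    med = (s[(n - 1) // 2] + s[n // 2]) / 2.0
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    return [(x - med) / (q3 - q1) for x in col]

data = [10.0, 11.0, 12.0, 13.0, 1000.0]  # four typical values and one outlier

r_iqr = miqr(data)
z = [(x - mean(data)) / stdev(data) for x in data]

# The median maps to 0 and the typical values stay well separated ...
assert r_iqr[2] == 0.0
# ... whereas the outlier squeezes the Z-scores of the typical values together.
assert (r_iqr[3] - r_iqr[0]) > 100 * (z[3] - z[0])
```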

4.6.7

mMAD-Method of Normalization

The mMAD normalization method is a fusion of the Z-score and mIQR methods. The domains of the normalized feature values are shifted to the median of the attribute, and the "points" are expressed in units of the median absolute deviation of the feature values from the median (MAD). Using MAD instead of IQR smooths out the effect of a significant "tail" in the distribution (if any), unlike mIQR, and reduces the effect of an outlier when centering on the median, unlike Z-score. As with the Z-score and mIQR methods, in the presence of multidirectional criteria, goal inversion must be performed on the natural attribute values, i.e., before the mMAD transformation.
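The MAD scale can be sketched in the same way (hypothetical data; the centering uses the standard library median):

```python
from statistics import median

# mMAD normalization: center on the median, scale by the median absolute
# deviation (MAD) of the values from the median.
def mmad(col):
    med = median(col)
    mad = median(abs(x - med) for x in col)
    return [(x - med) / mad for x in col]

data = [10.0, 11.0, 12.0, 13.0, 1000.0]
r = mmad(data)

# The median value maps to 0 and the typical values keep unit-MAD spacing;
# the outlier influences neither the center nor the scale.
assert r[:4] == [-2.0, -1.0, 0.0, 1.0]
```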

4.7

Conclusions

Although the linear normalization methods are linear transformations of one another, under multidimensional normalization the measurement scales of the various features differ. As a result, even within the same method the normalized values differ, and this subsequently affects the result of data processing. This must be taken into account when choosing a method for normalizing multidimensional data. A good basis for choosing a multivariate normalization method is a meaningful interpretation of the linear normalized scales. Another selection criterion relies on the use of the invariant properties of linear


normalization methods, which often eliminates simple problems and avoids obvious errors when solving MCDM problems.

References 1. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications a state-of-the-art survey. Springer. 2. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342. 3. Aytekin, A. (2021). Comparative analysis of normalization techniques in the context of MCDM problems. Decision Making: Applications in Management and Engineering, 4(2), 1–25. 4. Chatterjee, P., & Chakraborty, S. (2014). Investigating the effect of normalization norms in flexible manufacturing system selection using multi-criteria decision-making method. Journal of Engineering Science and Technology Review, 7(3), 141–150. 5. Çelen, A. (2014). Comparative analysis of normalization procedures in TOPSIS method: With an application to Turkish deposit banking market. Informatica, 25(2), 185–208. 6. Vafaei, N., Ribeiro, R. A., & Camarinha-Matos, L. M. (2018). Data normalization techniques in decision making: Case study with TOPSIS method. International Journal of Information and Decision Sciences, 10(1), 19–38. 7. Zeng, Q.-L., Li, D.-D., & Yang, Y.-B. (2013). VIKOR method with enhanced accuracy for multiple criteria decision making in healthcare management. Journal of Medical Systems, 37, 1–9. 8. Singh, D., & Singh, B. (2020). Investigating the impact of data normalization on classification performance. Applied Soft Computing, 97, 105524. 9. Pandey, A., & Jain, A. (2017). Comparative analysis of KNN algorithm using various normalization techniques. International Journal of Communication Networks and Information Security, 9, 36–42. 10. Alshdaifat, E., Alshdaifat, D., Alsarhan, A., Hussein, F., & El-Salhi, S. M. F. S. (2021). The effect of preprocessing techniques, applied to numeric features, on classification algorithms’ performance. Data, 6(11), 1325–1356. 11. Polatgil, M. (2022). 
Investigation of the effect of normalization methods on ANFIS success: Forestfire and diabets datasets. International Journal of Information Technology and Computer Science, 14, 1–8. 12. Mukhametzyanov, I. Z., & Pamučar, D. (2018). Sensitivity analysis in MCDM problems: A statistical approach. Decision Making: Applications in Management and Engineering, 1(2), 51–80. https://doi.org/10.31181/dmame1802050m 13. Stanujkić, D., Đorđević, B., & Đorđević, M. (2013). Comparative analysis of some prominent MCDM methods: A case of ranking Serbian banks. Serbian Journal of Management, 8(2), 213–241. 14. Mukhametzyanov, I. Z. (2023). Elimination of the domain’s displacement of the normalized values in MCDM tasks: The IZ-method. International Journal of Information Technology and Decision Making. https://doi.org/10.1142/S0219622023500037


15. Brys, G., Hubert, M., & Struyf, A. (2004). A robust measure of skewness. Journal of Computational and Graphical Statistics, 13(4), 996–1017. https://doi.org/10.1198/106186004X12632 16. Wu, J., Sun, J., Liang, L., & Zha, Y. (2011). Determination of weights for ultimate cross efficiency using Shannon entropy. Expert Systems with Applications, 38(5), 5162–5165. 17. Mukhametzyanov, I. Z. (2021). Specific character of objective methods for determining weights of criteria in MCDM problems: Entropy, CRITIC, SD. Decision Making: Applications in Management and Engineering, 4(2), 76–105. https://www.dmame.rabek.org/index.php/dmame/article/view/194/75 18. Opricovic, S. (1998). Multicriteria optimization of civil engineering systems. PhD Thesis, Faculty of Civil Engineering, Belgrade.

Chapter 5

Inversion of Normalized Values: ReS-Algorithm

Abstract A review of modern methods for inverting attribute values that have opposite goals in problems of multi-criteria decision-making and multidimensional classification is presented. A detailed description of the problems of attribute-value inversion is given. A universal method for inverting normalized values based on the Reverse Sorting algorithm (ReS-algorithm) is presented. The ReS-algorithm preserves the dispositions of the natural and normalized values of the attributes of alternatives and eliminates the shift of the domain of normalized values. The ReS-algorithm demonstrates the absence of influence of the inverse transformation on the ranking of alternatives, in contrast to existing inversion methods.

Keywords Multivariate normalization · Optimization goal inversion · Reverse Sorting (ReS) algorithm

5.1

Optimization Goal Inversion

The standard choice problem on a finite set of alternatives is to determine the best alternative according to some criterion. As noted in Chap. 2, the target value of an attribute in the context of selection tasks can be of three types: 1. Larger-the-better (LTB), 2. Smaller-the-better (STB), 3. Nominal-the-best (NTB). Accordingly, the criteria are referred to as benefit criteria, cost criteria, and target nominal criteria. The choice of the direction of maximizing or minimizing the performance indicator does not affect the ranking result. A rational choice is determined by the ratio of the number of STB and LTB criteria, following the principle of reducing the number of algebraic data transformations. Normalization of target nominal criteria is performed in accordance with the chosen direction: for maximization, the target nominal value is greater than the others; for minimization, it is less (Chap. 10).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_5


Coordination of the direction of the criteria is achieved by inverting the goal from a minimum to a maximum, or vice versa. Goal inversion is a transformation of attribute values, and it is advisable to perform it at the normalization stage. This is because the inversion of values can be performed using different transformations, with different results. In some cases, as shown below, the basic requirements for normalizing multidimensional data may be violated:

P.1 preservation of the dispositions of natural and normalized values;
P.2 the principle of equality of the contributions of the various criteria to the performance indicator of the alternatives.

If the MCDM model uses aggregation methods based on the feature-additivity hypothesis, the result of applying different inversions affects the calculated performance indicator of the alternatives and the final rating. This difference is due to the difference in the normalized values after applying different inversions. If the MCDM model uses aggregation methods based on the distance to the critical link, then the inversion of the goal is provided by the inversion of the ideal to the anti-ideal. In the particular case when the largest and smallest attribute values are chosen as the ideal and anti-ideal (for example, in the TOPSIS or GRA methods), it is necessary to change the maximum value to the minimum in the aggregation algorithm, and vice versa. The dependence of the rating on the results of linear normalization for MCDM models based on the distance to the critical link is determined by the difference in the ranges of the domains of normalized values of the individual attributes and does not depend on the location of the domain on the segment [0, 1].

5.2

Permissible Pairs of Transformations to the Benefit and Cost Criteria

Numerous publications and reviews on multivariate normalization in decision-making problems use the approach of selecting admissible pairs of transformations for the benefit and cost criteria in order to agree on the direction of the various criteria [1–3]. In some cases, the admissible pairs do not satisfy the basic requirements of normalizing multidimensional data. Let us assume that some of the criteria are benefit criteria and the others are cost criteria, and that the best alternative is chosen in the direction of maximizing the performance indicator of the alternatives. The problem of selecting admissible pairs of normalization methods for the benefit and cost criteria can be formulated as follows. Let, in accordance with some principle (in the absence of criteria for choosing a normalization method), a certain normalization method be chosen to normalize the benefit criteria, for example, Max. It is required to define a normalization method for the cost criteria that is consistent with the normalization method for the benefit criteria in accordance with the basic requirements P.1 and P.2 of multivariate data normalization. As will be shown below, this problem has a general solution regardless of the chosen normalization method.


Table 5.1 Normalization methods for the cost criteria consistent with the basic linear normalization methods for the benefit criteria

1. Max
   Benefit criteria (r): r_ij = a_ij / a_j^max
   Cost criteria (r*):
   iMax1: r*_ij = 1 − a_ij / a_j^max = 1 − r_ij   (5.1a)
   iMax2: r*_ij = a_j^min / a_ij   (5.1b)
   iMax3 (Markovič [4]): r*_ij = 1 − (a_ij − a_j^min) / a_j^max = 1 − r_ij + r_j^min   (5.1c)

2. Sum
   Benefit criteria: r_ij = a_ij / Σ_{i=1}^{m} a_ij
   iSum: r*_ij = (1/a_ij) / Σ_{i=1}^{m} (1/a_ij)   (5.2)

3. Vec
   Benefit criteria: r_ij = a_ij / (Σ_{i=1}^{m} a_ij^2)^{1/2}
   iVec: r*_ij = (1/a_ij) / (Σ_{i=1}^{m} (1/a_ij)^2)^{1/2}   (5.3)

4. Max-Min
   Benefit criteria: r_ij = (a_ij − a_j^min) / (a_j^max − a_j^min)
   iMax-Min: r*_ij = (a_j^max − a_ij) / (a_j^max − a_j^min) = 1 − r_ij   (5.4)

5. dSum [5]
   Benefit criteria: r_ij = 1 − (a_j^max − a_ij) / Σ_{i=1}^{m} (a_j^max − a_ij)
   i.dSum: r*_ij = 1 − (a_ij − a_j^min) / Σ_{i=1}^{m} (a_ij − a_j^min)   (5.5)

6. Z-score
   Benefit criteria: r_ij = (a_ij − ā_j) / s_j
   iZ: r*_ij = −r_ij   (5.6)

Table 5.1 presents methods for normalizing the cost criteria that are consistent with the methods for normalizing the benefit criteria, in accordance with the review [3]. Noteworthy are the normalization-inversion methods presented in [4, 5]. All the presented normalizations for the cost criteria invert the values using two main transformations, C − a and C/a, in which the constant C is defined by relations (5.1)–(5.6) in Table 5.1. Formulas (5.1), (5.4), and (5.6) in Table 5.1 make it possible to transform the cost criteria using the formulas for normalizing the benefit criteria: first normalize all criteria using the benefit-criteria formula, and then perform an inverse transformation of the form C − r or C/r for the cost criteria. Inversions transform the data in the following way: smaller values become larger and, vice versa, larger values become smaller. Strictly monotonic functions are used for the inversion transformation in order to preserve the ordering of the values. This means that for an ordered set of values, for example, x1 < x2 < x3 < x4, the corresponding inverted values are also ordered, but in reverse order: y1 > y2 > y3 > y4. This makes it possible to transform cost attributes into benefit attributes (and vice versa) and to use the direction of maximization (minimization) for all attributes when looking for the best solution. Given the relativity of the direction and the independence of the solution from a change in the direction of optimization, the need for such a transformation is determined by the ratio between the number of benefit and cost attributes in a particular problem. For example, if the task contains nine cost criteria and one benefit criterion [6], then it is advisable to apply the inversion to the benefit criterion and to search for the optimal solution by the criterion of minimizing the integral indicator.
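The two-step route just described (benefit formula first, then C − r) can be checked numerically. The sketch below, on a hypothetical cost column, verifies that for Max-Min the two-step inversion 1 − r reproduces the direct cost formula (5.4):

```python
# Max-Min normalization with the benefit-criteria formula.
def max_min(col):
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col]

cost = [4.0, 9.0, 6.0, 7.5]  # hypothetical cost attribute

# Route 1: normalize as a benefit criterion, then invert with C - r (C = 1).
two_step = [1.0 - r for r in max_min(cost)]

# Route 2: formula (5.4) applied directly to the cost attribute.
lo, hi = min(cost), max(cost)
direct = [(hi - x) / (hi - lo) for x in cost]

assert all(abs(a - b) < 1e-12 for a, b in zip(two_step, direct))
```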


An inversion of the form C/r (or C/a) distorts the relative dispositions between the attribute values before and after the inversion. Therefore,

(r_ij − r_kj) / (r_j^max − r_j^min) ≠ (a_ij − a_kj) / (a_j^max − a_j^min), i, k = 1, …, m, ∀ j.   (5.7)

For example, for the ordering of the x and y values adopted above, it follows that:

(y1 − y2) / (y1 − y4) = (1/x1 − 1/x2) / (1/x1 − 1/x4) = x4(x2 − x1) / (x2(x4 − x1)) ≠ (x1 − x2) / (x1 − x4).

The disposition of the attribute values after inversion does not match the original disposition of the values, which amounts to a distortion of the data. As a consequence, the iMax2, iSum, and iVec inversions preserve the ordering but do not preserve the value dispositions.
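The distortion in (5.7) is easy to verify numerically; the sketch below uses the illustrative values x = (1, 2, 3, 4) and the inversion y = 1/x:

```python
x = [1.0, 2.0, 3.0, 4.0]
y = [1.0 / v for v in x]  # a C/a-type inversion with C = 1

# Ordering is reversed, as any monotone inversion requires ...
assert y[0] > y[1] > y[2] > y[3]

# ... but relative dispositions change: compare the normalized differences.
lhs = (y[0] - y[1]) / (y[0] - y[3])  # ratio after the inversion
rhs = (x[0] - x[1]) / (x[0] - x[3])  # ratio before the inversion
closed = x[3] * (x[1] - x[0]) / (x[1] * (x[3] - x[0]))  # the worked-out form

assert abs(lhs - closed) < 1e-12
assert abs(lhs - rhs) > 0.3  # dispositions are not preserved
```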

5.3

Overview of Inverse Transforms and Compliance with the Multidimensional Data Normalization Requirements

5.3.1

Max Method of Normalization

The inversion formula (5.1a in Table 5.1) results in a large shift of the range of normalized values compared to the normalization of the benefit attributes. The domain of normalized values for the benefit criteria, [a_min/a_max, 1], and the domain of normalized values for the cost criteria, [0, 1 − a_min/a_max], are mutually displaced within [0, 1] (Fig. 5.1). The data compression is the same. The conversion of cost criteria into benefit criteria has central symmetry ("magenta" lines are shown for four values). The dispositions of the natural and normalized values coincide (the visualization is performed using the technique of two dependent scales). Because of the significant domain displacement during inversion, the transformation (5.1a in Table 5.1) is not recommended. The inversion formula (5.1b in Table 5.1) does not change the domains of the normalized values of the cost criteria; however, it leads to some violation of the dispositions of the normalized values due to the non-linearity of the transformation (Fig. 5.2). The ranges of the domains are the same. The conversion of cost criteria into benefit criteria does not have central symmetry ("magenta" lines are shown for four values) because a non-linear transformation is applied. This leads to some violation of the dispositions of the normalized values compared to the natural values. The use of the formula


Fig. 5.1 Inversion iMax1 = 1–r (5.1a in Table 5.1) for the Max normalization method. Initial data according to Table 2.1, third attribute

Fig. 5.2 Inversion iMax2 = rmin/r (5.1b in Table 5.1) for the Max normalization method. Initial data according to Table 2.1, third attribute

is acceptable, provided that the rating is not very sensitive to the normalization method. The inversion formula (5.1c in Table 5.1) for the Max normalization method complies with the basic multivariate data normalization requirements P.1 and P.2 (Fig. 5.3). There is no domain offset. Data compression is the same. The conversion of cost criteria to benefit criteria has central symmetry (“magenta” lines are shown for four


Fig. 5.3 Inversion iMax3 = Markovič (5.1c in Table 5.1) for the Max normalization method. Initial data according to Table 2.1, third attribute

values). The Markovič transform is the best of the three variants of inversion corresponding to the Max normalization method.

5.3.2

Sum Method of Normalization

The iSum inversion formula (5.2 in Table 5.1) changes the domains of the normalized values of the cost criteria and data compression, compared to the Sum normalization method, which can lead to an increase or decrease in the influence of the criterion’s contribution to the performance indicator of alternatives (Fig. 5.4). The conversion of cost criteria to benefit criteria does not have central symmetry (“magenta” lines are shown for four values) because a non-linear transformation is applied. This leads to a violation of the dispositions of normalized values compared to natural values. The use of formula (5.2 in Table 5.1) is not recommended.

5.3.3

Vec Method of Normalization

The iVec inversion formula (5.3 in Table 5.1) changes the domains of the normalized values of the cost criteria and data compression, compared to the Vec normalization method, which can lead to an increase or decrease in the influence of the criterion’s contribution to the performance indicator of the alternatives (Fig. 5.5). The conversion of cost criteria to benefit criteria does not have central symmetry (“magenta” lines are shown for four values) because a non-linear transformation is


Fig. 5.4 Inversion iSum (5.2 in Table 5.1) for the Sum normalization method. Initial data according to Table 2.1, third attribute

Fig. 5.5 Inversion iVec (5.3 in Table 5.1) for the Vec normalization method. Initial data according to Table 2.1, third attribute

applied. This leads to a violation of the dispositions of normalized values compared to natural values. The use of formula (5.3 in Table 5.1) is not recommended.

5.3.4

Max-Min Method of Normalization

The inversion formula (5.4 in Table 5.1) for the Max-Min normalization method meets the basic requirements of P.1 and P.2 multivariate data normalization (Fig. 5.6). There is no domain displacement. Data compression is the same. The conversion of cost criteria to benefit criteria has central symmetry (“magenta” lines are shown for four values).

5.3.5

dSum Method of Normalization

The inversion formula (5.5 in Table 5.1) results in a slight bias and compression of the normalized range compared to the normalization of the benefit attributes (Fig. 5.7). While keeping the best value of the attribute equal to 1, the values of the second, third, and subsequent ones change, which may lead to a change in the rating of alternatives. The conversion of cost criteria to benefit criteria has central symmetry (“magenta” lines are shown for four values). The dispositions of natural and normalized values are the same. The use of the formula is acceptable, provided that the rating is not very sensitive to the normalization method.

Fig. 5.6 Inversion iMax-Min (5.4 in Table 5.1) for the Max-Min normalization method. Initial data according to Table 2.1, third attribute


Fig. 5.7 Inversion idSum (5.5 in Table 5.1) for the dSum normalization method. Initial data according to Table 2.1, third attribute

Fig. 5.8 Inversion iZ (5.6 in Table 5.1) for the Z-score normalization method. Initial data according to Table 2.1, third attribute

5.3.6

Z-score Method of Normalization

The inversion formula (5.6 in Table 5.1) results in a shift in the normalized range compared to the normalization of the benefit attributes (Fig. 5.8). Data compression is the same. The transformation of cost criteria into benefit criteria has central symmetry (“magenta” lines are shown for four values). The dispositions of natural and normalized values are the same. The application of the


formula is acceptable, provided that the lower Z-values do not differ greatly in magnitude from the upper Z-values. This means that the attribute values are symmetric about the mean, which is not guaranteed for a sample of limited size, even in the case of a symmetric distribution law.

5.4

Universal Goal Inversion Algorithm: ReS-Algorithm

5.4.1

Reverse Sorting Algorithm

Based on the inversion methods presented in the previous section, it follows that in all cases the inversion transformation assigns to the original set of values another set of values sorted in reverse order. A summary illustration is shown in Fig. 5.9. In most cases, there is a displacement of the domains of the transformed values (iMax1, iMax2, iSum, iVec, idSum), a disposition violation (iMax2, iSum, iVec, idSum), and data compression (iSum, iVec, idSum). These shortcomings are eliminated as follows:

Step 1. Order the initial dataset X = {x_i}:

x_1 ≤ x_2 ≤ x_3 ≤ … ≤ x_{m−1} ≤ x_m.   (5.8)

Step 2. Associate the ordered set (5.8) with the set Y = {y_i} ordered in reverse order:

y_1 ≥ y_2 ≥ y_3 ≥ … ≥ y_{m−1} ≥ y_m,   (5.9)

whose element values are defined as follows:

y_1 = x_m, y_2 = x_{m−1}, …, y_{m−1} = x_2, y_m = x_1.   (5.10)

Fig. 5.9 Domain displacement and data compression for different pairs of normalization method–inversion method. Initial data according to Table 2.1, third attribute

As applied to the problem of normalizing the attributes of alternatives, the alternative with the smallest attribute value is assigned the largest value from the range of normalized values. The second alternative in the sorted list is assigned the next-highest value, and so on. A suitable term for the described procedure is the Reverse Sorting algorithm (ReS-algorithm) [7, 8]. When the data are inverted, smaller values become larger and, vice versa, larger values become smaller. A peculiarity of the ReS-algorithm is that, before the value-redefinition procedure, the original numbering of the alternatives must be saved, and after sorting in ascending order and redefining the values, the original numbering must be restored. As a result, all values of a cost criterion are converted into values of a benefit criterion, or vice versa.
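A minimal sketch of the procedure just described (save the original numbering, sort ascending, reassign the values in reverse rank order, restore the numbering):

```python
def reverse_sort(values):
    # remember the original positions of the alternatives, sorted by value
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranked = [values[i] for i in order]  # x_1 <= x_2 <= ... <= x_m
    out = [0.0] * len(values)
    # the k-th smallest alternative receives the k-th largest value, per (5.10)
    for k, i in enumerate(order):
        out[i] = ranked[len(values) - 1 - k]
    return out  # original numbering restored

r = [0.2, 0.9, 0.5, 0.7]
# smallest receives largest and so on; the ordering is exactly reversed
assert reverse_sort(r) == [0.9, 0.2, 0.7, 0.5]
```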

5.4.2

ReS-Algorithm

In order for the inversion transformation to satisfy the basic requirements of normalizing multidimensional data, it is necessary to preserve the dispositions of the data and to ensure the absence of domain bias. The first is achieved using a linear transformation; the second, by an appropriate domain shift. Therefore, to invert the normalized values, we use reflection relative to zero with an offset: C − r. Ordering and sorting would require data indexing; in fact, this is not needed, since the ordering and the dispositions between the data are preserved under a linear transformation.

Fig. 5.10 Graphical illustration of the Reverse Sorting algorithm


Fig. 5.11 Step by step illustration of the Reverse Sorting algorithm

The scheme of the algorithm is shown in Fig. 5.10. According to the scheme, it is necessary to perform an inversion transformation that has central symmetry and does not change the position of the domain of normalized values. Therefore, after the −r inversion transformation, an offset transformation must be performed. It is easy to see that C = r_j^max + r_j^min. This equalizes the lower and upper domain boundaries after inversion with those before inversion (Fig. 5.11):

max_i v_ij = max_i r_ij  &  min_i v_ij = min_i r_ij.   (5.11)

Finally, the resulting formula of the ReS-algorithm is:

v_ij = −r_ij + r_j^max + r_j^min, ∀ j ∈ i.C,   (5.12)

where i.C denotes the set of inverted criteria. In the absence of objective quantitative criteria for evaluating the effectiveness of inversion methods, the criterion of effectiveness should be the absence of additional priorities (after transformation) among the various criteria when forming the rating of alternatives. It is precisely the absence of such "side" effects that makes the ReS-algorithm a universal procedure for inverting and harmonizing the goals of a group of criteria. Figure 5.12 presents a comparative illustration of the inverse transformations according to Table 5.1 and of inversions using the ReS-algorithm.
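Formula (5.12) can be sketched directly, and its two claimed properties (invariant domain, preserved dispositions) checked on an illustrative column of normalized values:

```python
# ReS inversion of one normalized column: v = -r + r_max + r_min, per (5.12).
def res(col):
    c = max(col) + min(col)
    return [c - r for r in col]

r = [0.35, 0.80, 0.50, 0.65]  # hypothetical normalized values
v = res(r)

# The domain of normalized values is unchanged by the inversion ...
assert abs(max(v) - max(r)) < 1e-12 and abs(min(v) - min(r)) < 1e-12

# ... and all pairwise dispositions are preserved (differences only flip sign).
m = len(r)
assert all(abs((v[i] - v[k]) + (r[i] - r[k])) < 1e-12
           for i in range(m) for k in range(m))
```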


Fig. 5.12 Comparative illustration of inverse transformations according to Table 5.1 and inversions using the ReS-algorithm. Initial data according to Table 2.1, third attribute

The graphical results demonstrate the absence of domain displacement in the inversion based on the ReS-algorithm. The "side" effects of the main inverse transformations according to Table 5.1 are as follows: the iMax2 (5.1b in Table 5.1), iSum, and iVec inversion methods change the dispositions of the natural values; iSum, iVec, idSum, and iZ shift the domain of the normalized values. The inverse transformation 1 − r is not recommended for use in conjunction with any of the normalization methods (except the Max-Min method) because of the strong shift of the values.

5.4.3

Basic Properties of the ReS-Algorithm

1°. The ReS-algorithm is a linear transformation with displacement. The linearity of the transformation ensures the preservation of the dispositions of the inverted values. A shift by r_j^max + r_j^min, which is specific to each normalization method, provides an invariant position of the domain before and after the inversion of the values.

2°. The ReS-algorithm inverts attributes of any type (benefit criteria, cost criteria, and target nominal criteria) while maintaining property 1°.

3°. The ReS-algorithm does not depend on the choice of normalization method and makes an ideal pair for any normalization method, including non-linear ones, on a set of benefit and cost attributes.

Notes:

(1) The inversion of values can also be performed on the natural values of the features using a formula analogous to (5.12):

u_ij = −a_ij + a_j^max + a_j^min, ∀ j ∈ i.C,   (5.13)

and then normalization can be applied by any method. The sequential transformation then looks like:


r = Norm(ReS(a)).   (5.14)

Such a sequential conversion is not equivalent to the conversion

r = ReS(Norm(a)),   (5.15)

because the ReS-algorithm is a linear transformation with a displacement. For the case r = Norm(ReS(a)), the dispositions of the natural and normalized values are preserved; however, there is a slight shift of the domain and a change in the range of the cost attributes relative to the normalized values obtained by the Sum, Vec, dSum, and Z-score methods (except for Max and Max-Min). In particular, if the Norm() transformation is linear, then according to the invariant properties (4.39) and (4.41), the result is preserved only for the Max and Max-Min normalization methods:

ReS(Max(a)) = Max(ReS(a)),   (5.16)

ReS(Max-Min(a)) = Max-Min(ReS(a)).   (5.17)

(2) The inverse transformation of the normalized values iMax-Min using Eq. (5.4 in Table 5.1) is the same as the transformation (5.12) based on the ReS-algorithm. Indeed, for the Max-Min normalization method r_j^max = 1, r_j^min = 0. Then

v_ij = 1 − r_ij = 1 − (a_ij − a_j^min) / (a_j^max − a_j^min) = (a_j^max − a_ij) / (a_j^max − a_j^min).   (5.18)

(3) The inverse transformation of the normalized values iMax3 using the Markovič Eq. (5.1c in Table 5.1) is the same as the transformation (5.12) for the Max normalization method. Indeed, for the Max normalization method in Eq. (5.1c in Table 5.1), r_j^max = 1. Then

v_ij = 1 − (a_ij − a_j^min) / a_j^max = 1 − r_ij + r_j^min.   (5.19)

(4) Data inversion should not be conflated with the transformation of cost attributes into benefit attributes. Inversion transforms the data in the following way: smaller values become larger and, vice versa, larger values become smaller. Coordination of the direction of the criteria is achieved by inverting the goal from a minimum to a maximum, or vice versa. The choice of the direction of maximizing or minimizing the performance indicator does not affect the ranking result. Given the relativity of the direction and the independence of the solution from a change in the direction of optimization, the rational choice of which data to invert (for the cost or for the benefit criteria) is determined by the ratio between the number of benefit and cost attributes in the particular problem.


The ReS-algorithm is a universal method for inverting attribute values of all types; it does not depend on the normalization method and makes it possible to harmonize the direction of optimization. Such a transformation does not affect the ranking of alternatives.

5.5

Conclusions

The ReS-algorithm is a simple and effective goal-inversion method for multi-criteria problems, which makes it possible to coordinate the direction of optimization for multidirectional criteria. The transformation preserves the dispositions of the natural and normalized attribute values and preserves the position of the domain of the normalized values before and after the inversion. The simplicity of the ReS-algorithm formula, together with its universality and efficiency, makes the existing inversion methods unnecessary. The successive inversion-normalization and normalization-inversion transformations are not equivalent in some cases; the question of the sequence of transformations and its consequences for the results requires further study.

References 1. Chatterjee, P., & Chakraborty, S. (2014). Investigating the effect of normalization norms in flexible manufacturing system selection using multi-criteria decision-making method. Journal of Engineering Science and Technology Review, 7(3), 141–150. 2. Vafaei, N., Ribeiro, R. A., & Camarinha-Matos, L. M. (2018). Data normalization techniques in decision making: Case study with TOPSIS method. International Journal of Information and Decision Sciences, 10(1), 19–38. 3. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342. 4. Markovič, Z. (2010). Modification of TOPSIS method for solving of multicriteria tasks. Yugoslav Journal of Operations Research, 20(1), 117–143. 5. Zeng, Q.-L., Li, D.-D., & Yang, Y. B. (2013). VIKOR method with enhanced accuracy for multiple criteria decision making in healthcare management. Journal of Medical Systems, 37, 1–9. 6. Rezk, H., Mukhametzyanov, I. Z., Al-Dhaifallah, M., & Ziedan, H. A. (2021). Optimal selection of hybrid renewable energy system using multi-criteria decision-making algorithms. Computers, Materials & Continua, 68, 2001–2027. 7. Mukhametzyanov, I. Z. (2020). ReS-algorithm for converting normalized values of cost criteria into benefit criteria in MCDM tasks. International Journal of Information Technology and Decision Making, 19(5), 1389–1423. https://doi.org/10.1142/S0219622020500327 8. Mukhametzyanov, I. Z. (2023). On the conformity of scales of multidimensional normalization: An application for the problems of decision making. Decision Making: Applications in Management and Engineering. https://doi.org/10.31181/dmame05012023i

Chapter 6

Rank Reversal in MCDM Models: Contribution of the Normalization

Abstract  Although the same transformation formula is applied, the values of each feature are transformed independently during multivariate normalization. In particular, for linear normalization methods the compression and shift coefficients are individual for each feature and are determined by the set of feature values of the different alternatives, "measured" in the scale of that feature. As a result, the domains of the normalized values of different features may be shifted relative to each other, and the data may be compressed to different degrees. This gives priority to the contribution of individual criteria in the performance indicator of the alternatives and is one of the significant reasons why ranking results vary with the normalization method. This chapter concentrates on an analysis of the reasons why the ordering of alternatives differs for different normalization methods. Attention is focused on the analysis of various multivariate normalization procedures and the contribution of individual attributes to the performance of the alternatives. Several indicators are proposed that determine the priority of the contribution of individual features to the performance indicator of the alternatives.

Keywords  Multivariate normalization · Rank reversal due to normalization · Relative preference for different normalizations

6.1 Main Factors Determining Rank Reversal in MCDM Problems

Let us turn again to the multi-criteria decision-making model outlined in Chap. 2. For each alternative Ai on the set of criteria Cj, the MCDM rank model determines the value Qi, an indicator of efficiency, on the basis of which the ranking of alternatives and the subsequent decision-making are carried out:

Q = f(A, C, D, ω, 'norm', 'dm', 'par').    (6.1)

The main factors that determine the ranking result correspond to the arguments of the ranking model (6.1).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_6


The first factor is determined by the choice of the set of alternatives and is known in MCDM theory as the rank reversal phenomenon: the preference order of alternatives changes when an alternative is added to or removed from the decision problem [1–5]. Belton and Gear were the first to point out rank change in the analytic hierarchy process (AHP) [4], Wang and Luo showed that rank change can occur in simple additive weighting (SAW) and TOPSIS [4], Wang and Triantaphyllou pointed out possible rank reversal for methods of the ELECTRE group [6], and Mareschal et al. noted the same problem for the PROMETHEE group of methods [7]. Permutations of ranks violate the invariance principle of utility theory, and this problem currently has no complete explanation or solution, apart from individual special cases.

The vector of weight coefficients of the criteria directly determines the degree of contribution of individual features to the integral performance indicator of an alternative. Therefore, in MCDM it is important to have a reliable method for determining the weighting factors. The remaining parameters of the model (6.1) affect the ranking result to a lesser extent, but for a large number of tasks their joint (complex) influence matters. It will be shown below that the criteria weights (w) determined by the selected weighting method, the aggregation method (f), the normalization method (norm), and the model parameters (par) determine the ranking of the alternatives only in conjunction with the decision matrix D.

The methods within each group of the multi-parameter MCDM procedure, for example, different weighting methods, different aggregation methods for normalized values, or different normalization methods, are correlated with each other; that is, on average they give the same solutions. However, in some cases significant deviations of the results appear, determined by the combination of the weighting, aggregation, and normalization methods and the matrix D. Such situations are classified as having a high sensitivity of the decision to the initial data and to the design of the MCDM model. In particular, objective weighting methods [8–16] are highly sensitive to D, and as a result the rating of the alternatives changes when the weighted normalized values are aggregated. The simple additive weighting method (SAW) is sensitive to D if the sums of the contributions of the different attributes to the performance scores of two alternatives are about the same. In this case, as soon as the criteria weights are slightly changed, or a different method of aggregation or normalization of the decision matrix is applied, rank reversal is possible.

6.2 Relative Preference for Different Normalizations

The main task of normalization is to reduce the natural values of attributes to dimensionless scales for the subsequent aggregation of attributes into a performance indicator of alternatives.


Everyone knows that it is impossible to sum quantities defined in different units of measurement, for example, weight (kg) and cost ($). In contrast, aggregation (for example, a simple sum) of normalized attribute values is considered a perfectly valid operation. However, even though the data are dimensionless after normalization, you still add "kilograms and currency units": normalized numeric attribute values continue to carry information about their units. For example, if normalization is performed by dividing by the largest value, then for weight the values characterize a fraction of the largest weight, and for cost, a fraction of the largest cost. A rational analysis of this situation shows that, in this sense, 0.7 + 0.3 is not the same as 0.3 + 0.7:

0.7 + 0.3 = 0.3 + 0.7 (?)

Indeed, let us scale the second indicator by a factor of 2, which corresponds to doubling the largest value of the second attribute for one of the alternatives. We get

0.7 + 0.15 > 0.3 + 0.35.

This deviation can be significant. A detailed analysis shows that the hidden reason for the change of ranks during normalization is that some features are unconsciously placed in a privileged position at the normalization stage and begin to influence the result much more strongly. To compensate for such scaling, researchers introduce different weights for different features. This, however, means that for different normalizations the weighting coefficients must also be different.

When aggregating normalized values, each attribute contributes to the performance of the alternatives, and this contribution differs between attributes: the principle of additive significance of the attributes of alternatives (Hwang & Yoon, 1981) [17].

For all linear normalization methods, the compression ratio and the offset parameter depend on the measurement scale and on the range of the natural attribute values. The effect is that the domains of the normalized values are shifted relative to each other, and the data are compressed to different degrees; this effect is illustrated in detail with examples below. As a result, the attributes of different criteria contribute differently to the performance indicator of the alternatives, so one or more criteria take precedence over the others. It is thus possible that "kilograms" will dominate the result despite the normalization.

The difference in compression ratios between attributes during normalization causes another negative effect: the distances between the normalized values for different attributes also depend on the compression ratios. This again leads to a situation in which different numerical values of different attributes must be aggregated.
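The effect above can be reproduced numerically. The following minimal sketch uses a hypothetical three-alternative example (the third alternative only holds the column maxima, so that A1 and A2 take the normalized values 0.7 and 0.3 from the text). Applying Max normalization and then doubling the largest value of the second attribute halves that column's normalized values and breaks the tie:

```python
# Hypothetical example: A3 holds the column maxima; A1 and A2 are the
# "0.7 + 0.3" vs "0.3 + 0.7" alternatives discussed in the text.
def max_normalize(matrix):
    col_max = [max(col) for col in zip(*matrix)]
    return [[v / m for v, m in zip(row, col_max)] for row in matrix]

def saw_scores(matrix):
    # Unweighted additive score (simple sum of normalized values).
    return [round(sum(row), 2) for row in max_normalize(matrix)]

D = [[70, 30], [30, 70], [100, 100]]
print(saw_scores(D))        # A1 and A2 tie: [1.0, 1.0, 2.0]

D[2][1] *= 2                # double the largest value of attribute 2
print(saw_scores(D))        # the tie breaks: [0.85, 0.65, 2.0]
```

After the rescaling, attribute 1 dominates the sum even though both columns are still dimensionless.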


Thus, the ranking result does not simply depend on the normalization method, one simple formula applied equally to all attributes, but is determined by the relationship between the normalized values of the various attributes. For linear methods of multidimensional normalization, the characteristic scales aj* and kj determine the displacement and the stretching or compression of the attribute values along the jth coordinate. Since the attributes of objects and the ranges of their values can differ greatly, it is reasonable to apply a separate scale for each feature, i.e., the per-feature statistics aj* and kj. In this case, the normalizations are not "isotropic": they compress the data cloud more strongly in some directions and less in others. Despite this violation of the data structure (the mutual distances), the approach is generally accepted.

It is believed [18–23, etc.] that a normalization method is adequate if it equalizes the impact levels of all criteria regardless of the weighting process and does not cause problems with changing the rank of alternatives. In fact, these requirements are merely desirable. For linear normalization methods, the compression and shift coefficients of each feature are individual and are determined by the set of feature values of the different alternatives, "measured" in the scale of that feature. Therefore, for different multi-criteria choice problems, the normalization method produces a different displacement of the domains of the normalized values of the features relative to each other and a different data compression, and can cause a change in the rank of alternatives. In the absence of comparison criteria, the preference for different normalizations is relative.
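This anisotropy is easy to observe by printing the per-column domain [min, max] of the normalized values under four common linear methods (a sketch with an arbitrary illustrative matrix of benefit criteria; the dSum method is omitted here):

```python
import math

# Illustrative 4x3 decision matrix (arbitrary values, benefit criteria only).
D = [[6500, 85, 1750],
     [5800, 83, 2680],
     [4500, 71, 1056],
     [5900, 80, 1650]]
cols = list(zip(*D))

max_norm    = lambda v, c: v / max(c)
sum_norm    = lambda v, c: v / sum(c)
vec_norm    = lambda v, c: v / math.sqrt(sum(x * x for x in c))
maxmin_norm = lambda v, c: (v - min(c)) / (max(c) - min(c))

for name, f in [("Max", max_norm), ("Sum", sum_norm),
                ("Vec", vec_norm), ("Max-Min", maxmin_norm)]:
    domains = [(round(min(f(v, c) for v in c), 3),
                round(max(f(v, c) for v in c), 3)) for c in cols]
    print(f"{name:8s}", domains)
```

Max-Min maps every column onto [0, 1]; Max fixes only the upper bound at 1 while the lower bounds differ per column; Sum and Vec shift and compress each column differently, so the columns end up on mutually displaced scales.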

6.3 Assessing the Contribution of an Individual Attribute to the Performance Indicator of an Alternative

For rank-based MCDM methods, the ranking of the alternatives is done according to the position number in the ordered list of the alternative performance measure Qi. A direct calculation of the contribution of an individual attribute to the performance indicator of an alternative can be performed for value measurement methods (group G1) or additive methods (SAW). Without taking into account the weights of the criteria, the performance indicator is determined by the formula:

Qi = Σ_{j=1}^{n} r_ij,    (6.2)

where r_ij are the normalized values of the natural attribute values a_ij, obtained using one of the Norm() normalization methods. Therefore, the contribution of an individual feature to the performance indicator of an alternative is determined by the following matrix:

cQ = (r_ij / Qi) × 100%,    (6.3)

where "c" is an abbreviation of the term "contribution," and "Q" is the designation for the indicator of the effectiveness of alternatives (cQ: contribution to the performance indicator). Each row of the matrix cQ is defined by the vector:

cQ_i = (r_i1, r_i2, ..., r_in) / Σ_{j=1}^{n} r_ij × 100%.    (6.4)

Expression (6.4) is a normalization of the feature vector of the ith alternative and can be interpreted as the intensity (in %) of the contribution of the criteria to the performance indicator of the alternative. This intensity is generated by the natural correlation of the feature values. It nevertheless depends directly on the normalization method and therefore allows the results of different normalization methods to be compared. The idea behind the comparison is quite simple:

• compare two normalization methods: r(1) = Norm1(a) and r(2) = Norm2(a);
• the domain shift and the data compression clearly differ between the two normalization methods, which changes the intensities of the contributions;
• the task is to evaluate the influence of the normalization method on the contribution of the features to the result;
• since the purpose of multi-criteria choice is to determine one or more alternatives of the highest rank for the final choice by the decision maker, we compare the intensities for the alternatives of the first rank. When the ratings of the alternatives of the first, second, etc. ranks are weakly distinguishable, we also use the intensities of these alternatives for the analysis if necessary.

How can two sets of intensities cQk(1) and cQk(2) be compared?

1. By the largest deviations of the intensities of individual features.
2. By the median values med(), a robust characteristic.
3. By the deviation from the median value, searching for a suitable formula to determine the boundaries of a "confidence interval."

Although the comparison criteria are not formally defined (we cannot state which of the normalization options is better or worse), the results of the comparison provide additional important information. We follow the basic principle that normalization should not lead to prioritizing the contribution of individual criteria to the performance of alternatives. Let us explain how the comparison is performed with an example.

Suppose that for two different normalization methods the alternative with number k has the first rank, and let the contribution vectors to the efficiency indicator for these two normalization methods be: cQk(1) = (15, 17, 30, 12, 26), cQk(2) = (18, 14, 25, 12, 20), (%).


1. The greatest change in the contribution intensities occurs for the third and fifth features, by 5 and 6 points, respectively.
2. med(1) = 17 and med(2) = 18 are approximately the same.
3. The largest deviations from the median values are Δ(1) = 30 − 17 = 13 and Δ(2) = 25 − 18 = 7.

The conclusion of the analysis is as follows: the choice of the normalization method did not affect the rating, and the contribution of the features to the result is approximately the same in both cases. From the standpoint of an equal contribution of all criteria, the second normalization method is preferable.

If the choice of the normalization method results in a rank inversion, two pairwise comparisons of the intensity vectors must be performed to analyze the cause. Let the alternative with number k have the first rank for the normalization Norm1, and the alternative with number s have the first rank for the normalization Norm2. It is then necessary to compare the two pairs cQk(1) and cQk(2), and cQs(1) and cQs(2). Such a comparison makes it possible in some cases to reveal the cause of the rank inversion.

To determine a suitable formula for the boundaries of the "confidence interval," we use the approach described in Sect. 3.4. The approach is based on the idea of detecting significant deviations in the intensities of the contributions, as in the case of anomalous values or outliers. The outlier identification technique is based on the interquartile method: outliers are data that are more than 1.5 interquartile ranges (IQR) below the first quartile or above the third quartile. We calculate the boundaries of the "confidence interval" taking into account the asymmetry of the distribution, while keeping them equal to the usual 1.5·IQR in the symmetric case. This calculation of the boundaries of the "confidence interval" under asymmetry was proposed in [24].
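The three comparisons of the example vectors cQk(1) and cQk(2) above can be checked directly (a minimal sketch using Python's standard library):

```python
from statistics import median

cq1 = [15, 17, 30, 12, 26]   # cQk(1), %
cq2 = [18, 14, 25, 12, 20]   # cQk(2), %

# 1. Largest per-feature change of the contribution intensities.
delta = [abs(a - b) for a, b in zip(cq1, cq2)]
print(delta)                                  # [3, 3, 5, 0, 6]

# 2. Medians are approximately the same.
print(median(cq1), median(cq2))               # 17 18

# 3. Largest deviation above the median.
print(max(cq1) - median(cq1), max(cq2) - median(cq2))   # 13 7
```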
All observations outside the interval for MC ≥ 0:

[q1 − 1.5·e^(−4·MC)·IQR,  q3 + 1.5·e^(3·MC)·IQR],    (6.5)

and all observations outside the interval for MC < 0:

[q1 − 1.5·e^(−3·MC)·IQR,  q3 + 1.5·e^(4·MC)·IQR],    (6.6)

will be flagged as potential outliers, where q1 and q3 are the 25% and 75% percentiles of the cQ vector, respectively; IQR = q3 − q1 is the interquartile range; and MC is the "skewness factor" determined by the medcouple (MC) function [24]:

MC = med_{xi ≤ med(x) ≤ xj} h(xi, xj),    (6.7)

h(xi, xj) = [(xj − med(x)) − (med(x) − xi)] / (xj − xi),    (6.8)

where med(x) is the sample median.

A certain problem in applying the described approach is the dimension of the feature vector: the sample size matters for the correct determination of outliers, and correct results can be achieved for n ≥ 6. For the example above, the "confidence interval" for the vector cQk(1) is [11.0, 99.6], and for the vector cQk(2) it is [1.9, 32.9]. Another example shows the presence of an outlier in the intensities: for cQ = (15, 5, 9, 17, 15, 27, 13) the "confidence interval" is [0.25, 26.5], and therefore the contribution intensity of the sixth feature, equal to 27%, is overestimated, which may be due to the normalization.

For aggregation methods based on a reference level (G2) and outranking methods (G3), it is difficult to determine the contribution of an individual feature to the performance indicator of an alternative. For these groups of methods we will also use the contribution estimate of the additive approach. This is justified by the correlation of the ranking results (for a significant number of selection problems) obtained with the methods of groups G1–G3.
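A sketch of the procedure follows. It uses a naive O(n²) medcouple without the tie-handling kernel of [24], and quartiles from the standard library's inclusive method; the exact interval bounds therefore depend on the quartile convention and may differ somewhat from the values reported above, but the flagged outlier agrees with the text:

```python
import math
from statistics import median, quantiles

def medcouple_naive(x):
    # Median of the kernel h(xi, xj), Eq. (6.8), over pairs
    # xi <= med(x) <= xj with xi < xj (tie kernel omitted).
    m = median(x)
    h = [((xj - m) - (m - xi)) / (xj - xi)
         for xi in x if xi <= m
         for xj in x if xj >= m and xj > xi]
    return median(h)

def adjusted_fences(x, k=1.5):
    # Skewness-adjusted boxplot fences, Eqs. (6.5)-(6.6).
    q1, _, q3 = quantiles(x, n=4, method='inclusive')
    iqr = q3 - q1
    mc = medcouple_naive(x)
    if mc >= 0:
        return (q1 - k * math.exp(-4 * mc) * iqr,
                q3 + k * math.exp(3 * mc) * iqr)
    return (q1 - k * math.exp(-3 * mc) * iqr,
            q3 + k * math.exp(4 * mc) * iqr)

cq = [15, 5, 9, 17, 15, 27, 13]
lo, hi = adjusted_fences(cq)
print([v for v in cq if v < lo or v > hi])   # [27]: the 27% intensity is flagged
```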

6.4 Rank Reversal Due to Normalization

Statement: Local priorities of the alternatives with respect to the criteria, coupled with the normalization method (differences in the data compression ratios and domain bias), are the main reasons for the change in the ranks of alternatives due to normalization.

What is meant by local priorities of the alternatives relative to the criteria? It means that the selection problem is defined by conflicting alternatives: alternative Ap surpasses alternative Aq in some group of features, and vice versa in another group of features. The local priorities of the alternatives relative to the criteria represent a natural correlation between the features of the different alternatives and cannot be changed (except for outliers in the data). Therefore, it should be understood that none of the normalization methods can be recognized as more effective than another without being tied to a specific task.

What criteria should be followed when choosing a normalization method? The recommendation is: do not use methods that lead to a priority of the contribution of individual criteria. The effectiveness of a normalization method is postulated by a set of positive properties that determine the basic principles (necessary properties) of normalizing multidimensional data (see Chap. 3).


Fig. 6.1 Rank reversal during normalization due to local priorities of alternatives. Same weights. SAW method of aggregation

An illustrative example (Fig. 6.1) of local priority of alternatives and rank reversal is demonstrated by a choice problem with a decision matrix:

D = (a_ij) =

    6500   85   667   140   1750
    5800   83   564   145   2680
    4500   71   478   150   1056
    5600   76   620   135   1230
    4200   74   448   160   1480
    5900   80   610   163   1650
    4450   70   479   151   1059
    6000   81   580   178   2065
                                      (6.9)

Alternative A3 has priority over alternative A8 by criteria 3 and 5, and vice versa, alternative A8 has priority over alternative A3 by criteria 1, 2, and 4. For the Max, Sum, and Vec normalization methods, alternative A3 has rank 1, while for the Max-Min and dSum normalization methods, alternative A8 has rank 1. In all cases the criteria weights are equal and the SAW aggregation method is used.

An assessment of the contribution of the individual features to the performance indicators of alternatives A3 and A8 is presented in Table 6.1. Criterion 5 makes the greatest contribution to the rating of alternative A3, and criterion 4 makes the greatest contribution to the rating of alternative A8; in Fig. 6.1 these criteria are highlighted in red (vertical lines). The second largest contributions are made by criteria 3 and 1, respectively (blue vertical lines). This difference is not significant. Therefore, there is no reason to prefer any of the normalization methods used, and both alternatives A3 and A8 qualify for selection.
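This rank reversal can be reproduced with a short script. The sketch below assumes equal weights (the common factor 1/n does not affect the order and is omitted), SAW aggregation, benefit criteria C1, C2, C4 and cost criteria C3, C5, with the cost criteria inverted after normalization by the ReS-algorithm r' = min(r) + max(r) − r (an assumption: the chapter does not restate the inversion rule here, but this choice reproduces the ranks reported in Table 6.2):

```python
D = [[6500, 85, 667, 140, 1750],
     [5800, 83, 564, 145, 2680],
     [4500, 71, 478, 150, 1056],
     [5600, 76, 620, 135, 1230],
     [4200, 74, 448, 160, 1480],
     [5900, 80, 610, 163, 1650],
     [4450, 70, 479, 151, 1059],
     [6000, 81, 580, 178, 2065]]
COST = {2, 4}   # zero-based column indices of the cost criteria C3 and C5

def saw_order(D, norm):
    cols = list(zip(*D))
    R = []
    for j, col in enumerate(cols):
        r = [norm(v, col) for v in col]
        if j in COST:                         # ReS inversion of cost criteria
            r = [min(r) + max(r) - v for v in r]
        R.append(r)
    Q = [sum(col[i] for col in R) for i in range(len(D))]
    return [i + 1 for i in sorted(range(len(D)), key=lambda i: -Q[i])]

max_norm    = lambda v, c: v / max(c)
maxmin_norm = lambda v, c: (v - min(c)) / (max(c) - min(c))

print(saw_order(D, max_norm))      # [3, 7, 8, 6, 5, 1, 4, 2]: A3 first
print(saw_order(D, maxmin_norm))   # [8, 6, 1, 5, 3, 7, 2, 4]: A8 first
```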


Table 6.1 Contribution of the criteria to the performance indicator of the alternatives of the first rank

Alternative A3:
Norm()    Rank   cQj, % (C1..C5)         Cj   Max    Med    Conf. interval
Max       1      16  19  22  19  23      #5   23.1   19.5   [13, 29]
Sum       1      15  17  21  18  30      #5   29.9   17.8   [14, 58]
Vec       1      15  17  21  18  29      #5   29.0   18.0   [14, 56]
Max-Min   5       6   0  37  15  43      #5   42.7   14.9   [-11, 165]
dSum      6      18  17  22  20  23      #5   22.9   19.7   [13, 32]

Alternative A8:
Norm()    Rank   cQj, % (C1..C5)         Cj   Max    Med    Conf. interval
Max       3      21  22  19  23  14      #4   23.2   21.5   [-25, 23]
Sum       6      21  20  18  22  19      #4   21.9   19.6   [18, 27]
Vec       6      21  20  18  22  19      #4   22.2   19.9   [15, 26]
Max-Min   1      24  22  12  31  12      #4   30.6   21.8   [-14, 40]
dSum      1      20  20  18  22  19      #4   21.7   20.2   [15, 23]

If several alternatives have local priorities according to different criteria, then a situation is possible in which the performance indicators of such alternatives differ only slightly: the alternatives are hardly distinguishable. Therefore, to determine the priority of the alternatives it is not enough to compare the absolute values of the efficiency indicator Qi. In addition, attribute values may be inaccurate: an attribute may be measured approximately, the data source may be unreliable, a measurement may be erroneous, the measurements for different alternatives may have been made by different methods, and some attributes may be random values or determined by interval values. Under such conditions, the solution is sensitive to errors in the evaluation of the initial attribute values. A situation with a high decision sensitivity can be recognized using the relative performance indicator of the alternatives (relative PI):

dQ_p = (Q_p − Q_{p+1}) / rng(Q) × 100%,  p = 1, ..., m − 1,    (6.10)

where Q_p is the value of the performance indicator corresponding to the alternative of the pth rank, and rng(Q) = Q1 − Qm. The dQ score is the relative (given in the Q scale) gain or loss of the performance score along the ordered list of alternatives. We consider two alternatives whose relative gaps dQ differ by less than a given a priori error to be indistinguishable. For the example above, the relative rating gaps dQ provided in Table 6.2 give additional information: given the small value of the relative rating gap, for the Max, Sum, and Vec normalization methods alternative A7 should also be recommended as a possible solution.
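Formula (6.10) in code (a minimal sketch; the vector of performance indicators below is illustrative and made up):

```python
def relative_gaps(Q):
    # dQ_p = (Q_p - Q_{p+1}) / rng(Q) * 100 for the descending-ordered Q.
    q = sorted(Q, reverse=True)
    rng = q[0] - q[-1]
    return [round((q[p] - q[p + 1]) / rng * 100, 1) for p in range(len(q) - 1)]

# Hypothetical indicators: ranks 2 and 3 are nearly indistinguishable.
print(relative_gaps([0.70, 0.63, 0.628, 0.40]))   # [23.3, 0.7, 76.0]
```

The tiny 0.7% gap between ranks 2 and 3 signals that those two alternatives should be treated as practically equivalent.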


Table 6.2 Relative rating gap dQ for different normalization methods

        Max           Sum           Vec           Max-Min       dSum
Rank    #Ai  dQi,%    #Ai  dQi,%    #Ai  dQi,%    #Ai  dQi,%    #Ai  dQi,%
1       3    3.9      3    2.4      3    2.6      8    29.0     8    30.2
2       7    1.7      7    16.5     7    14.3     6    22.3     6    14.0
3       8    0.4      6    2.2      6    2.8      1    8.7      1    21.1
4       6    10.1     5    4.1      5    6.3      5    16.1     5    6.8
5       5    13.8     4    2.8      4    0.1      3    6.1      2    10.6
6       1    1.9      8    8.4      8    10.0     7    6.1      3    6.5
7       4    68.2     1    64.6     1    66.0     2    12.7     7    10.7
8       2             2             2             4             4

Thus, under high competition of the alternatives and sensitivity of the rating to small deviations in the evaluation of the decision matrix, the choice of the normalization method may slightly change the value of the efficiency indicator, which leads to a change in the ranking of the alternatives.

As a proof of the assertion given at the beginning of this section, we use the counter-example technique. Let us select examples of problems in which the use of the 5 basic linear normalization methods in one case gives the first rank to only one alternative, while in the other example all the first-rank alternatives are different. It is necessary to generate a decision matrix that is sensitive to the choice of the normalization method (with the other parameters of the decision model being the same). The technique for generating such a decision matrix D1 is based on the generation of random values (uniform law): for each attribute, m random values (m alternatives) are generated from the achievable range of that attribute, from smallest to largest. The algorithm for generating such a matrix has the following simple form (MATLAB-style notation):

Let D be an [m × n] matrix.
rng = max(D) − min(D)        [1 × n]  – attribute range vector,
range = repmat(rng, m, 1)    [m × n]  – attribute range matrix,
t0 = repmat(min(D), m, 1)    [m × n]  – matrix of the lower bounds of the attributes,
t = rand(m, n) .* range      [m × n]  – element-wise product of a random matrix and the range matrix,
D1 = t0 + t.

The decision matrix D1 obtained in this way represents the same decision problem as the one defined by the matrix D0, but with a different set of alternatives. Next, for each such decision matrix, the ranking is performed using the selected aggregation method with variations of the normalization procedure and with the other parameters of the MCDM model fixed (for example, the criteria weights). The iterative search procedure for D1 ends when, for the selected set of normalization methods, all the first-rank alternatives satisfy a given requirement, for example, all are different or all are the same.
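A Python sketch of this generation step (the function name is illustrative):

```python
import random

def generate_d1(D0, seed=None):
    # Draw each attribute uniformly from its achievable range
    # [min_j, max_j] in the original decision matrix D0.
    rnd = random.Random(seed)
    cols = list(zip(*D0))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[lo[j] + rnd.random() * (hi[j] - lo[j]) for j in range(len(cols))]
            for _ in range(len(D0))]
```

The generated matrix is then ranked repeatedly under the chosen aggregation method and each normalization method until the first-rank alternatives meet the stopping requirement.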


Fig. 6.2 The relative position of the domains of the normalized values and the decision matrix for which the I-rank alternatives are the same for the 5 main linear normalization methods. TOPSIS aggregation method. Equal criteria weights

The first example (Fig. 6.2) demonstrates the independence of the rating from the choice of the normalization method. The heading of each variant (subplot) lists the normalization method and the numbers of the alternatives of ranks I, II, and III, respectively. For this example, a decision matrix has been generated with a range similar to that of the decision matrix in Table 2.1. The relative difference in the performance indicators is high for all normalization methods, which is sufficient for distinguishing the alternatives (Table 6.3). It is the weak competition of the alternatives that is the stabilizing factor behind the weak sensitivity of the rating to the choice of the "aggregation method–normalization method" model. Figure 6.3 shows the results of solving the same problem for the SAW, TOPSIS, GRA, WPM, WASPAS, and COPRAS aggregation methods.


Table 6.3 Values of the efficiency indicators of the alternatives of ranks I–III (TOPSIS method of aggregation)

Normalization   Alternatives of   Performance indicator   Relative change      Intensity
method          ranks I–III       of ranks I–III          of Q, %              Q, %
                1    2    3       Q1     Q2     Q3        dQ1   dQ2   dQ3      iQ1   iQ2   iQ3
Max             7    3    4       0.70   0.63   0.59      19.0  11.3  1.5      15.8  14.2  13.3
Sum             7    3    4       0.73   0.70   0.65      7.6   8.9   8.7      16.0  15.2  14.3
Vec             7    3    4       0.73   0.69   0.65      8.5   9.2   8.2      16.0  15.1  14.2
Max-Min         7    8    6       0.68   0.61   0.56      28.3  19.7  18.3     16.0  14.4  13.3
dSum            7    8    6       0.66   0.63   0.56      13.6  27.6  5.5      15.6  14.8  13.2

Fig. 6.3 The relative position of the domains of normalized values and the ranking of alternatives for the decision matrix D0 using 30 “aggregation-normalization” methods. Weak sensitivity of the problem to local priorities of alternatives

In fact, 30 different "aggregation method–normalization method" models are used to solve the original problem (see Sect. 2.5). For all aggregation methods (except COPRAS (Max-Min)), alternative A7 has the first rank. The majority of the variants of the aggregation-normalization model also produce the same priority of the alternatives of the second rank (A3 and A8) and of the third rank (A6).

The following example (decision matrix D1, obtained using the generation technique described above) demonstrates a strong dependence of the rating on the choice of the normalization method. If several alternatives have some of their attributes "strong" and approximately as many "weak," then the performance indicators of such alternatives differ only slightly, and the alternatives are hardly distinguishable. Under such conditions, the influence of the normalization method on the result of the attribute aggregation and the ranking of the alternatives manifests itself clearly, as does the sensitivity of the solution to errors in estimating the initial attribute values.


Fig. 6.4 The relative position of the domains of the normalized values and the decision matrix for which the I-rank alternatives are different for the 5 basic linear normalization methods. TOPSIS aggregation method. Equal criteria weights

Figure 6.4 illustrates the relative position of the domains of the normalized values for a computer-generated decision matrix (with a range of values similar to that of the decision matrix in Table 2.1 and in the first example). For this example, all the first-rank alternatives are different for the five basic linear normalization methods. Figure 6.4 shows a situation in which some of the attributes are "strong" and approximately the same number are "weak." The performance indicators of such alternatives differ only slightly, and therefore the alternatives are hardly distinguishable. For the presented example, the values of the performance indicators are given in Table 6.4. The relative difference in the performance indicators for some normalization methods does not exceed 1–3%. In such a situation, ranking the alternatives by absolute value is questionable.


Table 6.4 The values of the performance indicators of the alternatives of ranks I–III for the example in Fig. 6.3 (TOPSIS method)

Normalization   Alternatives of   Performance indicator    Relative change      Intensity
method          ranks I–III       of ranks I–III           of Q, %              Q, %
                1    2    3       Q1     Q2      Q3        dQ1   dQ2   dQ3      iQ1   iQ2   iQ3
Max             4    8    1       0.72   0.704   0.700     3.2   1.1   8.1      16.1  15.7  15.6
Sum             1    8    4       0.74   0.737   0.735     0.1   1.4   11.0     16.6  16.5  16.4
Vec             8    1    4       0.74   0.738   0.730     0.2   0.7   11.2     16.5  16.4  16.3
Max-Min         6    2    4       0.63   0.614   0.610     6.3   3.5   8.9      14.2  13.8  13.6
dSum            2    1    4       0.64   0.604   0.602     16.9  0.8   3.1      14.4  13.7  13.6

Fig. 6.5 Mutual arrangement of domains of normalized values and ranking of alternatives for decision matrix D1 using 30 “aggregation method-normalization method” models. High sensitivity of the problem to local priorities of alternatives

It should be noted that the performance indicator in the above example is sensitive to the assessment of the feature values: if the decision matrix is rounded to integer values, the first and second ranks are reversed for the Vec normalization method. The rank reversal is due to the sensitivity of the problem to the local priorities of the alternatives and manifests itself in the weak distinguishability of the performance indicators (highlighted in Table 6.4). The strong competition of the alternatives is a factor in the high sensitivity of the rating to the choice of the "aggregation method–normalization method" model.

Figure 6.5 shows the results of solving the same problem (decision matrix D1) for the SAW, TOPSIS, GRA, WPM, WASPAS, and COPRAS aggregation methods. In a situation of high sensitivity of the problem to the local priorities of the alternatives, and taking into account the high degree of correlation of the various aggregation methods, rank changes appear for the other aggregation methods as well. Thus, for the SAW aggregation method a change of the first-rank alternative is observed in four out of


Table 6.5 Pairwise correlation matrices of the features for the decision matrices D0 and D1 (mean and std of the absolute off-diagonal values |r|)

Corr(D0):
 –    0.904   0.945    0.117    0.397
       –      0.771    0.141    0.654
               –      -0.085    0.178
                        –      -0.004
                                 –
mean |r| 0.419, std 0.365

Corr(D1):
 –   -0.400  -0.545   -0.400   -0.117
       –      0.203   -0.305    0.430
               –       0.369    0.475
                        –      -0.130
                                 –
mean |r| 0.338, std 0.145

five normalization options, and for the WPM, WAPRAS, and COPRAS methods in three cases. Note that for each of the aggregation methods (SAW, WPM, TOPSIS, GRA PROMETHEE, etc.), using the decision matrix generation technique described above (problem generation), one can obtain a 1-rank alternative rotation for all independent decision matrix normalization methods. Since the generated matrices have a similar range as for the decision matrix D0 (Table 2.1), the domains and their relative positions (Figs. 6.2, 6.3, 6.4, 6.5) almost coincide. This means that the ranking result depends on the ratio between the values of the original decision matrix and is determined by the local priorities of the alternatives for various attributes. Thus, not only the aggregation method and the normalization method determine the result. The change in rank is determined to a large extent by the ratio of the values of the original decision matrix or is determined by the local priorities of alternatives for various attributes. Given the limited set of methods, it is not difficult to perform decision analysis for various “aggregation method–normalization method” models. If the results remain unchanged, they are called reliable, otherwise they are sensitive. In the latter case, a sensitivity analysis is required and/or several alternatives for the final rating to be proposed. What indicators can be used to evaluate the high sensitivity of the problem to local priorities of alternatives from normalization. It is expected that high sensitivity is accompanied by low feature correlation in the decision matrix. A low value of pair correlation corresponds to a situation where the dominance of alternatives for each pair of features is uniform (for example, in accordance with the sign change criterion). In this case, the sums of the attributes of alternatives will be approximately equal. 
As soon as the local priorities of the alternatives change, for example, due to the choice of the normalization method (the data compression coefficients and the bias), the ranking of the alternatives may also change. This statement is a hypothesis put forward by the author on the basis of the analysis of numerous numerical calculations using various normalization methods. Recall also that the feature correlation matrix is invariant under linear normalization (see Chap. 4, Property 4). As an example, Table 6.5 shows the feature pair correlation matrices for the decision matrices D0 and D1.


The correlations for D1 (the high-sensitivity matrix) are on average significantly lower (by 24%) than for the D0 matrix (the low-sensitivity matrix). Detailed sensitivity analyses are provided in Chaps. 11–12.
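The averages quoted above can be reproduced directly from the off-diagonal entries of Table 6.5. A minimal sketch, under the assumption that the "mean" and "std" entries of the table denote the mean and sample standard deviation of the absolute pair correlations:

```python
import numpy as np

# Lower-triangle (off-diagonal) pair correlations from Table 6.5
corr_D0 = [0.904, 0.945, 0.771, 0.117, 0.141, -0.085, 0.397, 0.654, 0.178, -0.004]
corr_D1 = [-0.400, -0.545, 0.203, -0.400, -0.305, 0.369, -0.117, 0.430, 0.475, -0.130]

# Mean and sample standard deviation of the absolute correlations
mean_D0, std_D0 = np.mean(np.abs(corr_D0)), np.std(np.abs(corr_D0), ddof=1)
mean_D1, std_D1 = np.mean(np.abs(corr_D1)), np.std(np.abs(corr_D1), ddof=1)
print(mean_D0, std_D0)  # close to the table values 0.419 and 0.365
print(mean_D1, std_D1)  # close to the table values 0.338 and 0.145
```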

6.5 Conclusions

1. Linear multivariate normalization methods reduce data either to one common scale (isotropic normalizations) or to a conditionally common scale (anisotropic normalizations). In both cases, multivariate normalization leaves open the possibility that one or another feature gains priority and determines the result of the aggregation of the particular indicators. Thus, the problem is not solved in principle.

2. For linear normalization methods, the compression and shift coefficients are individual for each feature and are determined by the set of feature values of the different alternatives, "measured" in the scale of that feature. As a result, the domains of the normalized values of various features can be shifted relative to each other, with different degrees of data compression. When choosing a normalization method, strong bias and compression of the values of individual attributes should be avoided in order to prevent the contributions of individual features from dominating the performance indicator of an alternative.

3. Preference among normalizations is relative. The change in rank is determined to a large extent by the ratio of the values of the original decision matrix, that is, by the local priorities of the alternatives for the various attributes.

4. In a situation in which some of the attributes are "strong" and approximately the same number are "weak," the performance indicators of such alternatives differ only slightly, and the alternatives are therefore hardly distinguishable. Such a problem exhibits high sensitivity of the rating to the choice of the normalization method.

References

1. Saaty, T. L., & Sagir, M. (2009). An essay on rank preservation and reversal. Mathematical and Computer Modelling, 49(5–6), 1230–1243.
2. Aires, R. F. F., & Ferreira, L. (2018). The rank reversal problem in multi-criteria decision making: A literature review. Pesquisa Operacional, 38(2), 331–362.
3. García-Cascales, M. S., & Lamata, M. T. (2012). On rank reversal and TOPSIS method. Mathematical and Computer Modelling, 56, 123–132.
4. Belton, V., & Gear, T. (1985). The legitimacy of rank reversal – a comment. Omega, 13, 143–144.
5. Wang, Y. M., & Luo, Y. (2009). On rank reversal in decision analysis. Mathematical and Computer Modelling, 49, 1221–1229.
6. Wang, X., & Triantaphyllou, E. (2008). Ranking irregularities when evaluating alternatives by using some ELECTRE methods. Omega, 36, 45–63.
7. Mareschal, B., De Smet, Y., & Nemery, P. (2008). Rank reversal in the PROMETHEE II method: Some new results. Proceedings of the IEEE 2008 International Conference on Industrial Engineering and Engineering Management, 959–963.
8. Stillwell, W. G., Seaver, D. A., & Edwards, W. (1981). A comparison of weight approximation techniques in multiattribute utility decision making. Organizational Behavior and Human Performance, 28, 62–77.
9. Solymosi, T., & Dombi, J. (1986). A method for determining the weights of criteria: The centralized weights. European Journal of Operational Research, 26, 35–41.
10. Ustinovičius, L. (2001). Determining integrated weights of attributes. Statyba, 7(4), 321–326.
11. Roberts, R., & Goodwin, P. (2002). Weight approximations in multi-attribute decision models. Journal of Multi-Criteria Decision Analysis, 11, 291–303.
12. Shirland, L. E., Jesse, R. R., Thompson, R. L., & Iacovou, C. L. (2003). Determining attribute weights using mathematical programming. Omega, 31, 423–437.
13. Xu, X. (2004). A note on the subjective and objective integrated approach to determine attribute weights. European Journal of Operational Research, 156, 530–532.
14. Žižović, M., Miljković, B., & Marinković, D. (2020). Objective methods for determining criteria weight coefficients: A modification of the CRITIC method. Decision Making: Applications in Management and Engineering, 2(3), 149–161.
15. Wang, Y. M., & Luo, Y. (2010). Integration of correlations with standard deviations for determining attribute weights in multiple attribute decision making. Mathematical and Computer Modelling, 51, 1–12.
16. Mukhametzyanov, I. Z. (2021). Specific character of objective methods for determining weights of criteria in MCDM problems: Entropy, CRITIC, SD. Decision Making: Applications in Management and Engineering, 4(2), 76–105.
17. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey.
18. Pavličić, D. (2001). Normalization affects the results of MADM methods. Yugoslav Journal of Operations Research, 11(2), 251–265.
19. Liping, Y., Yuntao, P., & Yishan, W. (2009). Research on data normalization methods in multi-attribute evaluation. Proceedings of the International Conference on Computational Intelligence and Software Engineering, Wuhan, China, 1–5.
20. Chatterjee, P., & Chakraborty, S. (2014). Investigating the effect of normalization norms in flexible manufacturing system selection using multi-criteria decision-making method. Journal of Engineering Science and Technology Review, 7(3), 141–150.
21. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342.
22. Peldschus, F. (2018). Recent findings from numerical analysis in multi-criteria decision making. Technological and Economic Development of Economy, 24(4), 1695–1717. https://doi.org/10.3846/20294913.2017.1356761
23. Aytekin, A. (2021). Comparative analysis of normalization techniques in the context of MCDM problems. Decision Making: Applications in Management and Engineering, 4(2), 1–25. https://doi.org/10.31181/dmame210402001a
24. Hubert, M., & Vandervieren, E. (2008). An adjusted boxplot for skewed distributions. Computational Statistics & Data Analysis, 52(12), 5186–5201.

Chapter 7

Coordination of Scales of Normalized Values: IZ-Method

Abstract Although the same transformation is applied, the values of each feature are transformed independently during multivariate normalization. In particular, for linear normalization methods the compression and shift coefficients are individual for each feature and are determined by the set of feature values of the different alternatives, "measured" in the scale of that feature. As a result, the domains of the normalized values of various features can be shifted relative to each other, with different degrees of data compression. This gives priority to the contributions of individual criteria in the performance indicator of the alternatives and is one of the significant reasons for the variation of ranking results depending on the normalization method. To eliminate the differences in the contributions of individual attributes to the performance indicator of alternatives caused by the shift of the domains of normalized values, a transformation of the domain of normalized values is proposed. This transformation is performed on the basis of the IZ-method proposed by the author. The IZ-method converts multidimensional data to a single common scale and represents a class of multivariate normalization methods that convert to isotropic scales.

Keywords Multivariate normalization · Domains displacement · Elimination of displacement · Conditionally general scale · IZ-method · IZ transformation for non-linear aggregation methods

7.1 Ratio of Feature Scales

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_7

Normalization involves the subsequent transformation of dimensionless values, for example, feature aggregation in rank-based MCDM methods [1–4]. However, with multidimensional normalization, despite the fact that the data are dimensionless, each dimensionless feature still carries information about its own measurement scale. This is because the range of normalized values for each attribute depends on the scale of measurement and on the range of natural values. As a result, the contributions of the attributes of different criteria to the rating of an alternative will differ, and it is possible that "kilograms" will dominate the result despite the normalization. Let us demonstrate this with a simple example for the case of three alternatives and two features with the decision matrix:

A = \begin{pmatrix} 10 & 90 \\ 5 & 80 \\ 2 & 100 \end{pmatrix}.   (7.1)

After Max normalization ($r_{ij} = a_{ij}/a_j^{\max}$), the decision matrix looks like:

R = \begin{pmatrix} 1 & 0.9 \\ 0.5 & 0.8 \\ 0.2 & 1 \end{pmatrix}.   (7.2)

The normalized values for the Max-method are interpreted as fractions of the attribute's best value. In this case, the aggregation of such shares, for example by summation, seems adequate and correct. But is it? Comparing the normalized values, we can conclude that the contribution of the second feature in a simple summation clearly exceeds the contribution of the first, most strongly for the third alternative (the third row of the decision matrix). Thus, after normalization, a priority of the second criterion is observed (even before the assignment of weight coefficients). Despite the possible negative consequences shown by the above example, a common interpretation of the normalized values (see Sect. 4.5) is better than no interpretation. If the domain of normalized values is the same for all attributes, then what do we sum? Presumably, fractional parts of the feature. From the standpoint of the independence of attributes and the possibility of transforming normalized values, individual normalizations could be applied to individual features: for example, the Sum normalization method for one feature and the Vec method for another. One reason the same normalization method is applied to all attributes is so that the normalized values of different attributes are interpreted in the same way, values of the same order of magnitude are subsequently aggregated, and fractions of different quantities are not added together. This common interpretation of normalized values is precisely the limiting factor that prevents applying different normalization methods to different attributes (despite their independence). For example, why not apply Max normalization to one attribute and Sum normalization to another if the attributes are independent? Because in that case it becomes impossible to compare or aggregate values that differ in meaning, even though they are dimensionless.
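The dominance effect described above can be checked numerically. A minimal sketch using the decision matrix of Eq. (7.1):

```python
import numpy as np

# Decision matrix from Eq. (7.1): three alternatives (rows), two features (columns)
A = np.array([[10.0, 90.0],
              [5.0, 80.0],
              [2.0, 100.0]])

# Max normalization: r_ij = a_ij / max_i(a_ij), applied column-wise
R = A / A.max(axis=0)

# Total (unweighted) contribution of each feature to a simple sum
contrib = R.sum(axis=0)
print(contrib)  # the second feature contributes noticeably more than the first
```

The column sums are 1.7 and 2.7, so a plain summation is biased toward the second feature even before any weights are assigned.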
For example, the fraction of the best value in the Max-method is not equal to the fraction of the sum (the intensity) in the Sum normalization method. The main normalization methods agree in a quite definite way with the geometry of the value space, that is, with the multidimensional cloud of initial data. The measurement scales and the geometry of the value space, however, are not coordinated in any way.


Obviously, compression-stretching and shifting of the space of individual dimensions are not prohibited, since the attributes are independent. Note also that all linear normalization methods are linear transformations of each other. Therefore, a transformation of the normalized values is possible, but in this case it is necessary to justify the transformations and to harmonize the normalized values of the various attributes with each other in order to avoid unpredictable results and consequences. For multivariate normalization procedures, it is impossible to simultaneously adjust the share of a feature of an individual attribute (compression and shift of normalized values) and the correspondence of the different scales of different features. The problem is not solved in principle, and only a compromise solution is possible. Among the multivariate normalization methods, only Max-Min converts all features into the interval [0, 1], so that the normalized values of all attributes are interpreted in the same way, as fractions of the range of natural values. However, it is not entirely clear why features with different ranges of natural values should become identical after Max-Min normalization:

\begin{pmatrix} 10 & 500 \\ 9 & 300 \\ 8 & 100 \end{pmatrix} \;\xrightarrow{\text{Max-Min}}\; \begin{pmatrix} 1 & 1 \\ 0.5 & 0.5 \\ 0 & 0 \end{pmatrix}.   (7.3)

With regard to example (7.1), matrix (7.2) reduced to the range [0.2, 1] has the form:

V = \begin{pmatrix} 1 & 0.6 \\ 0.5 & 0.2 \\ 0.2 & 1 \end{pmatrix}.   (7.4)

Below we present a technique for reducing to a common range (a common measurement scale) using a linear transformation of the normalized values of the individual attributes.
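As a numerical preview of this technique, the reduction of matrix (7.2) to the common range [0.2, 1] can be sketched as a per-column linear map (this is the same formula that later appears as Eq. (7.8)):

```python
import numpy as np

# Max-normalized matrix (7.2)
R = np.array([[1.0, 0.9],
              [0.5, 0.8],
              [0.2, 1.0]])

I, Z = 0.2, 1.0                        # target common range [I, Z]
lo, hi = R.min(axis=0), R.max(axis=0)  # per-feature lower and upper levels

# Map each column linearly onto [I, Z]
V = (R - lo) / (hi - lo) * (Z - I) + I
print(V)  # coincides with matrix (7.4)
```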

7.2 The Domains Displacement of Normalized Values of Various Attributes

For linear normalization methods, the compression and shift coefficients are individual for each feature and are determined by the set of values of the feature for the various alternatives, "measured" in the scale of that feature [5–10]. Compression-stretching and shift of individual features during normalization lead to a deformation of the multidimensional cloud of initial data. Let us turn again to an illustration of the relative position of the domains of normalized values of various features. Figure 7.1 (see also Chap. 3) presents the normalized values and the relative positions of the domains of five different attributes for the seven basic linear normalization methods.

Fig. 7.1 Normalized values and relative position of domains of five different attributes relative to each other for basic linear normalization methods

The jumps in the relative location of the domains of the normalized values of the various attributes for the different normalization methods in Fig. 7.1 are obvious. We observe, for example, a strong difference in the relative positions of the domains of the fourth and fifth attributes for the Max, dSum, and Vec methods. For the Max normalization method, the "upper" values of all attributes are the same and equal to 1. The "lower" values of the various attributes differ, in some cases by more than a factor of two, for example, for the second and fifth attributes. A similarly strong bias and a significant difference in the ranges of the domains of various attributes also occur for the Sum and Vec normalization methods. For the dSum normalization method, the "upper" values of all attributes are the same and equal to 1, as for the Max normalization method. However, the difference in the "lower" values of the various attributes is not as significant as for Max. In this context, dSum is more efficient than the Max-method (at least in this example). The extent and position of the domain of the fifth attribute are indicative: whereas for the Max, Sum, and Vec normalization methods the range of the domain of the fifth attribute is the largest and differs significantly, for the dSum normalization method the range of the domain of the fifth attribute is comparable to (and even less than) the ranges of the domains of the other attributes. A shift of the domains of the normalized feature values also occurs for the centered Z-score and mIQR methods. Only for the Max-Min normalization method are the domains of normalized values of the various features not shifted. However, as shown in example (7.3) of the previous section, for Max-Min there is a problem of matching the ranges of different features.
If the ranges of normalized values for different criteria are shifted relative to each other, the contributions of such criteria to the performance indicator of the alternatives will differ. This causes one or more criteria to take precedence over the rest even before the criteria weights are assigned. Elimination of the bias is possible by a consistent transformation of the normalized values.

7.3 Attribute Equalizer

It is easy to see (see, for example, Fig. 7.1) that multidimensional normalization can be interpreted as a device, or a computer program, that selectively corrects the signal amplitude depending on its frequency characteristics: an equalizer. Indeed, each of the normalization methods generates a certain range of signal (values) for each feature. An equalizer is a powerful tool for obtaining a variety of "timbres." However, MCDM researchers have only a limited number of normalization methods (linear and non-linear) at their disposal. Let us set the task of implementing an equalizer for multidimensional normalization, allowing the "amplitude-frequency" characteristics of a feature to be corrected selectively in accordance with the choice of the decision maker (the musician). What is the choice of the decision maker? The normalized values must be corrected in such a way that the basic principles of normalization [6, 11] (see also Sect. 3.1) are fulfilled:

Principle 1: The relative gap between the data of the same indicator should remain constant.
Principle 2: The relative gap between different indicators should remain constant.
Principle 3: The maximum values after normalization should be equal.

7.3.1 Transformation of Normalized Values Using the Fixed Point Technique

Bringing the attribute values (natural or normalized) into a new coordinate system with a zero initial value by shifting allows the reference point to be used as a fixed point when scaling:

u_{ij} = a_{ij} - a_j^{\min},   (7.5)

v_{ij} = r_{ij} - r_j^{\min}.   (7.6)

This procedure makes it easy to subsequently set the necessary proportions between the scales of the various attributes. If the normalized values are r_{ij} ∈ [0, 1], then v_{ij} ∈ [0, 1] and v_j^{\min} = 0. Scaling of the shifted normalized values, u_{ij} = v_{ij}·k_j, preserves the relative gap between the data (preservation of dispositions, see Property 1, Sect. 4.3). By choosing the scaling factor, any domain span not exceeding 1 can be achieved, 0 < rng(u_j) ≤ 1; obviously, k_j ≤ 1/v_j^{\max}. By choosing a domain shift d_j, 0 < d_j ≤ 1 − u_j^{\max}, a given location of the domain in [0, 1] can be achieved. In particular, the Max-Min normalization method, with scaling factor 1/(a_j^{\max} − a_j^{\min}), follows exactly this algorithm: in accordance with its calculation formula, the values are first shifted to a fixed point, and the subsequent scaling equalizes the ranges of the normalized values of all attributes. Varying k_j and d_j for each feature is the implementation of the equalizer. For practical tasks, an equalizer that coordinates the measurement scales of the various features must be implemented.
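The shift-then-scale steps above can be sketched as a small "equalizer" function; the names `equalize`, `k`, and `d` are illustrative, not part of the book's notation:

```python
import numpy as np

def equalize(R, k, d):
    """Per-feature 'equalizer': shift each column to zero (fixed point, cf.
    Eq. (7.6)), scale by k_j, then shift by d_j (k and d are per-feature arrays)."""
    V = R - R.min(axis=0)   # fixed-point shift: v_ij = r_ij - r_j^min
    return V * k + d        # u_ij = v_ij * k_j + d_j

R = np.array([[1.0, 0.9],
              [0.5, 0.8],
              [0.2, 1.0]])
V = R - R.min(axis=0)
k = 1.0 / V.max(axis=0)     # k_j = 1 / v_j^max gives each column a span of exactly 1
U = equalize(R, k, np.zeros(2))
print(U.min(axis=0), U.max(axis=0))  # each column now spans [0, 1]
```

With this choice of k_j and d_j = 0 the procedure reproduces Max-Min normalization; other choices move and compress each domain individually.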

7.4 Elimination of Displacement in the Domains of Normalized Values: IZ-Method

The problem of the shift of the domains of normalized values of various attributes is solved by the IZ transformation method [12, 13]. The main idea of the IZ transformation of normalized values is to align the domains for the various normalization methods. On the one hand, the use of a particular normalization method for a specific problem may be motivated by a particular meaningful interpretation of the normalized values, which represent different proportions (quantities). On the other hand, the domains of normalized values may be offset for different attributes. The IZ transformation solves both problems. To eliminate the displacement, the "upper" and "lower" levels of the normalized attribute values of the alternatives are equalized for all criteria; to harmonize the measurement scales, the normalized values are converted to one conditionally common scale for all attributes. Assume (Step 1) that the decision matrix is normalized using one of the normalization methods (linear or non-linear), r_{ij} = Norm(a_{ij}). If necessary, the inversion of cost attributes into benefit attributes is performed using the ReS-algorithm [14] (see also Chap. 5). Let us assume that the range of values of the new scale is defined and represents a fixed interval [I, Z] ⊂ [0, 1]. The transformation of the normalized values is performed using the fixed point technique. To do this, we shift the domain into a new coordinate system with a zero initial value:

v_{ij} = r_{ij} - r_j^{\min}.   (7.7)

Next, we perform a linear transformation of the normalized values of all attributes into the interval [I, Z] using stretch-compression and shift operations:


Fig. 7.2 Step-by-step IZ transformation of normalized values. Decision matrix D0

u_{ij} = \frac{r_{ij} - r_j^{\min}}{r_j^{\max} - r_j^{\min}} \,(Z - I) + I, \qquad \forall i = 1,\dots,m;\; \forall j = 1,\dots,n.   (7.8)

As a result, we obtain new normalized values u_{ij} ∈ [I, Z] ⊂ [0, 1]. A step-by-step illustration of the IZ transformation of normalized values is shown in Fig. 7.2. In Fig. 7.2, the Max-method was used for normalization, r_{ij} = Max(a_{ij}), and the range of values of the new scale is defined as I = min(r_j^{\min}), Z = max(r_j^{\max}) = 1. The second and fifth criteria are cost criteria and require value inversion, which is performed using the ReS-algorithm. Thus, the IZ transformation procedure allows one to set the necessary proportions between the scales of the various attributes, perform the scaling, and equalize the ranges of the normalized values. If any of the linear methods is used for normalization, then an equivalent implementation of the IZ transformation can be performed on the natural attribute values using Max-Min normalization. The calculation formula (7.8) for the benefit criteria and the cost criteria is then as follows:

u_{ij} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}} \,(Z - I) + I, \qquad \forall i = 1,\dots,m;\; \forall j \in C^{+}.   (7.9)

u_{ij} = \frac{a_j^{\max} - a_{ij}}{a_j^{\max} - a_j^{\min}} \,(Z - I) + I, \qquad \forall i = 1,\dots,m;\; \forall j \in C^{-}.   (7.10)
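A compact sketch of Eqs. (7.9)–(7.10) applied directly to natural values; the function name and the choice of which feature is a cost criterion are illustrative assumptions, not taken from the text:

```python
import numpy as np

def iz_from_natural(A, benefit, I, Z):
    """IZ transformation on natural values, cf. Eqs. (7.9)-(7.10):
    benefit criteria use (a - min)/(max - min), cost criteria (max - a)/(max - min),
    both then rescaled onto the common interval [I, Z]."""
    lo, hi = A.min(axis=0), A.max(axis=0)
    frac = np.where(benefit, (A - lo) / (hi - lo), (hi - A) / (hi - lo))
    return frac * (Z - I) + I

A = np.array([[10.0, 90.0],
              [5.0, 80.0],
              [2.0, 100.0]])          # decision matrix (7.1)
benefit = np.array([True, False])     # assumption: feature 2 treated as a cost criterion
U = iz_from_natural(A, benefit, 0.2, 1.0)
print(U)
```

For the cost column the best (smallest) natural value maps to Z and the worst to I, so no separate ReS inversion step is needed in this formulation.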


Given that, by property P.1, the dispositions of the normalized values are the same for all linear normalization methods, the result of the transformation is identical whichever linear normalization method is used. IZ normalizations are "isotropic": the coverage area of the multidimensional cloud of normalized values is an m-dimensional cube. The IZ-method converts multidimensional data to a single scale and represents a class of multidimensional normalization methods that convert to isotropic scales. The IZ transform is a linear method with a bias (u_{ij} = k·r_{ij} + b), and all the invariant properties of linear transformations P.1–P.4 presented in Sect. 4.3 are satisfied for it. In particular, the invariant property P.1 is important, according to which the IZ transform preserves the dispositions of the natural values. According to the invariant property P.2, the choice of the transformation interval [I, Z] does not affect the ranking result if one of the linear methods is used for normalization and a linear function (e.g., SAW) or a homogeneous function (e.g., TOPSIS) is used for aggregation. Under such conditions, the results of ranking with the IZ-method are the same for all choices of the scale [I, Z], including [0, 1] (the Max-Min method). In the case of non-linear aggregation procedures (for example, WPM, COPRAS) or non-linear normalization, for example log(a), the choice of the interval [I, Z] does affect the ranking result; this influence is demonstrated by the examples presented in Chap. 9. For non-linear aggregation methods, the IZ-method provides for the choice of a conditionally common normalization scale [I, Z] that has the same interpretation of the normalized values as the main linear normalization methods.
Thus, the IZ-method is a method for transforming normalized values that allows you to align the boundaries of the domains of various attributes and thereby eliminate the priority of individual attributes during aggregation.

7.5 Choice of a Conditionally General Scale [I, Z] of Normalized Values

The key feature of the IZ-method is the choice of a common scale of normalized values that is consistent for all attributes and has the same interpretation. This allows normalized values of the same order to be aggregated, rather than fractions of different quantities. In choosing the region [I, Z] of normalized values common to all attributes, an uncertainty arises due to the possibility of choosing different scales. The uncertainty is also due to the shift of the domains of the various attributes relative to each other and the different magnitudes of their extents in the chosen scale (Fig. 7.1). A meaningful interpretation of the normalized values for the main linear normalization methods was presented above in Sect. 4.5. Accordingly, an IZ normalization will have the same interpretation if the boundary values of the region [I, Z] are the same as the boundaries of the range of normalized values of the corresponding linear normalization method. The IZ normalization values will correspond to the proportion of the attribute of the ith alternative relative to the largest attribute value (Max normalization method) if I and Z are the characteristic boundary values of the range of r_{ij} = Max(a_{ij}). A similar approach is used for the other linear normalization methods, r_{ij} = Sum(a_{ij}), r_{ij} = Vec(a_{ij}), etc. In those cases, the values of the IZ transformation will be interpreted, respectively, as the intensity of the feature of the ith alternative, as the share of the feature relative to the diameter of the m-dimensional rectangle constructed from the feature values of all alternatives, and so on. Accordingly, the different IZ normalizations will be denoted as the successive normalizations Max-IZ, Sum-IZ, Vec-IZ, etc. The lower (and similarly the upper) limits of the normalized values of the n attributes can differ:

r^{\min} = (r_1^{\min}, r_2^{\min}, \dots, r_n^{\min}) = (\min_i r_{i1}, \min_i r_{i2}, \dots, \min_i r_{in}),   (7.11)

r^{\max} = (r_1^{\max}, r_2^{\max}, \dots, r_n^{\max}) = (\max_i r_{i1}, \max_i r_{i2}, \dots, \max_i r_{in}).   (7.12)

Therefore, the choice of the transformation region [I, Z] is ambiguous. The limit values are defined, respectively, as:

I_{\min} = \min_j r_j^{\min}, \qquad I_{\max} = \max_j r_j^{\min}.   (7.13)

Z_{\min} = \min_j r_j^{\max}, \qquad Z_{\max} = \max_j r_j^{\max}.   (7.14)

In order to reduce the potential influence of any one attribute, averaging is used in various ways: the harmonic mean (HM), geometric mean (GM), arithmetic mean (AM), quadratic mean (QM, root mean square, RMS), or the median value:

HM = n \Big/ \sum_{j=1}^{n} \frac{1}{x_j},   (7.15)

GM = \Big( \prod_{j=1}^{n} x_j \Big)^{1/n},   (7.16)

AM = \frac{1}{n} \sum_{j=1}^{n} x_j,   (7.17)

QM = \Big( \frac{1}{n} \sum_{j=1}^{n} x_j^2 \Big)^{1/2}.   (7.18)

For the different means, the following inequalities hold:

\min \le HM \le GM \le AM \le QM \le \max.   (7.19)

The following rational options are proposed for use:

1. As I, take the smallest (worst) value of the lower level of the alternatives over all criteria, and as Z, the largest (best) value of the upper level:

I_1 = \min_j r_j^{\min}, \qquad Z_1 = \max_j r_j^{\max}.   (7.20)

In this case, the ranking is carried out taking the influence of "strong" alternatives into account as much as possible. This is because the range of values of the alternatives for all criteria increases, and the "lower"-level values of the alternatives become more distant from the "upper"-level values for each of the criteria.

2. As I, take the largest (best) value of the lower level of the alternatives over all criteria, and as Z, the largest (best) value of the upper level:

I_2 = \max_j r_j^{\min}, \qquad Z_2 = \max_j r_j^{\max}, \qquad I_2 < Z_2.   (7.21)

In this case, the ranking is carried out taking the influence of "weak" alternatives into account as much as possible. This is because the range of values of the alternatives for all criteria decreases, and the "lower"-level values of the alternatives become close to the "upper"-level values for each of the criteria.

3. As I, take the mean value of the lower level of the alternatives over all criteria, and as Z, the mean value of the upper level:

I_3 = \mathrm{mean}_j(r_j^{\min}), \qquad Z_3 = \mathrm{mean}_j(r_j^{\max}), \qquad I_3 < Z_3.   (7.22)

In this case, the scales agree within the standard deviation of the mean.

4. As I, take the median value of the lower level of the alternatives over all criteria, and as Z, the median value of the upper level:

I_4 = \mathrm{median}_j(r_j^{\min}), \qquad Z_4 = \mathrm{median}_j(r_j^{\max}), \qquad I_4 < Z_4.   (7.23)

In this case, the scales are consistent within the standard deviation of the median. The following options are justified as well: [I_1, \min_j r_j^{\max}] and [I_2, \min_j r_j^{\max}]. It is also possible to make a choice determined by the context of the decision-making problem, in which the interval [I, Z] is set by an expert: 0 ≤ I_5 ≤ Z_5 ≤ 1. If [0, 1] is taken as [I, Z], then the IZ normalization is similar to the result of the Max-Min transformation. An illustration of data normalization using the IZ-method for various choices of [I, Z] is shown in Fig. 7.3. The IZ-method converts multidimensional data into a single common scale and is a class of multidimensional normalization methods that convert data to isotropic scales. The various options for choosing the boundaries of the domain [I, Z] determine a different conditionally common scale for all features and a different interpretation of the normalized values.

Fig. 7.3 An illustration of data normalization using the IZ-method for various choices of fixed domain boundaries. (1): I = min(min(V)), (2): I = max(min(V)), Z = max(max(V)). Input data: decision matrix D0 [8×5]
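The four options (7.20)–(7.23) can be sketched for the small Max-normalized matrix (7.2); the dictionary keys simply label the corresponding equations:

```python
import numpy as np

R = np.array([[1.0, 0.9],
              [0.5, 0.8],
              [0.2, 1.0]])             # Max-normalized matrix (7.2)
lo, hi = R.min(axis=0), R.max(axis=0)   # per-feature lower and upper levels

IZ_options = {
    "(7.20)": (lo.min(), hi.max()),             # worst lower level, best upper level
    "(7.21)": (lo.max(), hi.max()),             # best lower level, best upper level
    "(7.22)": (lo.mean(), hi.mean()),           # mean levels
    "(7.23)": (np.median(lo), np.median(hi)),   # median levels
}
print(IZ_options)
```

Here lo = [0.2, 0.8] and hi = [1, 1], so the four candidate scales are [0.2, 1], [0.8, 1], [0.5, 1], and [0.5, 1].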

7.6 Invariant Properties of the IZ-Method

The IZ-method is a linear data transformation. Therefore, all the properties of linear transformations (Sect. 4.3) hold for the IZ-method:

7 Coordination of Scales of Normalized Values: IZ-Method

Property 1. The disposition of values is invariant under a linear transformation.

Property 2. A linear transformation of all scales $u_{ij} = k \cdot r_{ij} + b$ with fixed coefficients k and b does not change the ranking if a linear function (e.g., SAW) is used to aggregate the attributes.

Property 3. For a homogeneous aggregation function (e.g., TOPSIS, GRA), the performance indicators of the alternatives are invariant under a linear transformation with fixed coefficients ($u_{ij} = k \cdot r_{ij} + b$).

Consequence: If a linear or homogeneous function (SAW, TOPSIS, GRA) is used for aggregation, the result of ranking alternatives with the IZ-method is the same for any choice of the scale [I, Z] and of the Norm() normalization method at the first step of the IZ-method, and is identical to the result obtained with the Max-Min normalization method.

Figure 7.3 additionally presents the results of aggregating the normalized attribute values (the numbers of the alternatives of ranks I–III) using the SAW and TOPSIS (L2-metric) aggregation methods. The example demonstrates the invariance of the ranking under a linear transformation of isotropic scales when a linear or homogeneous aggregation function is used. In the case of a non-linear aggregation function, the choice of the domain boundaries [I, Z] affects the rating of alternatives; relevant examples are presented in Sect. 7.8 below. Also, some attribute aggregation methods, such as COPRAS, WPM, and WASPAS, cannot handle the zero values that arise with the Max-Min method.
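Property 2 is easy to check numerically. Here is a small sketch (random data, SAW aggregation; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.random((8, 5))            # some normalized decision matrix, 8 alternatives
w = np.full(5, 0.2)               # equal criteria weights, sum(w) = 1

def saw_rank(x, w):
    """Order of the alternatives by decreasing SAW score."""
    return np.argsort(-(x @ w))

k, b = 0.5, 0.5                   # one fixed linear transform for all scales,
u = k * r + b                     # e.g. remapping [0, 1] -> [0.5, 1]

# the SAW scores change affinely (q_u = k*q_r + b), so the order is kept
assert np.allclose(u @ w, k * (r @ w) + b)
assert np.array_equal(saw_rank(r, w), saw_rank(u, w))
```

The same check with a non-linear aggregation function (e.g., WPM with a shift b ≠ 0) would generally fail, which is exactly the point of Sect. 7.8.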

7.7 Generalization of the IZ-Method

The generalization of the IZ-method is achieved for the case of expert assignment of domain boundaries for each attribute, $I = [I_1, I_2, \ldots, I_n]$, $Z = [Z_1, Z_2, \ldots, Z_n]$, $I_j < Z_j$:

$u_{ij} = \left(v_{ij} - v_j^{\min}\right)\dfrac{Z_j - I_j}{v_j^{\max} - v_j^{\min}} + I_j, \quad \forall i = 1, \ldots, m; \; \forall j = 1, \ldots, n.$  (7.24)

Such a generalization allows an expert to regulate the mutual arrangement of domains. This is similar to controlling selection by setting weights, but differs in the effect it has on the domain structure. An example of normalization with the assignment of domains for each of the attributes is shown in Fig. 7.4a, c.
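A sketch of Eq. (7.24) as code follows; note that the subtraction of the column minimum $v_j^{\min}$ is my reading of the scaling that maps each attribute exactly onto $[I_j, Z_j]$, and the data are illustrative:

```python
import numpy as np

def iz_generalized(v, I, Z):
    """Eq. (7.24): map the j-th attribute onto the expert-assigned domain
    [I_j, Z_j], preserving the disposition of values within each column."""
    I, Z = np.asarray(I, float), np.asarray(Z, float)
    lo, hi = v.min(axis=0), v.max(axis=0)
    return I + (v - lo) * (Z - I) / (hi - lo)

# toy step-1 normalized values, three attributes
v = np.array([[0.70, 0.55, 0.90],
              [1.00, 0.80, 0.60],
              [0.85, 1.00, 1.00]])
Z = np.array([0.90, 0.95, 1.00])  # upper levels shifted relative to each other
I = Z - 0.5                        # all domains of equal length 0.5
u = iz_generalized(v, I, Z)        # column j now spans exactly [I_j, Z_j]
```

Each column of `u` spans its own assigned interval, which is how the expert regulates the mutual arrangement of the domains.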


Fig. 7.4 Generalization of the IZ-method of normalization for the case of variable boundaries of domains of various attributes (a), (c). Input data: decision matrix D0 [8×5]

As an example of choice control, consider the following model. Let the weights of five criteria differ sequentially from each other by 5%. In the list sorted in ascending order, we get the following weights (with rounding): w1 = [0.181 0.190 0.200 0.210 0.220]. Let us also construct an ordered sequence of domains of the same length, likewise shifted relative to each other by 5% along the upper level. We get the following values of the Z vector (with rounding): Z = [0.823 0.864 0.907 0.952 1.000]. Let the length of all domains be the same and equal to 0.5 units; then the vector I = [0.323 0.364 0.407 0.452 0.500].

Let us perform IZ normalization of the decision matrix D0 from Table 3.1, in the first case with the domain boundaries fixed for all attributes, [I, Z] = [0.5, 1], and in the second case with the variable boundaries [Ij, Zj]. The illustration is presented in Fig. 7.4b and c, respectively. The lower part of the figure shows the values of the convolution of the normalized values with the weights w1, which set the criteria priority, for the first case, and with fixed weights wj = 0.2 (no criteria priority) for the second case. The weighted normalized values and the ranking results are identical in both variants.

The generalization of the IZ-method to individual scales [Ij, Zj] for each attribute makes it possible to assign the IZ-method to the class of multidimensional normalization methods that convert data to anisotropic scales.

7.8 IZ Transformation for Non-linear Aggregation Methods: Example for COPRAS, WPM, and WASPAS Methods

This section presents the results of ranking the alternatives for decision matrices Dq that are highly sensitive to the parameters of the MCDM model: the normalization method and the aggregation method. The MCDM rank model includes four normalization methods (Max, Sum, Vec, dSum) and three non-linear aggregation methods (COPRAS, WPM, WASPAS). The results are needed to analyze the impact of IZ transformations on the ranking. The conditionally general scale [I, Z] for each normalization method is chosen as the average value according to option (3) described in Sect. 7.5 above. This yields four variations of the IZ-method: IZ-Max(3), IZ-Sum(3), IZ-Vec(3), and IZ-dSum(3), which are used in conjunction with the three non-linear aggregation methods. The decision matrices Dq were generated in accordance with the methodology described in Sect. 6.4 above. The weights of the attributes during aggregation are assumed to be equal.

COPRAS Method of Aggregation: The algorithm of the COPRAS method includes division by the sum of the normalized cost attributes. If one of the alternatives has all of its cost attributes at the lower level, then the Max-Min normalization method produces zeros and division by zero must be excluded. In this situation, choosing a non-zero "lower" level I in the IZ-method solves the problem directly. In accordance with the indicated technique, a decision matrix D1 whose ranking is sensitive to the choice of normalization was obtained:

D1 = (a_ij) =

| 5431.8  75.5  478.9  171.6  1196.7 |
| 5697.2  83.0  580.7  166.0  1864.5 |
| 6366.0  76.2  525.0  176.7  1605.7 |
| 5894.8  80.6  543.2  150.4  1533.5 |
| 5888.5  76.6  582.8  172.1  2612.5 |
| 5112.9  71.5  501.6  141.0  2265.9 |
| 6010.9  73.5  483.3  173.6  1368.0 |
| 4507.2  76.8  635.8  178.0  2651.2 |

(7.25)

The results of normalization and ranking by the COPRAS method are shown in Fig. 7.5. For the four variations of the IZ-method (IZ-Max(3), IZ-Sum(3), IZ-Vec(3), IZ-dSum(3)), the alternatives of 1st rank have the numbers 7, 1, 1, 3, respectively. The label (3) in the method name indicates that the boundaries of the conditionally general scale [I, Z] are chosen as average values. The differences in ranking are primarily due to the high sensitivity of the decision matrix. This sensitivity is determined by the low values of the relative efficiency index of the alternatives dQp of ranks I–III, the values of which are presented in Table 7.1.

Fig. 7.5 Rank reversal for a different choice of the region of transformation [I, Z]. COPRAS method, decision matrix D1 by Eq. (7.25)

Table 7.1 Ranks of alternatives and relative ranking gap

Norm-method  I-Rank  dQ1,%  II-Rank  dQ2,%  III-Rank  dQ3,%
Max          1       3.7    7        11.0   3         19.5
Sum          1       5.7    7        13.0   3         15.5
Vec          1       5.4    7        12.7   3         16.0
dSum         3       5.0    7        4.8    1         17.4
IZ-Max(3)    7       0.0    1        0.0    3         25.0
IZ-Sum(3)    1       0.1    7        0.3    3         24.9
IZ-Vec(3)    1       0.1    7        0.2    3         24.9
IZ-dSum(3)   3       0.1    7        0.2    1         24.7

The dQp is the relative (given in the Q scale) gain or loss of the performance score for an ordered list of alternatives. According to the table, the relative rating gap is less than 1% for the alternatives of ranks I–III. Analysis of the results does not reveal a priority of any normalization method. This indicates that, in addition to the normalization method, an additional factor significantly influences the ranking of alternatives. Such a factor is the local priorities of the alternatives for the various attributes, determined by the initial values of the decision matrix. The correct outcome of such an analysis is that alternatives A1, A3, and A7 are recommended to the decision maker for decision-making. Similar results (Figs. 7.6 and 7.7; Tables 7.2 and 7.3) were also obtained for the WPM aggregation method with decision matrix D2 by Eq. (7.26) and for WASPAS with decision matrix D3 by Eq. (7.28).


Fig. 7.6 Rank reversal for a different choice of the region of transformation [I, Z]. WPM aggregation method, decision matrix D2 by Eq. (7.26)

Fig. 7.7 Rank reversal for different selection of the region of transformation [I, Z]. WASPAS method, decision matrix D3 by Eq. (7.28)

WPM Method of Aggregation: The algorithm of the WPM method excludes the processing of zero and negative attribute values. Therefore, the Max-Min normalization method cannot be integrated into the model structure in conjunction with WPM. In accordance with the technique described above, a decision matrix D2 whose ranking is sensitive to the choice of normalization was obtained:


Table 7.2 Ranks of alternatives and relative rating gap. WPM method

Norm-method  I-Rank  dQ1,%  II-Rank  dQ2,%  III-Rank  dQ3,%
Max          1       11.8   5        4.0    8         5.9
Sum          1       11.8   5        4.0    8         5.9
Vec          1       11.8   5        4.0    8         5.9
dSum         6       46.8   8        3.9    1         20.8
IZ-Max(3)    1       0.5    6        0.2    8         36.8
IZ-Sum(3)    8       0.0    6        0.2    1         36.6
IZ-Vec(3)    6       0.0    8        0.1    1         36.8
IZ-dSum(3)   1       1.9    6        0.5    8         36.8

Table 7.3 Ranks of alternatives and relative rating gap. WASPAS method

Norm-method  I-Rank  dQ1,%  II-Rank  dQ2,%  III-Rank  dQ3,%
Max          7       3.7    4        32.0   3         15.5
Sum          7       3.7    4        33.7   3         11.9
Vec          7       3.7    4        33.5   3         12.3
dSum         7       1.1    4        2.6    3         58.2
IZ-Max(3)    4       0.0    7        0.0    3         53.4
IZ-Sum(3)    3       0.1    4        0.0    7         53.2
IZ-Vec(3)    3       0.1    4        0.0    7         53.3
IZ-dSum(3)   7       0.0    4        0.1    3         53.3

D2 = (a_ij) =

| 6469.7  80.7  641.4  148.2  1179.0 |
| 5090.7  77.7  544.1  165.9  2385.2 |
| 4827.1  76.5  564.1  151.8  1407.3 |
| 5753.1  71.5  563.5  167.6  1565.5 |
| 6074.2  74.0  622.8  154.8  1194.9 |
| 5298.0  82.7  475.4  159.0  2434.5 |
| 4516.0  80.3  581.5  157.5  1669.4 |
| 4695.7  74.6  575.1  178.0  1214.2 |

(7.26)

The results of normalization and ranking by the WPM method are shown in Fig. 7.6. When the non-displacement normalization methods (Max, Sum, Vec) are used in conjunction with the WPM aggregation method, the performance scores of the alternatives are scaled by the same factor, so the ranking of alternatives is the same for these normalization methods:

$Q_i\left(u_{ij}\right) = \prod_{j=1}^{n}\left(k \cdot r_{ij}\right)^{w_j} = \prod_{j=1}^{n} k^{w_j} \cdot \prod_{j=1}^{n} r_{ij}^{w_j} = k^{\sum_{j=1}^{n} w_j} \cdot \prod_{j=1}^{n} r_{ij}^{w_j} = k \cdot Q_i\left(r_{ij}\right).$  (7.27)

The relative indicator dQ does not change either (see Table 7.2).
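Equation (7.27) is easy to verify numerically. A minimal sketch (random positive data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.random((8, 5)) + 0.1      # strictly positive normalized values (WPM needs > 0)
w = np.full(5, 0.2)               # weights summing to 1

def wpm(x, w):
    """WPM performance score: weighted product of the attribute values."""
    return np.prod(x ** w, axis=1)

k = 0.7                            # one common scaling factor, as for Max/Sum/Vec
q_r, q_u = wpm(r, w), wpm(k * r, w)

# since sum(w) = 1, every score is scaled by the same k,
# so the ranking (and the relative gaps dQ) is unchanged
assert np.allclose(q_u, k * q_r)
assert np.array_equal(np.argsort(-q_r), np.argsort(-q_u))
```

The invariance holds only because the transformation is a pure scaling; a shift $u_{ij} = k\,r_{ij} + b$ with $b \ne 0$ does not commute with the product and can change the WPM ranking.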


WASPAS Method of Aggregation: The algorithm of the WASPAS method is formed by a combination of WSM and WPM, and, like WPM, excludes the processing of zero and negative feature values. Therefore, the Max-Min normalization method cannot be integrated into the model structure. In accordance with the technique described above, a decision matrix D3 that is sensitive to the change in rating from normalization is obtained:

D3 = (a_ij) =

| 5378.8  76.6  628.0  155.6  1487.6 |
| 4582.9  71.6  610.7  175.1  2199.1 |
| 5743.4  80.5  542.7  164.9  1856.6 |
| 6171.9  75.6  486.5  143.2  1175.8 |
| 5290.2  80.6  628.1  136.4  1552.1 |
| 5156.1  74.0  604.5  154.8  1715.5 |
| 6145.5  74.4  468.0  143.3  1125.0 |
| 4375.4  77.7  588.6  174.6  2387.9 |

(7.28)

The results of normalization and ranking by the WASPAS method are shown in Fig. 7.7. Since WASPAS is a linear combination of the WSM and WPM methods, when the non-displacement normalization methods (Max, Sum, Vec) are used in conjunction with WASPAS, the performance indicator of the alternatives is scaled by the same factor. Therefore, the ranking of alternatives is the same for these normalization methods.

The ranking results for the class of anisotropic normalizations differ significantly from those for the class of isotropic normalizations. The first three normalization methods are linear and belong to the class of anisotropic normalizations. The ranking results for them are the same, which only allows us to conclude that the decision matrix has little sensitivity to the choice of a normalization method from this class. The rating of alternatives based on the class of isotropic normalizations changes depending on the choice of the interval [I, Z], i.e. on the choice of the first-step normalization method. This only allows us to conclude that the decision matrix is highly sensitive to the choice of a normalization method from this class.

The distinguishability of the ratings of the alternatives is weak. In such situations, it is recommended that both competing alternatives be presented to the decision maker. We also note the high sensitivity of the ranking to the estimates of the decision matrix: the attribute values in Eqs. (7.25), (7.26), and (7.28) are given to one decimal place, and when rounded to integer values the ranking result will differ slightly.

Comparison of the numerical results of the relative characteristic dQ of the performance indicator for the considered "aggregation-normalization" pairs does not reveal the priority of any normalization method. This indicates that, in addition to the normalization method, the ranking of alternatives is significantly influenced by an additional factor. According to the author, such a factor may be the local priorities of the alternatives for the various attributes (the ratios of the priorities of different alternatives across attributes), determined by the initial values of the decision matrix. A discussion of this problem and the generation of highly sensitive problems are presented in Chap. 6 above.

Thus, when non-linear procedures are used to aggregate the attributes of alternatives, the choice of the interval [I, Z] affects the ranking result. For non-linear aggregation methods, the IZ-method provides the choice of a conditionally general normalization scale [I, Z] that has the same interpretation of the normalized values as the main linear normalization methods.

7.9 Conclusions

The following conclusions summarize the essence of the applied methods of multivariate normalization:

1. The IZ-method converts multidimensional data to a single common scale and belongs to the class of multivariate normalization methods that convert data to isotropic scales and eliminate domain bias while preserving the dispositions of the natural attribute values. The IZ-method allows binding to the scale of any attribute or to a scale chosen by the decision maker.
2. The choice of the conditionally general scale does not affect the rating if the attribute aggregation method is a linear or homogeneous function.
3. When a non-linear attribute aggregation method is used (e.g., WPM, WASPAS, COPRAS), the choice of the conditionally common scale affects the ranking of alternatives.
4. Linear methods of multidimensional normalization bring data either to one conditionally common scale (isotropic normalizations: Max-Min, IZ-Norm) or to different scales (anisotropic normalizations: Max, Sum, Vec, dSum, Z-score). In both cases, multivariate normalization admits the possibility that one or another attribute gains priority and determines the result of aggregating the particular indicators; in this sense the problem is not solvable in principle.
5. One of the significant causes of ranking variation under anisotropic normalization is the displacement of the areas of normalized values of the different attributes relative to each other.
6. The degree of differentiation of the final rating of alternatives is determined primarily by the local priorities of the alternatives and depends only weakly on the choice of the normalization method. If the degree of differentiation is low, the alternatives should be considered indistinguishable.
Consequences:

• The Sum and Vec normalization methods should not be used for multivariate normalization, or should be used only after an additional bias analysis, because these methods produce potentially large displacements of the different attribute domains relative to each other. Sum and Vec are good one-dimensional (vector) normalization methods with an interpretation in terms of contribution intensity and projective angles.
• The Max and dSum normalization methods equalize the maximum values of all attributes (=1); for these methods, only the lower boundary of the domains is shifted. As a result, when choosing the best solution (maximizing the integral indicator), the shift of the lower levels has little effect on the result for the 1st-rank alternative. However, as shown in the examples of Chap. 5, when alternatives compete, rank inversion is still possible due to the displacement of the lower boundary of the regions.
• The Max-Min normalization method has no bias in the areas of normalized values of the different attributes (isotropic normalization), but the presence of zero values makes it unusable with some attribute aggregation methods (WPM, WASPAS, COPRAS).

A critical analysis of multivariate normalization methods leads to the conclusion that, in the absence of criteria, the preference for particular normalizations is relative. In such a situation, it is advisable to be guided by two main principles of normalizing multidimensional data:

1. Preserve the natural value dispositions of the alternatives' attributes for each criterion.
2. Ensure equal contributions of the different criteria to the performance of the alternatives.

The effectiveness of the IZ normalization method rests on the combination of these positive properties, which constitute the basic principles (necessary properties) of multidimensional data normalization. Since the IZ-method transforms the normalized values of the various attributes independently, a limitation of its application is dependence among the criteria. The prospects of the IZ-method of normalization require further research.
First of all, it is necessary to determine the dependence of the results on the choice of the boundaries of the region [I, Z], which is equivalent to binding to the scale of any feature or to the scale chosen by the decision maker.

References

1. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey.
2. Triantaphyllou, E. (2000). Multi-criteria decision making methods: A comparative study. Springer US.
3. Tzeng, G.-H., & Huang, J. J. (2011). Multiple attribute decision making: Methods and application. Chapman and Hall/CRC.
4. Figueira, J., Greco, S., & Ehrogott, M. (2005). Multiple criteria decision analysis: State of the art surveys. Springer.
5. Chatterjee, P., & Chakraborty, S. (2014). Investigating the effect of normalization norms in flexible manufacturing system selection using multi-criteria decision-making method. Journal of Engineering Science and Technology Review, 7(3), 141–150.
6. Çelen, A. (2014). Comparative analysis of normalization procedures in TOPSIS method: With an application to Turkish Deposit Banking Market. Informatica, 24(2), 185–208.
7. Vafaei, N., Ribeiro, R. A., & Camarinha-Matos, L. M. (2018). Data normalization techniques in decision making: Case study with TOPSIS method. International Journal of Information and Decision Sciences, 10(1), 19–38.
8. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342.
9. Aytekin, A. (2021). Comparative analysis of normalization techniques in the context of MCDM problems. Decision Making: Applications in Management and Engineering, 4(2), 1–25. https://doi.org/10.31181/dmame210402001a
10. Zeng, Q.-L., Li, D.-D., & Yang, Y.-B. (2013). VIKOR method with enhanced accuracy for multiple criteria decision making in healthcare management. Journal of Medical Systems, 37, 1–9.
11. Liping, Y., Yuntao, P., & Yishan, W. (2009). Research on data normalization methods in multi-attribute evaluation (pp. 1–5). Proceedings of the International Conference on Computational Intelligence and Software Engineering, Wuhan, China.
12. Mukhametzyanov, I. Z. (2023). Elimination of the domain's displacement of the normalized values in MCDM tasks: The IZ-method. International Journal of Information Technology and Decision Making. https://doi.org/10.1142/S0219622023500037
13. Mukhametzyanov, I. Z. (2023). On the conformity of scales of multidimensional normalization: An application for the problems of decision making. Decision Making: Applications in Management and Engineering. https://doi.org/10.31181/dmame05012023i
14. Mukhametzyanov, I. Z. (2020). ReS-algorithm for converting normalized values of cost criteria into benefit criteria in MCDM tasks. International Journal of Information Technology and Decision Making, 19(5), 1389–1423. https://doi.org/10.1142/S0219622020500327

Chapter 8: MS-Transformation of Z-Score

Abstract The attraction of Z-standardization for solving MCDM problems is that in this case, the domains of normalized values are aligned on average and the interpretation of the scales of normalized values is the same. The numerical values of all attributes are measured in the standard deviation scale of each feature. This has the advantage that such normalized values differ only in properties other than variability, facilitating, for example, shape comparisons. MS-transformations of standardized values to the set [0, 1] are proposed, for which the mean values and variances are the same for all attributes, and the domains of normalized values are averaged out. Additionally, the choice of the measurement scale is set in accordance with the selected normalization method. The exclusion of negative Z-scores allows you to expand the list of methods in decision-making for the transformed data. For example, it becomes valid to use WPM, WASPAS methods. The MS-method is relevant for non-linear aggregation methods and provides the choice of a conditionally general normalization scale that has the same interpretation of the normalized values as the main linear normalization methods. MS-transformation is applicable to data transformation of any centered value, and such an implementation is made for the mIQR normalization method, which expands the list of normalization methods with an adequate interpretation of normalized scales. Keywords Multivariate normalization · Data centered · MS-transformations of centered data · MS-transformations for nonlinear aggregation methods

8.1 Standardized Scoring

A standardized score (Z-score) is a measure of the relative spread of an observed or measured value, which shows how many standard deviations the value lies above or below the mean [1]. This is a very convenient indicator for comparing values measured in different units or on different scales. Let D = (a_ij) [m×n] be the decision matrix of the MCDM problem. Standard estimates in the multivariate case are calculated by the formula:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_8

$r_{ij} = \dfrac{a_{ij} - \bar{a}_j}{s_j}$,  (8.1)

where

$\bar{a}_j = \frac{1}{m}\sum_{i=1}^{m} a_{ij}$,  (8.2)

$s_j = \left(\frac{1}{m}\sum_{i=1}^{m}\left(a_{ij} - \bar{a}_j\right)^2\right)^{0.5}$,  (8.3)

are, respectively, the sample mean and standard deviation of the values of the jth attribute. As applied to decision-making problems, standardization according to Eq. (8.1) is one of the variants of linear normalization with a displacement.

Considering that the sample size for each criterion in decision-making problems is not large, it is advisable to use a robust estimate of the typical value: the median instead of the mean. The median is the "middle" of a sorted list of numbers [2]. It is central to robust statistics, being the most resistant statistic, with a breakdown point of 50%. The main feature of the median in describing data, as compared to the mean (often simply described as the "average"), is that it is not skewed by a small fraction of extremely large or small values and therefore provides a better representation of the "typical" value.

As a variant of a robust standardized assessment, mIQR normalization is applicable for MCDM tasks:

$r_{ij} = \dfrac{a_{ij} - md_j}{IQR_j}$,  (8.4)

where


Fig. 8.1 Standardized values. Input data: decision matrix D0 [8×5] from Table 2.1

$md_j = \mathrm{median}_i\left(a_{ij}\right)$,  (8.5)

$IQR_j = Q3_j - Q1_j$,  (8.6)

where $Q1_j$ and $Q3_j$ are the first and third quartiles of the values $\{a_{ij},\ 1 \le i \le m\}$,

are, respectively, the median and interquartile range of the values of the jth attribute. The interquartile range is the difference between the 75th and 25th percentiles of the data, i.e. the interval that contains the "central" 50% of the data in the set. A graphical illustration of multivariate data standardization using the Z-score and mIQR is shown in Fig. 8.1. In Fig. 8.1, the lower and upper whiskers characterize the spread about the mean and the median, respectively. In the case of a normal distribution, $IQR = 2\,\Phi^{-1}(0.75)\,\sigma \approx (27/20)\,\sigma$, where σ is the standard deviation. Therefore, for the mIQR normalization method in Fig. 8.1, the spread of values is represented by the value $s_1 \approx (20/27)\,IQR \approx \sigma$.
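Both standardizations can be sketched as follows (function names are illustrative; note that NumPy's default quartile interpolation in `np.percentile` may differ slightly from other quartile conventions):

```python
import numpy as np

def z_score(a):
    """Eq. (8.1): center by the column mean, scale by the standard deviation."""
    return (a - a.mean(axis=0)) / a.std(axis=0)

def m_iqr(a):
    """Eq. (8.4): center by the column median, scale by the interquartile range."""
    q1, q3 = np.percentile(a, 25, axis=0), np.percentile(a, 75, axis=0)
    return (a - np.median(a, axis=0)) / (q3 - q1)

# two attributes of a toy decision matrix
a = np.array([[75.5, 478.9], [83.0, 580.7], [76.2, 525.0],
              [80.6, 543.2], [76.6, 582.8], [71.5, 501.6]])
z, rq = z_score(a), m_iqr(a)   # z: column means 0, column stds 1;
                               # rq: column medians 0
```

Either result measures every attribute on its own spread scale, which is what makes the columns comparable.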

8.2 MS-Transformation of Z-Score

When attributes are measured on different scales, they can be converted to Z-scores for easier comparison. The Z-score produces normalized values in multiples of the standard deviation with a mean value of 0. Therefore, the Z-score is used in many important applications to compare the attributes of features within the same population. The attraction of Z-standardization for solving MCDM problems is that in this case, the domains of normalized values are aligned on average and the interpretation of the scales of normalized values is the same. The numerical values of all attributes are measured in the standard deviation scale of each feature. This has the advantage that such normalized values differ only in properties other than variability, facilitating, for example, shape comparisons.


Table 8.1 Ranks of alternatives and relative rating gap for Z-score and mIQR normalization. SAW and TOPSIS methods. Equal weights

Norm-method  1-Rank  2-Rank  3-Rank  Q1     Q2     Q3      dQ1,%  dQ2,%  dQ3,%
SAW
Z-score      7       8       6       0.581  0.252  0.093   36.0   17.4   11.1
mIQR         7       8       6       0.335  0.075  -0.005  40.2   12.5   10.4
TOPSIS
Z-score      7       8       6       0.678  0.599  0.564   34.8   16.0   18.5
mIQR         7       8       6       0.665  0.569  0.561   40.9   3.6    12.3

Similarly, the mIQR method produces normalized values in units of the interquartile range with a median of 0.

Observed values above the mean have positive standard scores, while values below the mean have negative standard scores (Fig. 8.1), which in some cases contradicts the logic of data analysis in the multivariate case and is a shortcoming of standardized Z-scores. For example, when WPM (Eq. 2.28) is used for attribute aggregation, negative standard scores are not allowed. Since the range of normalized values under standardization includes both positive and negative values, they may compensate each other when the attributes are aggregated (for example, in additive aggregation methods). However, if the standardized values are transformed into the region [0, 1] using linear transformations, then, according to the invariant properties P.1–P.3 (see Chap. 4), both the attribute dispositions and the ranking of alternatives are preserved. For linear or homogeneous aggregation functions, the performance indicators of the alternatives change strictly monotonically, and the compensation of positive and negative values does not affect the ranking.

For the standardization example presented above in Fig. 8.1, the results of ranking by the SAW and TOPSIS methods are given in Table 8.1. The dispositions of the attributes and the ranking of the alternatives are preserved.

Elimination of the compensation of positive and negative values is achieved by transforming the normalized values into the area [0, 1] under the conditions that:

– the average values of all attributes are the same,
– the standard deviations of all attributes are the same.

Below is an algorithm for the linear transformation of standardized values using the fixed-point technique: the MS-method (Mean & Standard deviation) [3].

Step 1. Perform the inversion (if necessary) of the cost attributes using the ReS-algorithm [5] for the original decision matrix D.

Step 2. Standardize:


$z_{ij} = \dfrac{a_{ij} - \bar{a}_j}{s_j}$,  (8.7)

where $\bar{a}_j$, $s_j$ are the sample mean and the sample standard deviation of the values of the jth attribute, respectively.

Step 3. Shift all attributes to positive values:

$u_{ij} = z_{ij} - \min_j\left(\min_i z_{ij}\right)$.  (8.8)

Step 4. Perform compression with a fixed coefficient (the second moment is invariant under compression):

$v_{ij} = u_{ij} / \max_j\left(\max_i u_{ij}\right)$.  (8.9)

Step 5. Reduce the values to the scale of the normalization method:

$v_{ij} := v_{ij} \cdot k$.  (8.10)

Step 6. Shift all values to 1 (top level):

$v_{ij} := v_{ij} + 1 - \max_j\left(\max_i v_{ij}\right)$.  (8.11)

At step 5, the values are reduced to the scale of the normalization method (as in the case of the IZ-method [3, 4], Chap. 7). Therefore, the scale factor k (0 < k ≤ 1) is defined similarly:

$k = Z - I$.  (8.12)

As a result, we obtain new normalized values $v_{ij} \in [0, 1]$. A step-by-step illustration of the MS-transformation of normalized values is shown in Fig. 8.2. For the normalization at step 5 in Fig. 8.2, the interval [I, Z] of the Max normalization method was used: the range of values of the new scale is defined as $I = \mathrm{mean}(r_j^{\min})$, $Z = \mathrm{mean}(r_j^{\max}) = 1$ (Sect. 8.3 below). The second and fifth criteria are cost criteria and require value inversion, which is performed using the ReS-algorithm. The upper and lower whiskers in Fig. 8.2 characterize the spread of the normalized values about the mean.

A similar MS-transformation algorithm is implemented for data transformation of any centered value; such an implementation is performed for the mIQR normalization method. A step-by-step illustration of the MS-transformation of normalized values by the mIQR method is shown in Fig. 8.3.
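Steps 2–6 can be collected into a short sketch (the function name `ms_transform` is illustrative; Step 1, the ReS inversion of the cost criteria, is assumed to have been applied already):

```python
import numpy as np

def ms_transform(a, I=0.0, Z=1.0):
    """Steps 2-6 of the MS-method. Step 1 (ReS inversion of cost criteria)
    is assumed to have been applied to `a` already. Returns values in [0, 1]
    with equal column means and equal column standard deviations."""
    z = (a - a.mean(axis=0)) / a.std(axis=0)  # Step 2: Z-score, Eq. (8.7)
    u = z - z.min()                           # Step 3: shift to positive values (8.8)
    v = u / u.max()                           # Step 4: fixed-ratio compression (8.9)
    v = v * (Z - I)                           # Step 5: scale factor k = Z - I (8.10, 8.12)
    return v + 1.0 - v.max()                  # Step 6: shift the top level to 1 (8.11)

# toy benefit-only decision matrix, two attributes
a = np.array([[5431.8, 75.5], [5697.2, 83.0], [6366.0, 76.2],
              [5894.8, 80.6], [5888.5, 76.6], [5112.9, 71.5]])
v = ms_transform(a)
```

Because steps 3–6 apply one common affine map to all columns of the Z-scores, the column means stay equal to each other and the column standard deviations stay equal to each other, while the values land in [1 − k, 1] ⊆ [0, 1].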


Fig. 8.2 Step-by-step MS-transformation for Z-score normalized values. Decision matrix D0

Fig. 8.3 Step-by-step MS-transformation for mIQR normalized values. Decision matrix D0

The upper and lower whiskers in Fig. 8.3 characterize the spread of the normalized values about the median. Thus, the MS-transformation procedure allows you to set the necessary proportions between the scales of various attributes, perform scaling, and equalize the range of normalized values while maintaining equality: – average values of all attributes, – standard deviation of all attributes. The MS-transformation is a linear method with a displacement (uij=krij+b) and all the invariant properties of linear transformations P.1–P.4 presented in Sect. 4.3


are satisfied for it. In particular, the invariant property P.1 is important: the MS-transformation preserves the dispositions of the natural values.

The choice of the transformation interval [I, Z] does not affect the ranking result (invariant property P.2) if one of the linear methods is used for normalization and a linear (SAW, . . .) or homogeneous (TOPSIS, . . .) function is used for aggregation. Under such conditions, the ranking results of the MS-method are the same for all choices of the scale [I, Z], including [0, 1], and coincide with the ranking results obtained with Z-score normalization. The MS-transformation (and the choice of the scale factor k) is therefore relevant only if non-linear aggregation methods are used.

Thus, after the MS-transformation of standardized values onto the set [0, 1], the means and variances are the same for all attributes, i.e., the domains of the normalized values are aligned on average. In addition, the measurement scale is set in accordance with the selected normalization method. The exclusion of negative Z-scores expands the list of methods available for decision-making; for example, it becomes valid to use the WPM and WASPAS methods.

If non-linear procedures are used for aggregating the attributes of the alternatives (e.g., WPM, WASPAS), or a non-linear normalization such as log(a) is used, the choice of the interval [I, Z] affects the ranking result. This effect is illustrated by the examples presented in Sect. 8.5 below. For non-linear aggregation methods, the MS-method provides the choice of a conditionally common normalization scale [I, Z] that has the same interpretation of the normalized values as the main linear normalization methods.
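The invariance claim for linear aggregation can be checked numerically: under a fixed linear rescaling u = k·r + b, SAW produces Q' = k·Q + b (for weights summing to one), so the ordering of the alternatives is unchanged. A minimal sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.random((8, 5))             # normalized decision matrix (benefit criteria)
w = np.full(5, 1 / 5)              # equal weights, summing to 1

def saw(R, w):
    return R @ w                   # linear aggregation (SAW/WSM)

k, b = 0.37, 0.21                  # arbitrary fixed linear transform u = k*r + b
Q  = saw(R, w)
Qt = saw(k * R + b, w)

print(np.argsort(-Q))              # ranking before the transform
print(np.argsort(-Qt))             # identical ranking after the transform
print(np.allclose(Qt, k * Q + b))  # True: scores are shifted linearly, order preserved
```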

8.3 Selecting a Conditionally Common Scale [I, Z] for MS-Transformation

The key feature of the MS-method is the choice of a common scale of normalized values that is consistent for all attributes and has the same interpretation. This makes it possible not only to aggregate normalized values of the same order on a scale that is a multiple of the standard deviation, but also to interpret the scales as, for example, fractions of the best value. This is achieved by the scale compression operation.

There is uncertainty in the choice of the region [I, Z] common to all attributes of the normalized values, partly because it can be bound to the scale of a particular attribute, and partly because the domains of different attributes are shifted relative to each other (Fig. 8.1). A meaningful interpretation of the normalized values for the main linear normalization methods was presented above in Sect. 4.5. Accordingly, MS-normalizations will have the same interpretation if the boundaries of the [I, Z] region are chosen to be the same as those of the region of normalized values of the corresponding linear normalization method.


MS-normalization will correspond to the proportion of the attribute of the ith alternative relative to the largest attribute value (the Max normalization method) if "I" and "Z" are the characteristic values of the boundaries of the region of change of r_ij = Max(a_ij). A similar approach is used for the other linear normalization methods r_ij = Sum(a_ij), r_ij = Vec(a_ij), etc. In these cases, the values of the MS-transformation will be interpreted, respectively, as the intensity of the feature of the ith alternative, as the share of the feature relative to the diameter of the m-dimensional rectangle constructed from the feature values of all alternatives, and so on. Accordingly, the different IZ normalizations will be denoted as the successive normalizations MS-Max, MS-Sum, MS-Vec, etc.

The lower (and, similarly, upper) limits of the normalized values for the n attributes can differ:

r^min = (r_1^min, r_2^min, . . ., r_n^min) = (min_i r_i1, min_i r_i2, . . ., min_i r_in),  (8.13)

r^max = (r_1^max, r_2^max, . . ., r_n^max) = (max_i r_i1, max_i r_i2, . . ., max_i r_in).  (8.14)

The choice of the transformation region [I, Z] is carried out in complete analogy with the choice for the IZ-method (Sect. 7.5). The following rational options are proposed for use:

1. As "I", take the smallest (worst) value of the lower level of alternatives over all criteria, and as "Z", take the largest (best) value of the upper level:

I_1 = min_j r_j^min, Z_1 = max_j r_j^max.  (8.15)

In this case, the ranking is carried out taking the influence of "strong alternatives" into account as much as possible. This is because the range of values of the alternatives over all criteria increases, and the values of the "lower" level of alternatives move further away from the values of the "upper" level for each criterion.

2. As "I", take the largest (best) value of the lower level of alternatives over all criteria, and as "Z", take the largest (best) value of the upper level:

I_2 = max_j r_j^min, Z_2 = max_j r_j^max, I_2 < Z_2.  (8.16)

In this case, the ranking is carried out taking the influence of "weak alternatives" into account as much as possible. This is because the range of values of the alternatives over all criteria decreases and the values of the "lower" level of


Fig. 8.4 MS-transformations for the Z-score of normalized values for various choices of fixed boundaries of the [I, Z] domain. (3): I = mean(min(V)), Z = mean(max(V)). Decision matrix D0

alternatives become close to the values of the "upper" level of alternatives for each criterion.

3. As "I", take the average value of the lower level of alternatives over all criteria, and as "Z", take the average value of the upper level:

I_3 = mean_j(r_j^min), Z_3 = mean_j(r_j^max), I_3 < Z_3.  (8.17)

In this case, the scales agree within the standard deviation about the mean.

4. As "I", take the median value of the lower level of alternatives over all criteria, and as "Z", take the median value of the upper level:

I_4 = median_j(r_j^min), Z_4 = median_j(r_j^max), I_4 < Z_4.  (8.18)

In this case, the scales agree within the standard deviation about the median.

The following options are also justified: [I_1, min_j r_j^max] and [I_2, min_j r_j^max]. A choice determined by the context of the decision-making problem is also possible, in which the interval [I, Z] is set by the expert: 0 ≤ I_5 ≤ Z_5 ≤ 1. An illustration of data normalization using the MS-transformation for various choices of [I, Z] is shown in Fig. 8.4.
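The four rational choices of [I, Z] in Eqs. (8.15)-(8.18) reduce to simple aggregates of the column extremes of the first-step normalized matrix. A sketch with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.random((8, 5))                         # values after a first-step linear normalization

r_min = R.min(axis=0)                          # lower level per criterion, Eq. (8.13)
r_max = R.max(axis=0)                          # upper level per criterion, Eq. (8.14)

options = {
    1: (r_min.min(),      r_max.max()),        # Eq. (8.15): favors "strong" alternatives
    2: (r_min.max(),      r_max.max()),        # Eq. (8.16): favors "weak" alternatives
    3: (r_min.mean(),     r_max.mean()),       # Eq. (8.17): agreement about the mean
    4: (np.median(r_min), np.median(r_max)),   # Eq. (8.18): agreement about the median
}
for opt, (I, Z) in options.items():
    print(opt, round(I, 3), round(Z, 3), "k =", round(Z - I, 3))
```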

8.4 Invariant Properties of MS-Transformation

The MS-method is a linear data transformation. Therefore, all the properties of linear transformations (Sect. 4.3) are satisfied for the MS-method:


Property 1. The disposition of values is invariant under a linear transformation.

Property 2. A linear transformation of all scales u_ij = k·r_ij + b with fixed coefficients k and b does not change the ranking if a linear function (e.g., SAW) is used to aggregate the attributes.

Property 3. For a homogeneous aggregation function (e.g., TOPSIS, GRA), the performance indicators of the alternatives are invariant under a linear transformation with fixed coefficients (u_ij = k·r_ij + b).

Consequence: The result of ranking the alternatives using the MS-transformation is the same for any variant of the Norm() normalization method and any choice of the [I, Z] scale, provided a linear or homogeneous function is used for aggregation (SAW, TOPSIS, GRA), and it is identical to the result obtained with Z-score normalization. For the standardization example presented above in Fig. 8.3, the results of ranking by the SAW and TOPSIS methods do not differ (Table 8.2).
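Property 3 can be verified for a simple TOPSIS variant. The sketch below skips the usual vector-normalization step (R is assumed already normalized) and treats all criteria as benefit criteria; under u = k·r + b the distances to the ideal and anti-ideal points both scale by k, so the closeness coefficient is unchanged.

```python
import numpy as np

def topsis(R, w):
    """Closeness coefficients of TOPSIS on an already-normalized matrix
    (all criteria treated as benefit criteria)."""
    V = R * w
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal point
    d_neg = np.linalg.norm(V - anti,  axis=1)   # distance to the anti-ideal point
    return d_neg / (d_pos + d_neg)

rng = np.random.default_rng(3)
R = rng.random((8, 5))
w = np.full(5, 0.2)

C  = topsis(R, w)
Ct = topsis(0.37 * R + 0.21, w)                 # fixed linear transform of all scales
print(np.allclose(C, Ct))                       # True: closeness is invariant
```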

Table 8.2 Ranks of alternatives and relative rating gap after MS-transformation of Z-scores. SAW and TOPSIS aggregation methods. Equal weights

SAW
Norm-method     1-Rank  2-Rank  3-Rank  Q1     Q2     Q3     dQ1%  dQ2%  dQ3%
Z-score         7       8       6       0.581  0.252  0.093  36.0  17.4  11.1
MS-Max(3)       7       8       6       0.897  0.866  0.851  36.0  17.4  11.1
MS-Sum(3)       7       8       6       0.984  0.980  0.977  36.0  17.4  11.1
MS-Vec(3)       7       8       6       0.957  0.943  0.937  36.0  17.4  11.1
MS-Max,Min(3)   7       8       6       0.700  0.610  0.566  36.0  17.4  11.1
MS-dSum(3)      7       8       6       0.917  0.891  0.879  36.0  17.4  11.1
MS-Z(3)         7       8       6       0.760  0.688  0.653  36.0  17.4  11.1

TOPSIS
Norm-method     1-Rank  2-Rank  3-Rank  Q1     Q2     Q3     dQ1%  dQ2%  dQ3%
Z-score         7       8       6       0.678  0.599  0.564  34.8  16.0  18.5
MS-Max(3)       7       8       6       0.897  0.866  0.851  36.0  17.4  11.1
MS-Sum(3)       7       8       6       0.678  0.599  0.564  34.8  16.0  18.5
MS-Vec(3)       7       8       6       0.678  0.599  0.564  34.8  16.0  18.5
MS-Max,Min(3)   7       8       6       0.678  0.599  0.564  34.8  16.0  18.5
MS-dSum(3)      7       8       6       0.678  0.599  0.564  34.8  16.0  18.5
MS-Z(3)         7       8       6       0.678  0.599  0.564  34.8  16.0  18.5


The SAW ratings (Qi) differ by the scaling amount, while the relative rating gap (dQi) is not changed by scaling. For a homogeneous aggregation function, the results are indifferent to the scaling procedure. The example demonstrates the invariance of the ranking under a linear transformation if a linear or homogeneous function is used for aggregation. In the case of a non-linear aggregation function, the choice of the boundaries of the [I, Z] domain affects the ranking of the alternatives; relevant examples are presented below in Sect. 8.5. Note also that some attribute aggregation methods, for example, WPM and WASPAS, do not handle negative values.
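The remark about negative values is easy to see numerically: the weighted product raises attribute values to fractional powers, which is undefined over the reals for negative bases. A small sketch:

```python
import numpy as np

def wpm(R, w):
    # Weighted product: Q_i = prod_j r_ij ** w_j
    return np.prod(R ** w, axis=1)

w = np.array([0.5, 0.5])
ok = wpm(np.array([[0.4, 0.9]]), w)          # positive values: fine (~0.6)
with np.errstate(invalid="ignore"):
    bad = wpm(np.array([[-0.4, 0.9]]), w)    # (-0.4)**0.5 is undefined for reals
print(ok)
print(bad)                                   # [nan]
```

This is why raw Z-scores (which are negative for roughly half of the values) cannot feed WPM or WASPAS directly, while the shifted MS-values in [0, 1] can.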

8.5 MS-Transformations for Non-linear Aggregation Methods: Example for WPM and WASPAS Methods

This section presents the results of ranking the alternatives of the decision matrix Dq, which is highly sensitive to the parameters of the MCDM model: the normalization method and the aggregation method. Four normalization methods were used for the analysis (Max, Sum, Vec, dSum) together with two non-linear aggregation methods (WPM, WASPAS). The results are needed to analyze the impact of MS-transformations on the ranking. The conditionally common scale [I, Z] for each normalization method is chosen as the average value according to option (3) described above in Sect. 8.3. Therefore, there are four different variations of the MS-method, MS-Max(3), MS-Sum(3), MS-Vec(3), and MS-dSum(3), which are used in conjunction with the two non-linear aggregation methods. The decision matrices Dq were generated in accordance with the methodology described in Sect. 6.4. This example assumes that the attribute weights are equal.

WPM Method of Aggregation: The algorithm of the WPM method excludes the processing of zero and negative feature values. Therefore, the Z-score normalization method cannot be integrated into the model structure together with WPM. In accordance with the technique described above, a decision matrix D1 that is sensitive to rating changes under normalization is obtained:

D1 = [a_ij] =

  5832.2   73.8   575.2   169.5   1467.9
  5158.0   78.3   507.4   163.2   1736.8
  6009.5   80.6   473.6   153.1   1513.7
  4777.1   81.5   515.4   170.1   1809.9
  5883.0   71.4   492.1   170.9   2124.0
  5622.8   80.5   457.0   162.2   1952.4
  6430.6   75.3   665.2   154.0   1116.4
  6047.4   78.7   581.0   162.5   1269.5

(8.19)


Fig. 8.5 Rank reversal after MS-transformation. WPM method. Decision matrix D1 by Eq. (8.19)

Table 8.3 Ranks of alternatives and relative gap of rating at MS-transformation. Decision matrix D1. WPM method, equal weights

Norm-method   1-Rank  2-Rank  3-Rank  Q1     Q2     Q3     dQ1%  dQ2%  dQ3%
Max           3       8       7       0.919  0.918  0.894  1.4   27.2  4.7
Sum           3       8       7       0.132  0.132  0.129  1.4   27.2  4.7
Vec           3       8       7       0.372  0.371  0.362  1.4   27.2  4.7
dSum          8       3       6       0.910  0.893  0.892  28.9  1.5   9.9
MS-Max(3)     8       4       5       0.874  0.874  0.873  3.7   1.1   5.6
MS-Sum(3)     5       4       8       0.981  0.981  0.981  0.3   3.7   8.7
MS-Vec(3)     4       5       8       0.947  0.947  0.947  0.2   1.5   9.3
MS-dSum(3)    8       4       5       0.867  0.866  0.866  4.3   1.2   5.0

The domains of the normalized values and the numbers of the alternatives of ranks 1-3 for the WPM aggregation method are shown in Fig. 8.5. For the four variations of the MS-method (MS-Max(3), MS-Sum(3), MS-Vec(3), MS-dSum(3)), the 1st-rank alternatives are numbers 8, 5, 4, and 8, respectively. The mark (3) in a method name indicates that the boundaries of the conditionally common scale [I, Z] are chosen as average values. The differences in ranking are primarily due to the high sensitivity of the decision matrix. This sensitivity is indicated by the low values of the relative efficiency indices dQp of the rank I-III alternatives presented in Table 8.3. When the displacement-free normalization methods (Max, Sum, Vec) are used in conjunction with the WPM method, the performance scores of the alternatives are scaled by the same factor; therefore, the ranking of the alternatives is the same for these normalization methods.


Fig. 8.6 Rank reversal after MS-transformation. WASPAS method. Decision matrix D2 by Eq. (8.21)

Q_i(u_ij) = ∏_{j=1}^{n} (k·r_ij)^{w_j} = ∏_{j=1}^{n} k^{w_j} · ∏_{j=1}^{n} r_ij^{w_j} = k^{∑_j w_j} · ∏_{j=1}^{n} r_ij^{w_j} = k · ∏_{j=1}^{n} r_ij^{w_j} = k · Q_i(r_ij),  (8.20)

since ∑_j w_j = 1.
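Equation (8.20) says that a pure rescaling u_ij = k·r_ij multiplies every WPM score by the same factor k (for weights summing to one), leaving the ranking intact, while a displacement b breaks this identity and may reorder the alternatives. A numerical check with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
R = rng.uniform(0.2, 1.0, size=(8, 5))       # positive normalized values
w = np.full(5, 0.2)                           # equal weights, summing to 1

wpm = lambda R: np.prod(R ** w, axis=1)       # weighted product model

k = 0.37
print(np.allclose(wpm(k * R), k * wpm(R)))    # True: Q(k*r) = k*Q(r), Eq. (8.20)
print(np.argsort(-wpm(R)))                    # ranking under pure scaling is unchanged
print(np.argsort(-wpm(k * R)))                # ... identical to the line above
print(np.argsort(-wpm(k * R + 0.5)))          # a displacement b may reorder the alternatives
```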

The relative indicator dQ does not change either. The dQp value is the relative gain or loss of the performance score (given in the Q scale) for the ordered list of alternatives. According to the table, the relative rating gap is less than 1% for the alternatives of ranks I-III. Analysis of the results does not reveal the priority of any normalization method. This indicates that, in addition to the normalization method, the result of ranking the alternatives is significantly influenced by an additional factor: the local priorities of the alternatives for the various attributes, determined by the initial values of the decision matrix. The correct outcome of such an analysis is that alternatives A3, A8, and A5 are recommended to the decision maker for decision-making. Similar results (Fig. 8.6, Table 8.4) were obtained for the WASPAS aggregation method with decision matrix D2 by Eq. (8.21).

WASPAS Method of Aggregation: The WASPAS algorithm is formed by a combination of WSM and WPM and, like WPM, excludes the processing of zero and negative feature values. In accordance with the technique described above, a decision matrix D2 that is sensitive to rating changes under normalization is obtained:


Table 8.4 Ranks of alternatives and relative gap of rating at MS-transformation. Decision matrix D2. WASPAS method, equal weights

Norm-method   1-Rank  2-Rank  3-Rank  Q1     Q2     Q3     dQ1%  dQ2%  dQ3%
Max           3       1       2       0.944  0.925  0.910  13.1  10.2  39.4
Sum           3       1       2       0.136  0.133  0.130  14.3  10.1  39.8
Vec           3       1       2       0.381  0.372  0.366  14.2  10.2  39.7
dSum          3       1       2       0.937  0.929  0.929  5.9   0.1   31.6
MS-Max(3)     2       7       5       0.891  0.891  0.890  0.4   1.4   26.1
MS-Sum(3)     5       7       2       0.984  0.984  0.984  0.5   1.1   25.7
MS-Vec(3)     7       5       2       0.955  0.955  0.955  0.0   0.7   26.4
MS-dSum(3)    2       7       5       0.890  0.890  0.889  0.4   1.4   26.0

D2 = [a_ij] =

  5829.8   80.5   454.2   164.8   1748.8
  5546.3   84.7   502.6   163.5   1736.5
  5837.6   84.2   511.7   150.5   1242.4
  4983.8   72.7   613.8   171.6   1936.5
  6203.6   84.2   465.6   139.0   2322.3
  4547.0   75.7   642.5   167.2   1913.7
  5923.4   77.6   618.8   140.9   2202.9
  4323.6   82.5   609.5   144.0   1406.8

(8.21)

The results of normalization and ranking by the WASPAS method are shown in Fig. 8.6. For the four variations of the MS-method (MS-Max(3), MS-Sum(3), MS-Vec(3), MS-dSum(3)), the 1st-rank alternatives are numbers 2, 5, 7, and 2, respectively. The mark (3) in a method name indicates that the boundaries of the conditionally common scale [I, Z] are chosen as average values. The differences in ranking are primarily due to the high sensitivity of the decision matrix. This sensitivity is indicated by the low values of the relative efficiency indices dQp of the rank I-III alternatives presented in Table 8.4.

Since WASPAS is formed by a linear combination of the WSM and WPM methods, when the displacement-free normalization methods (Max, Sum, Vec) are used in conjunction with the WASPAS aggregation method, the performance indicator of the alternatives is scaled by the same factor. Therefore, the ranking of the alternatives is the same for these normalization methods. The first three normalization methods are linear and represent the class of anisotropic normalizations, or conditionally identical scales. The rating results for the anisotropic normalization class Max, Sum, Vec are the same; this only allows us to conclude that the decision matrix has little sensitivity to the choice of a normalization method from this class. Z-score-based ranking results differ significantly from those for the anisotropic normalization class. The rating


of alternatives changes depending on the choice of the interval [I, Z], in accordance with the choice of the first-step normalization method. This only allows us to conclude that the decision matrix is highly sensitive to the choice of a normalization method from this class. The distinguishability of the ratings of the alternatives is weak; in such situations, it is recommended that the decision maker considers both alternatives.

We also note the high sensitivity of the ranking to the estimates in the decision matrix. The attribute values in the decision matrices (8.19) and (8.21) are given to an accuracy of tenths; when rounded to integer values, the ranking result will be slightly different.

Comparison of the values of the relative performance indicators dQ for the considered "aggregation-normalization" pairs does not reveal the priority of any normalization method. This indicates that, in addition to the normalization method, the result of ranking the alternatives is significantly influenced by an additional factor. According to the author, such a factor may be the local priority of the alternatives for the various attributes (the ratios of the priorities of different alternatives across attributes), determined by the initial values of the decision matrix. A discussion of this problem and the generation of highly sensitive problems are presented in Chap. 6 above.

Thus, when non-linear procedures are used for aggregating the attributes of alternatives, the choice of the interval [I, Z] affects the ranking result. For non-linear aggregation methods, the MS-transformation provides the choice of a conditionally common normalization scale [I, Z] that has the same interpretation of the normalized values as the main linear normalization methods.

8.6 Conclusions

For Z-score normalization, the numerical values of all attributes are measured on the scale of the standard deviation of each feature. The attraction of Z-standardization for solving MCDM problems is that the domains of the normalized values are aligned on average and the interpretation of the scales of the normalized values is the same.

MS-transformations of standardized values onto the set [0, 1] are proposed, for which the means and variances are the same for all attributes and the domains of the normalized values are aligned on average. In addition, the measurement scale is set in accordance with the selected normalization method. The exclusion of negative Z-scores expands the list of methods available for decision-making with the transformed data; for example, it becomes valid to use the WPM and WASPAS methods.

The MS-transformation preserves the dispositions of the normalized values and ensures that the contributions of the various criteria to the performance indicator of the alternatives are the same (on average). The choice of the conditionally common scale does not affect the rating if the attribute aggregation method is linear or uses a homogeneous function.


The MS-method is relevant for non-linear aggregation methods and provides the choice of a conditionally common normalization scale that has the same interpretation of the normalized values as the main linear normalization methods. The MS-transformation is applicable to data centered on any central value; such an implementation is performed for the mIQR normalization method, which expands the list of normalization methods with an adequate interpretation of the normalized scales.

The degree of differentiation of the final rating of the alternatives is determined primarily by the local priorities of the alternatives and depends only weakly on the choice of the normalization method. If the degree of differentiation is low, the alternatives should be considered indistinguishable.

The effectiveness of the MS-method is postulated by a set of positive properties that determine the basic principles (required properties) of multidimensional data normalization.

References

1. Standard_score. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Standard_score
2. Median. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Median
3. Mukhametzyanov, I. Z. (2023). Elimination of the domain's displacement of the normalized values in MCDM tasks: The IZ-method. International Journal of Information Technology and Decision Making. https://doi.org/10.1142/S0219622023500037
4. Mukhametzyanov, I. Z. (2023). On the conformity of scales of multidimensional normalization: An application for the problems of decision making. Decision Making: Applications in Management and Engineering. https://doi.org/10.31181/dmame05012023i
5. Mukhametzyanov, I. Z. (2020). ReS-algorithm for converting normalized values of cost criteria into benefit criteria in MCDM tasks. International Journal of Information Technology and Decision Making, 19(5), 1389-1423. https://doi.org/10.1142/S0219622020500327

Chapter 9: Non-linear Multivariate Normalization Methods

Abstract  This chapter describes various functional transformations combined with a data normalization procedure in order to limit the influence of heterogeneities on the final ranking of alternatives in multi-criteria decision-making problems. The methods of non-linear data transformation are presented in two versions: pre-processing of the initial data and post-processing of the normalized values. Such a procedure can be motivated by the presence of non-typical values in the data; the goal is to minimize the impact of atypical values on the rating of the alternatives. The end result of the successive transformations (both linear and non-linear) of the original data is their mapping onto the set [0, 1]. One of the transformation stages is the reduction of the attributes to a dimensionless form based on linear normalization methods. A non-linear transformation makes it possible to redefine the proportions (distances) between attribute values of different alternatives. Therefore, the entire chain of transformations can be defined as a non-linear normalization.

Keywords  Multivariate normalization · Non-linear methods · Asymmetry · Pre- and post-nonlinear processing of data

9.1 Non-linear Data Transformation as a Way to Eliminate Asymmetry in the Distribution of Features

Asymmetry in the distribution of data is one of the serious difficulties in solving multi-criteria choice problems. Skew in the distribution of features can give the contributions of individual features priority in the performance indicator of the alternatives during aggregation. For example, if the values of one of the features are concentrated at the top level, the additive contribution (SAW) of this feature to the rating of an alternative will obviously be higher; the contribution based on the distance to the reference (ideal) point (TOPSIS) will also be higher. Although skew in a distribution may be due to natural causes (including atypical values), the relationships between different features will be distorted. This distortion is hidden, which requires careful analysis.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_9



Since we are considering the problem of multidimensional normalization, it is necessary to clarify for which attributes "asymmetry" takes place. To do this, we use three measures of skewness of a discrete dataset: the sample skewness, the nonparametric skew, and the median couple (Sect. 3.3.1). All three measures are invariant under linear transformations, equal zero for a symmetric distribution of X, and are odd functions when the distribution is inverted. If there is more than one such attribute, it must also be established whether the distributions of the individual attributes are the same or different.

Another difficulty in the analysis of asymmetry is due to the specifics of the MCDM sample: in decision-making problems (and not only there), the alternatives chosen for analysis represent the available set of alternatives, whose attributes can take values that do not reflect the entire set of possible alternatives. Therefore, the distribution of the observed features may differ from the distribution of the features of the entire population. In such a situation, the provisions of sampling theory cannot be used, and the particular choice of available alternatives can itself cause the distribution to be skewed.

In decision-making problems, the amount of available data is not large, which limits the use of rigorous mathematical or statistical methods for screening. But even the simplest of approaches, a subjective one (based on the inner judgment of the researcher), can bring significant benefits. If a preliminary analysis of the data shows the presence of anomalous (non-typical) values, it is necessary to perform data processing. The data pre-processing stage includes procedures for identifying "outliers," filtering out anomalous values, and recovering missing ones. It must be established how strongly the "asymmetry" influences the result of solving the problem and whether the "asymmetry" should be eliminated. This question relates to the specific task and is not formalized.
In the absence of truth criteria, it is necessary to solve the ranking problem using the basic normalizations both for the original data and for the transformed data and to compare the results. If the results differ, the final decision remains the prerogative of the decision maker.

There are many different approaches intended to limit the influence of heterogeneities or to eliminate it altogether. This process defines one of the important areas of statistics: the development of robust methods and robust estimates. The main task of robust methods is to distinguish a "bad" observation from a "good" one and to offer data processing methods that are resistant to atypical values. Among them, there are three main research paths [1]:

• data grouping without deleting individual observations (to reduce the possibility of sample damage by individual outliers), after which, with a sufficient degree of confidence, it is permissible to use the classical methods of statistics,
• tracking outliers directly in the analysis process, for example, while determining the parameters of the distribution law,
• functional transformation of the data, based on a hypothesis about the distribution of the feature.

In decision-making problems, the amount of available data is not large, which limits the use of rigorous mathematical or statistical methods for screening.
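The three skewness measures invoked at the start of this section (Sect. 3.3.1) can be sketched as follows. The medcouple here is a naive O(n^2) version written for illustration, assuming the "median couple" of the text matches the common medcouple definition; a symmetric sample yields zero for all three measures, a right-skewed one yields positive values.

```python
import numpy as np

def sample_skewness(x):
    """Moment-based sample skewness."""
    x = np.asarray(x, float)
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

def nonparametric_skew(x):
    """(mean - median) / standard deviation."""
    x = np.asarray(x, float)
    return (x.mean() - np.median(x)) / x.std()

def medcouple(x):
    """Naive O(n^2) medcouple: median of the kernel over pairs x_i <= m <= x_j."""
    x = np.asarray(x, float)
    m = np.median(x)
    lo, hi = x[x <= m], x[x >= m]
    h = [((b - m) - (m - a)) / (b - a) for a in lo for b in hi if b > a]
    return float(np.median(h))

x = np.array([1, 2, 3, 4, 100])                  # right-skewed illustrative sample
print(sample_skewness(x), nonparametric_skew(x), medcouple(x))   # all positive
```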


Therefore, for MCDM tasks, the functional transformation of the data becomes more important. In the case of multivariate normalization, the distributions of the attributes are independent and may differ significantly. This entails the need to use different transformations for different features and, as a consequence, the need to harmonize the various scales with each other. On the one hand, the use of multidimensional data transformation has a positive effect, since it allows the "asymmetry" in the individual scales to be partially eliminated; on the other hand, the various scales then need to be harmonized with each other. In both cases, in the absence of truth criteria, the consequences for the result of the choice problem may differ. There are several options for further data processing:

• if the distributions of the data for the "anomalous" attributes differ, decide whether different data transformations need to be used,
• decide whether the data for the "non-anomalous" attributes should also be pre-processed in order to harmonize the scales.

In accordance with the results of the third chapter, the use of linear transformations during normalization cannot eliminate "skewness," since the dispositions of the values of each attribute are preserved. If, however, a non-linear transformation is used, the distribution of the normalized values within the domain changes in comparison with the distribution of the natural values. It might seem that the use of non-linear methods is therefore impractical. However, if most of the data is concentrated in a small interval, the error of approximating a non-linear function by a linear one will be negligible. This means that, when non-linear normalization is used, the normalized values will contain approximately the same information about the structure of the original data as with linear normalization.

Non-linear data transformation is performed in two versions:

• pre-processing of the initial data,
• post-processing of the normalized values.

During pre-processing of the data, a non-linear transformation of the scale of the natural values of individual attributes is performed using non-linear functions, followed by normalization of the transformed data:

r_ij = Norm(f_j(a_ij)).  (9.1)

For example, the well-known logarithmic normalization [2] applied to all attributes has the form:


r_ij = Sum(log(a_ij)),  (9.2)

where Sum is the Sum normalization method by Eq. (4.3) in Table 4.1. This makes it possible to partially eliminate the "asymmetry" in the individual scales. This operation can significantly change the contribution of the transformed attribute values to the overall result and therefore requires justification.

During post-processing of the data, a non-linear transformation of the normalized values of all or some of the attributes is performed:

r_ij = f_j(Norm(a_ij)).  (9.3)

This operation changes the relative distances between the normalized attribute values for different alternatives and changes the contribution of individual features to the performance indicator of alternatives. Keep in mind that multidimensional data pre-processing: • based on hypotheses about the distribution of data, • eliminates “asymmetry” only partially, • can have a significant impact on the decision in the absence of truth criteria. The main requirement for data transformation algorithms is formulated as follows: conclusions drawn on the basis of data measured in a scale of a certain type should not change with an acceptable transformation of the measurement scale of these data. In other words, the conclusions must be invariant with respect to the allowed scale transformations. How to compare the original population and the transformed data population? The simplest way is by averages, however, various types of averages are known: arithmetic mean, median, mode, geometric mean, harmonic mean, mean square. With an acceptable scale transformation, the value of the mean obviously changes. But the conclusions about for which population the average is greater, and for which it is less, should not change (in accordance with the requirement of invariance of the conclusions). This recommendation is in line with the concept of sustainability, which recommends using different methods to process the same data in order to highlight the findings that are obtained simultaneously from all methods. In accordance with the general principles of data normalization outlined in the second chapter, the main requirement for choosing a transformation function is to preserve the pre-ordering of the data. An ordered dataset must retain the same ordering after transformation. This property is provided by a transformation using strictly monotone functions. 
However, a transformation based on strictly monotone functions ensures the invariance of the properties of the set only within a single measurement scale. Since the attribute values of each alternative (as well as average values, deviations from the "ideal," etc.) within each scale change under the transformation, the aggregated performance indicators of the alternatives may change their initial (pre-transformation) ordering. Thus, the ranking of alternatives explicitly depends on the non-linear normalization method. The same conclusion was drawn for linear normalization methods in rank decision models. The researcher's subjectivity in choosing the method of processing the initial data apparently corresponds to the reality of decision-making problems.

9.2 Non-linear Data Pre-processing Procedures: Transition to Non-linear Scales

This section presents pre-processing procedures based on non-linear transformations that purposefully change the data skewness. A non-linear transformation redefines the proportions (distances) between the attribute values of different alternatives. A measurement scale is an ordered set of manifestations of quantitative or qualitative characteristics of objects, as well as of the objects themselves. In this section we consider the scale of values of a quantitative characteristic for which the unit of measurement, the presence of a natural zero, and the direction of change are determined (by agreement). Such a scale admits functional transformations that change the type of the scale. For example, logarithmic scales based on decimal and natural logarithms, as well as logarithms with base two, are in wide practical use. The choice of the transformation function and its parameter (the base of the logarithm or the exponent) is determined by the hypothesized distribution of the attribute values. A mandatory requirement for such functions is strict monotonicity, which preserves the natural ordering of the attribute values. The logarithmic normalization known from numerous publications [2] actually represents a preliminary transformation of the data by a logarithmic function followed by linear normalization using the Sum method:

r_ij = log(a_ij) / Σ_{i=1}^m log(a_ij) = Sum(log(a)).  (9.4)

In accordance with this, logarithmic normalization can be generalized in the form:

r_ij = Norm(log(a_ij)),  (9.5)

where Norm() is one of the linear normalization methods. Note that changing the base of the logarithm multiplies all transformed values by a constant factor, and the linear normalization methods are invariant to such uniform scaling. Therefore, the choice of the base of the logarithm in the logarithmic transformation does not affect the result of the linear normalization.
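This invariance is easy to check numerically. A minimal Python sketch (the helper sum_norm is an illustrative implementation of the Sum method, not code from the book): Sum normalization of log-transformed data gives identical results for any base.

```python
import math

def sum_norm(values):
    """Sum normalization: r_i = a_i / sum(a)."""
    s = sum(values)
    return [v / s for v in values]

a = [12, 16, 21, 65, 120]  # the sample attribute values used in this chapter

# Changing the base of the logarithm multiplies every value by a constant,
# which cancels in the ratio, so the normalized vectors coincide.
results = [sum_norm([math.log(v, base) for v in a]) for base in (2, math.e, 10)]
print(all(abs(x - y) < 1e-9
          for r in results[1:] for x, y in zip(results[0], r)))  # True
```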

172

9

Non-linear Multivariate Normalization Methods

To eliminate the ambiguity of the term "logarithmic normalization," it is correct to use the term data pre-processing (or non-linear transformation) and the following general notation:

r_ij = Norm(f(a_ij)),  (9.6)

which is relevant for any admissible transformations of the source data and for any normalization algorithm. The Norm() operation is mandatory, since it converts the values of all attributes to a dimensionless form, which allows them to be compared and aggregated later. The use of non-linear transformations makes it possible to partially eliminate the "asymmetry" in individual scales. When a linear normalization is applied in formula (9.6), the skewness coefficient is invariant under the linear transformation, so the skewness depends only on the choice of the transformation function f, i.e.

skew(Norm(f(a_ij))) = skew(f(a_ij)).  (9.7)

The features above are demonstrated by an example of transformation-normalization for the set of values a = (12, 16, 21, 65, 120). The last value of the series is an order of magnitude greater than the first and can be identified as an outlier. Before normalization we switch to new scales: logarithmic and power (a^0.5). We then perform normalization using the six main linear normalization methods presented in Table 4.1 (Max, Sum, Vec, Max-Min, dSum, Z-score). Graphical results of the transformation-normalization are presented in Fig. 9.1.

Fig. 9.1 An example of transformation-normalization for the set of values a = (12, 16, 21, 65, 120). One-dimensional case
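The skewness values reported in Table 9.1 can be reproduced with the bias-corrected (adjusted Fisher-Pearson) sample skewness, which is what the book's formula (2.21) evidently computes; a short sketch:

```python
import math

def skewness(x):
    """Bias-corrected (adjusted Fisher-Pearson) sample skewness."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    g1 = m3 / m2 ** 1.5
    return g1 * math.sqrt(n * (n - 1)) / (n - 2)

a = [12, 16, 21, 65, 120]

print(round(skewness(a), 3))                           # 1.314
print(round(skewness([math.sqrt(v) for v in a]), 3))   # 0.977
print(round(skewness([math.log10(v) for v in a]), 3))  # 0.641
```

The three printed values match the γ row of Table 9.1 for a, a^0.5, and log(a).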


Table 9.1 Skewness coefficient changes during logarithmic and power-law transformations of the initial data

        a       a^0.5   log(a)   δ1, %   δ2, %
γ       1.314   0.977   0.641    51      25
Sk      0.559   0.509   0.415    25      9

δ1 = (a − log(a))/a · 100, δ2 = (a − a^0.5)/a · 100

The size and position of the domains on the segment [0, 1] are determined only by the normalization method, while the dispositions of the values are determined only by the transformation function. Therefore, the asymmetry γ of the transformed data is the same for all normalization methods and changes only with the choice of the transformation function. For the example under consideration, the skewness coefficients of the transformed attribute values are shown in Fig. 9.1. The logarithmic and power transformations significantly change the proportions between values in favor of the initial values of the ascending list, which leads to a significant decrease both in the skewness coefficient γ according to formula (2.21) and in the nonparametric skew Sk according to formula (2.23). The results are presented in Table 9.1.

As the results of this example show, the choice of the transformation function f can significantly affect the result: using the logarithmic function log() or a power function can substantially change the proportions between the normalized values of the original and transformed data. As noted in Chap. 2, the main problem of multivariate normalization is harmonizing the scales of the normalized values. It is therefore necessary to estimate how much the dispositions of values and the shifts of the domains of normalized values of the various attributes relative to each other change after preliminary data transformation. Let us illustrate the relative changes of the domains after data pre-processing during multivariate normalization using a test example of normalization of the following decision matrix D1:

D1 = [a_ij] =

     12    134    3200
     21    205    3100
     16    154    1400
     65    900    1200
    120    320    1100    (9.8)

As in the first example, there are data outliers in the values of the first two attributes and an insignificant data skewness for the third attribute. Relative changes in the position of the domains and of the normalized values after data pre-processing are shown in Fig. 9.2. Both applied transformations compress the data, shrink the domains, and somewhat change their relative position. In some cases this may lead to a change in the ranking of alternatives upon aggregation.

Fig. 9.2 Relative changes in the position of domains and normalized values after data pre-processing during multivariate normalization

A non-typical data change takes place for Max-Min normalization: the lower and upper level values do not change (they remain equal to 0 and 1, respectively). This increases the distance between the upper (lower) level and its neighboring values, which may also lead to a change in the rating of alternatives during aggregation. The dispositions of values within each domain also undergo slight changes (Fig. 9.3). The fact that these variations in attribute values after transformation are small is an important argument in favor of pre-processing.

Fig. 9.3 Changes in dispositions in domains after data pre-processing during multivariate normalization

Another important argument in favor of pre-processing is that the transformation produces a rather significant (up to 50%) change in the skewness coefficient γ for the attributes with data outliers [attributes C1 and C2 in the decision matrix (9.8)], as shown in Table 9.2. For the attributes C3 and C4 (data without outliers), the changes in the skewness coefficient are not as significant and amount to about 10-15%. The changes in the nonparametric skew (Sk) are less significant, owing to the stability of the median under transformations.

Table 9.2 Skewness coefficient changes during the transformation of the original data

                               C1      C2      C3      C4
Skewness coefficient, γ
a                              1.31    1.97    0.57    0.54
log(a)                         0.64    1.33    0.49    0.45
a^0.5                          0.98    1.71    0.54    0.50
δ1 = (a − log(a))/a · 100, %   51      33      13      17
δ2 = (a − a^0.5)/a · 100, %    26      13      6       8
Nonparametric skew, Sk
a                              0.56    0.43    0.57    0.42
log(a)                         0.42    0.32    0.47    0.36
a^0.5                          0.51    0.39    0.52    0.39
δ1, %                          26      26      18      14
δ2, %                          9       10      8       7

In the case of multivariate normalization, the distributions of the attributes are independent and may differ significantly. This entails the need to use different transformations (different data pre-processing) for different features and, as a consequence, the need to harmonize the resulting scales with each other. Thus, pre-processing of multidimensional data, on the one hand, has a positive effect, since it allows the "asymmetry" in individual scales to be partially eliminated; on the other hand, it makes it necessary to harmonize the different scales with each other. In both cases, the outcome of the choice problem may differ, in the absence of truth criteria.

Below is a general scheme of data pre-processing:

Step 1. Form hypotheses about the laws of feature distribution based on an analysis of the decision matrix (data in columns).
Step 2. Estimate the distribution parameters.
Step 3. For each feature, evaluate the "asymmetry" and decide whether its values should be transformed.
Step 4. Based on the results of steps 1-3, determine a transformation function for each feature.
Step 5. Estimate the parameters of the transformation function by the criterion of minimizing the "asymmetry" index.
Step 6. Perform the data transformation.
Step 7. Analyze the changes based on the results of the data transformation.

The distribution laws and distribution parameters at the first and second steps are estimated on the basis of statistical analysis. In the absence of a representative sample, these steps can be skipped.
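The scheme above can be sketched in Python. The candidate set of transformations and the selection rule (minimal absolute skewness) are illustrative assumptions; steps 1-2 are skipped, as the text allows for small samples:

```python
import math

def adj_skew(x):
    """Bias-corrected sample skewness (the coefficient used in Table 9.2)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    return (m3 / m2 ** 1.5) * math.sqrt(n * (n - 1)) / (n - 2)

# Candidate strictly monotone transformations (steps 4-5: the parameter
# choice is reduced here to picking the candidate with minimal |skewness|).
CANDIDATES = {"none": lambda v: v, "sqrt": math.sqrt, "log": math.log}

def preprocess_column(col):
    """Steps 3-6 for one feature: pick the transform minimizing |skewness|."""
    best = min(CANDIDATES,
               key=lambda k: abs(adj_skew([CANDIDATES[k](v) for v in col])))
    return best, [CANDIDATES[best](v) for v in col]

# Decision matrix (9.8), columns C1-C3 as recovered in the text.
D1 = [[12, 134, 3200], [21, 205, 3100], [16, 154, 1400],
      [65, 900, 1200], [120, 320, 1100]]

for j in range(3):
    col = [row[j] for row in D1]
    name, transformed = preprocess_column(col)
    print(f"C{j+1}: {name}, skew {adj_skew(col):.2f} -> {adj_skew(transformed):.2f}")
```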


The decision on data pre-processing and the choice of features and functions for transformation are poorly formalized. Therefore, the correctness of the transformations is judged by the results of step 7. When pre-processing the data, it must be taken into account that if a feature ranges over the interval [1, ∞), a transformation such as a^0.5 shifts the right tail to the left (compression toward zero), whereas for a feature ranging over [0, 1) the same transformation shifts the data to the right (stretching away from zero). This requires additional analysis of the possible consequences. Another problem is the handling of negative feature values (if any), for example when using a logarithmic transformation or irrational transformation functions. Since the measurement scales of the features can vary significantly, a transformation function has to be selected for each attribute when solving each task, which complicates the study and motivates the search for a rational unified procedure.

9.3 Transformation of Normalized Data: Post-processing of Data

This section proposes a unification of data transformation based on a preliminary linear transformation of the features. To do this, we use two well-known linear transformations: Max-Min and Z-score. Since both of these linear transformations are popular normalization methods, what is actually performed is post-processing of the normalized data. As shown in Chap. 4, linear normalization preserves the dispositions of values and the skewness of the data, which is the basis of this approach.

Both linear transformations make it possible to match different measurement scales. Max-Min normalization maps any dataset to the range [0, 1], so any dataset is subsequently treated in the same way regardless of its measurement scale. The values 0 and 1 are intuitive and are perceived as the worst and best values; intermediate values are interpreted as fractions of the best. When attributes are measured on different scales, or on a common scale with widely varying ranges, they can be converted to Z-scores to facilitate comparison. The absolute value of a Z-score is the distance between the observed value x and the population mean in units of standard deviation, which is very important in multivariate normalization.

An example of post-processing of normalized data is the quadratic normalization known from numerous publications [3], which actually represents a two-step procedure: (1) normalization by the Max method, and (2) transformation of the data by a power function:

r_ij = (a_ij / a_j^max)² = [Max(a)]².  (9.9)

In accordance with this, quadratic normalization can be generalized in the form:

r_ij = [Norm(a_ij)]²,  (9.10)

where Norm() is one of the linear normalization methods. To eliminate ambiguity in terminology, it is correct to use the term data post-processing and the following general notation:

r_ij = f(Norm(a_ij)),  (9.11)

where f: X → [0, 1] is a strictly monotone function.
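Note that pre-processing (9.6) and post-processing (9.11) generally give different results, because Norm(f(a)) ≠ f(Norm(a)) for a non-linear f. A small illustrative sketch with the Sum method and f = sqrt (the helper sum_norm is an assumption, not code from the book):

```python
import math

def sum_norm(values):
    """Sum normalization: r_i = a_i / sum(a)."""
    s = sum(values)
    return [v / s for v in values]

a = [12, 16, 21, 65, 120]

pre = sum_norm([math.sqrt(v) for v in a])   # (9.6):  Norm(f(a))
post = [math.sqrt(r) for r in sum_norm(a)]  # (9.11): f(Norm(a))

print([round(v, 3) for v in pre])
print([round(v, 3) for v in post])
# pre sums to 1 by construction; post does not, and the value
# dispositions differ, so the two procedures are not interchangeable.
```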

9.3.1 Post-processing with Max-Min Normalization

In the pre-transformation step using Max-Min normalization (r = Max-Min(a)), the data are mapped to the interval [0, 1]. If there are outliers or unwanted skewness in the data, linear normalization leaves both intact. Post-processing of the normalized values is then applied to remove the skewness without changing the normalized range (f: [0, 1] → [0, 1]). Such a transformation is possible using various functions; Fig. 9.4 shows several options for transformation functions that meet the requirements of data post-processing.

When post-processing data on the interval [0, 1], it must be taken into account that a transformation with a convex functional profile (for example, r^0.5) compresses the data toward 1, while a transformation with a concave functional profile (for example, r^2) compresses the data toward zero; the diagonal is the equilibrium position. The corresponding compression-stretching areas are shown in Fig. 9.4 on the right as a dashed map. The transformations shown in Fig. 9.4 allow the skewness coefficient of the normalized data to be changed from 1.6 to 0. However, in all cases the points 0 and 1 are problematic: they are fixed points of the transformation. If the endpoints are identified as outliers, the proposed transformation functions do not eliminate them.

This problem can be eliminated if, at the normalization stage, the data are mapped into the open interval (0, 1). To do this, the data range (the denominator) in the Max-Min method is increased by using the concept of an ideal positive value a_j^+ (a_j^+ > a_j^max) and an ideal negative value a_j^- (a_j^- < a_j^min). The normalization formula is converted to the form:


Fig. 9.4 Various options for transformation functions in post-processing data

Max-Min(a_ij) = (a_ij − a_j^-) / (a_j^+ − a_j^-).  (9.12)

The choice of a function for data transformation requires a preliminary analysis of the distribution of attribute values and must match the data structure. As in the case of data pre-processing, the selection should be carried out differentially for the various attributes (taking their distributions into account) and can be performed by the criterion of minimizing the asymmetry in the data.

The function variants presented in Fig. 9.4 represent one-sided transformations of data asymmetry. For decision-making problems with "benefit" attributes, processing of the right-hand range of values is what matters, since it is the values close to 1 that determine the final rating of the alternatives; processing of the left-hand area can be ignored. In the case of "cost" attributes, processing of the left-hand value area is relevant. Given the symmetry of the values, it is possible to invert the data using the ReS-algorithm and, instead of transforming the left-hand side of the values, transform the right-hand side. Also of practical interest are transformations that spread outliers over the entire range of values [0, 1].

Below are various options for transformation functions and a discussion of their practical use.


1. Piecewise Linear Function (PwL) [4]:

f(r_ij) = 0 for r_ij ≤ p_j;
f(r_ij) = (r_ij − p_j) / (q_j − p_j) for p_j < r_ij ≤ q_j;
f(r_ij) = 1 for r_ij > q_j.  (9.13)

The parameters p_j, q_j (0 ≤ p_j < q_j ≤ 1) are set for each jth criterion, taking into account the preferences of the decision maker (the non-formalized part of the procedure). The value p_j determines the threshold below which the influence of alternatives with attributes worse than p_j is attenuated; similarly, q_j defines the threshold above which the influence of alternatives with attributes better than q_j is amplified. These "attenuation-amplification" parameters p_j, q_j are set for each criterion in fractions of unity (or percentages), which is easier to determine intuitively than specifying them in natural attribute values. In this way, the contribution of "weak" or "strong" alternatives to the efficiency indicator can be artificially reduced or increased somewhat during aggregation, which facilitates the subsequent ranking of alternatives. For the range of acceptable attribute values (p_j < r_ij < q_j), normalization is performed by a linear transformation that maintains the relative proportions. The function defined by formula (9.13), which saturates the values on three intervals of the domain, is a special case. The general piecewise linear transformation has the form:

f(r_ij) = (p_y / p_x) · r_ij for r_ij ≤ p_x;
f(r_ij) = ((q_y − p_y) / (q_x − p_x)) · (r_ij − p_x) + p_y for p_x < r_ij ≤ q_x;
f(r_ij) = ((1 − q_y) / (1 − q_x)) · (r_ij − q_x) + q_y for r_ij > q_x.  (9.14)

Function (9.14) is defined by specifying two points of the domain with coordinates (p_x, p_y) and (q_x, q_y) for each attribute. Figure 9.5 gives a graphical illustration of the transformation of normalized values based on a piecewise linear function. Compression of the data occurs on segments of the piecewise linear profile with slope less than 1, and stretching on segments with slope greater than 1. The corresponding areas are shown in Fig. 9.5 on the right as a bar map.
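A sketch of the general piecewise linear transformation (9.14), with the clipping special case (9.13) recovered by setting p_y = 0 and q_y = 1 (function and variable names are illustrative):

```python
def pwl(r, px, py, qx, qy):
    """General piecewise linear post-processing (9.14) through (px, py), (qx, qy)."""
    if r <= px:
        return py / px * r
    if r <= qx:
        return (qy - py) / (qx - px) * (r - px) + py
    return (1 - qy) / (1 - qx) * (r - qx) + qy

# Special case (9.13): py = 0, qy = 1 attenuates values below p and
# amplifies values above q.
def clip(r, p, q):
    return pwl(r, p, 0.0, q, 1.0)

vals = [0.05, 0.2, 0.5, 0.8, 0.95]
print([round(clip(v, 0.1, 0.9), 3) for v in vals])  # [0.0, 0.125, 0.5, 0.875, 1.0]
```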


Fig. 9.5 Various transformation options for post-processing data based on piecewise linear functions

2. S-shaped Spline Function (SSp):

f(r_ij) = 0 for r_ij ≤ p_j;
f(r_ij) = 2 · ((r_ij − p_j) / (q_j − p_j))² for p_j < r_ij ≤ (p_j + q_j)/2;
f(r_ij) = 1 − 2 · ((q_j − r_ij) / (q_j − p_j))² for (p_j + q_j)/2 < r_ij ≤ q_j;
f(r_ij) = 1 for r_ij > q_j;  0 ≤ p_j ≤ q_j ≤ 1.  (9.15)

This is the smooth, spline-based counterpart of the PwL function. The attenuation-amplification parameters of the influence of the attributes (p_j, q_j) are set similarly.

3. Gaussian-based Function (GBF):

f(r_ij) = 1 − exp(−r_ij² / (2 · s_j²)), s_j = std(r_ij), j = 1, ..., n.  (9.16)


This represents a transformation with a convex functional profile in the vicinity of 1 and a concave functional profile in the vicinity of 0. Such a function allows the contribution of "strong" alternatives to be increased and the contribution of "weak" alternatives to the efficiency indicator to be reduced.

4. Sigmoid Function (Sgm) or Logistic Function [5]:

f(r_ij) = 1 / (1 + e^(−k_j · (r_ij − p_j))),  (9.17)

where k_j is the slope factor and p_j is the center of symmetry: f(p_j) = 0.5. A sigmoid function is a mathematical function that has a characteristic S-shaped (sigmoid) curve. To specify this function on the region [0, 1], it is necessary to determine the center of symmetry p_j and the compression factor k_j along the argument axis. The center of symmetry corresponds to the average value of the jth attribute over the considered set of alternatives, while the compression factor determines the degree of compression-stretching of the data during transformation. These parameters allow the Sgm() function to be fine-tuned for an efficient skewness transformation. With k = 12, 99.97% of the transformed standardized values from the normal distribution belong to the interval [0, 1].

Along with the main functions, their inverses can also be used for transformation. For example, for the SSp-function the inverse function is:

f(r_ij) = p_j for r_ij = 0;
f(r_ij) = p_j + (q_j − p_j) · (r_ij/2)^0.5 for 0 < r_ij ≤ 1/2;
f(r_ij) = q_j − (q_j − p_j) · ((1 − r_ij)/2)^0.5 for 1/2 < r_ij < 1;
f(r_ij) = q_j for r_ij = 1;  0 ≤ p_j ≤ q_j ≤ 1.  (9.18)

For the Sgm-function, the inverse function has the form:

f(r_ij) = (1/k_j) · ln(r_ij / (1 − r_ij)) + p_j.  (9.19)

Figure 9.6 gives a graphical illustration of the transformation of normalized values based on various S-shaped functions. Data compression occurs in the areas of the convex profile of the function, and stretching occurs in the areas of the concave profile. The corresponding areas are shown in Fig. 9.6 on the right as a bar map.
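The SSp (9.15) and Sgm (9.17) transformations can be sketched directly (parameter values here are illustrative):

```python
import math

def ssp(r, p, q):
    """S-shaped spline (9.15): smooth attenuation below p, amplification above q."""
    if r <= p:
        return 0.0
    if r <= (p + q) / 2:
        return 2 * ((r - p) / (q - p)) ** 2
    if r <= q:
        return 1 - 2 * ((q - r) / (q - p)) ** 2
    return 1.0

def sgm(r, k, p):
    """Sigmoid (9.17) with slope factor k and center of symmetry p."""
    return 1 / (1 + math.exp(-k * (r - p)))

p, q = 0.2, 0.8
print(ssp(p, p, q), round(ssp((p + q) / 2, p, q), 3), ssp(q, p, q))  # 0.0 0.5 1.0
print(round(sgm(0.5, 12, 0.5), 2))  # 0.5 at the center of symmetry
```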


Fig. 9.6 Various transformation options for post-processing data based on S-shaped functions

9.3.2 Post-processing with Z-Score Normalization

In practical applications, any dataset X_i with mean x* and standard deviation s can be converted into another set with mean 0 and standard deviation 1. The converted Z-values are directly expressed as deviations of the original values from the mean, measured in units of standard deviation. The fact that Z-scores correspond to the standard normal distribution N(0, 1) makes it possible to use them to compare heterogeneous values of primary measurements. Most statistical methods are based on the assumption that the data are normally distributed, so the use of Z-scores in conjunction with a transformation to normality greatly expands the possibilities for further analysis and research. For multivariate data, the standard scores (standardized variables) are calculated as:

z_ij = (a_ij − a_j*) / s_j,  (9.20)

where a_j* and s_j are the mean and standard deviation of the values of the jth attribute, respectively. Standardization, according to Eq. (4.7) in Table 4.1, is one of the variants of linear normalization with a displacement.

Data outliers can be identified using the k-sigma rule: for normally distributed data, the interval (−3σ, 3σ) contains 99.7% of the observations, and outside the 5σ interval there is less than one observation per million. The k-sigma rule allows the data to be censored as follows: all Z-scores outside the k-sigma interval are set equal to

z_ij = sign(z_ij) · k, if |z_ij| > k,  (9.21)

Fig. 9.7 Different transformation variants for Z-score

where the function sign(z) returns the sign of the value z. Z-standardization is a linear transformation, so outliers and skewness remain unchanged (unless such data censoring is applied). Let us post-process the Z-scores to eliminate the skewness, f: Z → (0, 1). Such a transformation is possible using various functions; Fig. 9.7 shows several variants of S-shaped functions [5] that meet the requirements of data post-processing.

Sigmoid normalization is appropriate when there are outliers in the data that cannot be excluded from consideration. It prevents the most frequently occurring values from being compressed into nearly identical values, without losing the ability to represent very large outlier values. In the region of the convex profile of an S-shaped function, the transformation compresses the data toward 1, and in the region of the concave profile, toward zero. The corresponding areas are shown in Fig. 9.7 on the right for the error function in the form of a bar map. The general sequence of transformation of Z-scores is as follows:

r_ij = f(z_ij) = f(Z(a_ij)).  (9.22)


Below are the most commonly used S-shaped functions, shifted and scaled to the range (0, 1):

1. Error function [5]:

r_ij = (1/2) · (1 + erf(z_ij)),  (9.23)

erf(z) = (2/√π) · ∫₀^z e^(−t²) dt.  (9.24)

2. Normal cumulative distribution function NormCDF(x) [6]:

r_ij = NormCDF(z_ij) = (1/2) · (1 − erf(−z_ij/√2)),  (9.25)

3. The logistic function, or logistic curve, the usual S-shaped (sigmoid) curve [5]:

r_ij = 1 / (1 + e^(−z_ij)),  (9.26)

4. Hyperbolic tangent (a shifted and scaled version of the logistic function above) [5]:

r_ij = (1/2) · (1 + tanh(Z(a_ij))) = (1/2) · (1 + (e^z − e^(−z)) / (e^z + e^(−z))) = 1 / (1 + e^(−2z)).  (9.27)

The above transformations belong to the class of "SoftMax normalizations," which are widely used in machine learning [7]. The term applies to transformations for which the normalized values approach their maximum and minimum values smoothly but never reach them. Scaling of the above functions (compression-stretching along the X-axis) is achieved by introducing a coefficient k into the argument: r = f(kz). For k > 1 the profile is compressed, for 0 < k < 1 it is stretched. Compressing the function profile along the X-axis leads to stronger data compression near the end values 0 and 1. An example of scaling S-shaped functions is shown in Fig. 9.8 for the hyperbolic tangent.

Fig. 9.8 Normalization using hyperbolic tangent

Along with the main functions, as in the case of post-processing based on Max-Min, inverse functions can also be used for the transformation. Figure 9.9 shows various options for transformations based on the inverse S-shaped functions given by formulas (9.23)-(9.27). In the area of the concave profile of an inverse S-shaped function, the transformation stretches the data away from 1 (along the Y-axis), and in the area of the convex profile, away from zero; the area of average values is compressed. The corresponding areas are shown for the error function in Fig. 9.7 on the right as a bar map.

Fig. 9.9 Various variants of inverse transformation for Z-score

In accordance with the graphs of the S-shaped functions in Fig. 9.7, the inverse functions have domain (0, 1) and range [−σ, σ]. Therefore, it is necessary to scale and shift the domain of definition and the range of values. Transformation functions based on the inverses of the various S-shaped functions are obtained in several steps:

1. Transform the Z-scores into the region [0, 1] by applying Max-Min normalization:

x_ij = Max-Min(z_ij),  (9.28)

2. Redefine the values at the ends of the segment [0, 1] in terms of the values of the S-shaped function at the points ±σ: x0 = f(σ). For example, for the logistic function x0 = 1/(1 + e^(−σ)); with σ = 3, x0 ≈ 0.953. Then:

if x_ij = 0, set x_ij = 1 − x0; if x_ij = 1, set x_ij = x0, so that x_ij ∈ (0, 1).  (9.29)

3. Calculate the values using the inverse function:

y_ij = ln(x_ij / (1 − x_ij)) ∈ (−σ, σ),  (9.30)

4. Scale the obtained values:

r_ij = Max-Min(y_ij).  (9.31)

After scaling, the normalized values belong to the region [0, 1].
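Steps 1-4 for the logistic (logit) variant can be sketched as follows; the function names and the illustrative Z-scores are assumptions:

```python
import math

def max_min(values):
    """Max-Min normalization to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def inverse_logistic_postprocess(z, sigma=3.0):
    """Steps 1-4: map Z-scores through the inverse logistic (logit) function."""
    x = max_min(z)                                   # step 1: into [0, 1]
    x0 = 1 / (1 + math.exp(-sigma))                  # step 2: endpoint values
    x = [x0 if v == 1 else (1 - x0 if v == 0 else v) for v in x]
    y = [math.log(v / (1 - v)) for v in x]           # step 3: logit, in (-sigma, sigma)
    return max_min(y)                                # step 4: rescale to [0, 1]

z = [-1.2, -0.4, 0.1, 0.6, 2.3]  # illustrative Z-scores
r = inverse_logistic_postprocess(z)
print([round(v, 3) for v in r])
```

The transformation is strictly monotone, so the ordering of the Z-scores is preserved while the middle of the range is compressed.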

9.3.3 Weighted Product Model and Post-processing of Normalized Values

A number of authors, for example [8, 9], use the following alternative version of the main formula of the Weighted Product Method (WPM) for aggregating the attributes:

Q_i = ∏_{j=1}^n (r_ij)^(w_j),  (9.32)

where r_ij are the normalized attribute values and w_j are the criteria weights. Taking the logarithm of equality (9.32) transforms it into:

T_i = log_a(Q_i) = Σ_{j=1}^n w_j · log_a(r_ij).  (9.33)

Equality (9.33) is a variant of the WSM (Weighted Sum Method) aggregation for logarithmic data. Taking the logarithm of normalized attribute values from the interval (0, 1] maps the data into the interval (−∞, 0]. Values close to 0 are stretched without bound, and therefore such values can significantly affect the rating. A simple example of ordering two alternatives by two features under the WSM and WPM aggregation methods demonstrates this feature (Table 9.3): alternative A1 takes precedence over alternative A2 under the WSM aggregation method, and vice versa, alternative A2 takes precedence over alternative A1 under the WPM method. Such a result is possible when the value of just one attribute is close to 0. If formula (9.32) contains at least one factor close to zero, the efficiency indicator drops below this value and, despite high values of the other attributes, the alternative receives a low rating. These considerations indicate the limitations of using the WPM version in the form (9.33), since the ranking results are strongly sensitive to the attribute values of individual alternatives.

Table 9.3 Changes in rating when using the logarithmic transformation

              C1      C2     Q (WSM)         T (WPM)
A1            a^-3    0.9    0.9 + a^-3      -3 + log_a(0.9)
A2            a^-2    0.8    0.8 + a^-2      -2 + log_a(0.8)
w             1       1
Preference:                  Q1 > Q2 → A1    T2 > T1 → A2
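The rank reversal in Table 9.3 can be reproduced numerically, for instance with a = 10 (the concrete value is an illustrative assumption):

```python
import math

# Normalized decision matrix from Table 9.3 with a = 10: rows A1, A2.
R = [[10 ** -3, 0.9], [10 ** -2, 0.8]]
w = [1, 1]

wsm = [sum(wj * r for wj, r in zip(w, row)) for row in R]
wpm = [math.prod(r ** wj for wj, r in zip(w, row)) for row in R]

print("WSM:", wsm, "-> best:", "A1" if wsm[0] > wsm[1] else "A2")  # A1
print("WPM:", wpm, "-> best:", "A1" if wpm[0] > wpm[1] else "A2")  # A2
```

The single near-zero attribute of A1 dominates the product in (9.32), reversing the order produced by the weighted sum.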


9.4 Inversion of Normalized Values and Matching the Areas of Normalized Values of Different Criteria

For cost criteria (STB criteria), a non-linear transformation is constructed in a similar way using a monotonically decreasing function. However, this is not necessary, since the normalized values can easily be obtained by inverting the LTB values using the ReS-algorithm [10, 11].

The non-linear normalization procedures (single- or multi-step) described in this chapter map natural values into the interval [0, 1] or (0, 1), which creates certain limitations when aggregating normalized values. Can the range of normalized values be changed? Yes, by means of the IZ transformation [11, 12], whose methodology is described above in Chap. 7. Multivariate normalization, regardless of whether different methods are used for individual features, requires matching the areas of normalized values, which is achieved by the IZ transformation. To do this, a conditionally common scale [I, Z] must be determined for all criteria; this procedure is described in detail in Chap. 7.

The IZ transformation maps the normalized values into a given interval, which makes it possible to harmonize the measurement scales of the normalized values of different attributes. One of the main properties of the IZ transformation is the preservation of the dispositions of the normalized values. This property also holds when the original data are transformed using non-linear normalization procedures. Figure 9.10 shows the IZ transformation of normalized values into the interval [0.33, 0.75]: non-linear normalization is performed using the hyperbolic tangent and the logistic function, followed by the IZ transformation of the normalized values into the region [0.33, 0.75].

Fig. 9.10 Non-linear normalization of Z-scores (a) and subsequent IZ transformation (b) of normalized values into the region [0.33, 0.75]
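Since the IZ transformation preserves dispositions while mapping into a given interval [I, Z], it can be sketched as the affine remapping below; this form is an assumption based on the description of Chap. 7, not a verbatim reproduction of it:

```python
def iz_transform(r, I, Z):
    """Map normalized values from [0, 1] into [I, Z], preserving dispositions.
    Assumes the IZ transformation of Chap. 7 is this disposition-preserving
    linear map."""
    return [I + v * (Z - I) for v in r]

r = [0.0, 0.25, 0.5, 1.0]
print([round(v, 3) for v in iz_transform(r, 0.33, 0.75)])  # [0.33, 0.435, 0.54, 0.75]
```

Relative positions inside the interval are unchanged: the midpoint of [0, 1] maps to the midpoint of [0.33, 0.75].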

9.5 Numerical Example of Data Pre-processing

This section presents a numerical example of data pre-processing performed according to the scheme described above. Consider a decision-making problem of dimension [8×5]: 8 alternatives and 5 criteria. For each feature we generate data according to a given distribution law: the first feature has a normal distribution, the second a lognormal distribution, the third a log-logistic distribution, the fourth an exponential distribution, and the fifth a Poisson distribution. The distribution parameters and the functions used to generate the random variables are given below:

μ = 20, σ = 6, λ = 200, τ = 2, x_i = rnd(),
a_i1 = icdf('Normal', x, μ, σ),
a_i2 = icdf('Lognormal', x, log(μ), log(σ)),
a_i3 = icdf('LogLogistic', x, log(μ), log(σ)),
a_i4 = icdf('Exponential', x, τ),
a_i5 = icdf('Poisson', x, λ),

where icdf() is the inverse cumulative distribution function of the distribution family specified by name (Normal, Lognormal, LogLogistic, Exponential, Poisson), evaluated at the probability values x with the given parameter values.

Only the first of the five selected distributions is symmetric, which makes data pre-processing expedient. If x is lognormally distributed with parameters μ and σ, then log(x) is normally distributed with mean μ and standard deviation σ; this defines a logarithmic skew. The exponential distribution, by definition, has a skewness of 2. The Poisson distribution is suitable for applications that involve counting the number of times a random event occurs in a given amount of time; its skewness is 1/√λ.

In this way, we generate a decision matrix with the given distribution of features. Let us perform a statistical experiment consisting in estimating the mean values of the skewness and the nonparametric skew of the transformed data (after pre-processing the decision matrix).
Subsequent linear normalization will not change the skewness, as shown above. We use three different transformations: the power functions f1 = a^0.5 and f2 = a^0.3, and the logarithmic function f3 = log(a). Power functions "compress" the data less strongly than the logarithmic transformation, and the choice of the exponent allows the degree of data compression, and hence the asymmetry, to be controlled. A logarithmic transformation can be justified if we want to equalize the contributions of alternatives with very different attributes. Given how strongly logarithmic scales transform the data, a power function is often the more appropriate choice. Table 9.4 presents a numerical example of the degree of data compression for the logarithmic and power transformations.

9 Non-linear Multivariate Normalization Methods

Table 9.4 Changes in the proportions of normalized values during logarithmic and power transformation of the initial data

  #                    1       2      x2/x1
  x                   10     100     10
  f1(x) = x^0.5        3.2    10      3.1
  f2(x) = x^0.3        1.99    3.98   2.1
  f3(x) = log10(x)     1       2      2

Table 9.5 Changes in proportions of normalized values during logarithmic transformation of the initial data

                         C1      C2      C3      C4      C5
Skewness coefficients, γ
  a                    0.010   1.805   2.218   1.033   0.047
  a^0.5               -0.252   1.207   1.743   0.379  -0.010
  a^0.3               -0.358   0.806   1.309   0.034  -0.032
  log(a)              -0.513   0.010   0.014  -0.532  -0.066
Nonparametric skew, Sk
  a                   -0.003   0.373   0.410   0.233   0.004
  a^0.5               -0.057   0.255   0.345   0.078  -0.008
  a^0.3               -0.078   0.168   0.260   0.001  -0.013
  log(a)              -0.107  -0.000  -0.004  -0.117  -0.020
Disposition between first and second points
  a                     19.1    52.6    67.6    33.4    19.6
  a^0.5                 16.4    37.8    51.8    23.1    19.9
  a^0.3                 15.3    30.4    41.1    19.9    19.6
  log(a)                14.6    19.3    21.2    13.6    19.2
Disposition between second and third points
  a                     13.7    21.0    18.5    19.6    13.9
  a^0.5                 12.3    19.3    19.7    15.8    13.6
  a^0.3                 11.7    17.6    18.6    13.8    13.5
  log(a)                10.4    13.7    13.2    10.7    13.3

Following the principle of additive significance of the attributes of alternatives, the relative contribution of the second alternative decreased after the data transformation from 10 times to 2 times that of the first alternative, i.e. by a factor of 5. Subsequent linear normalization of the transformed values does not change these proportions. Table 9.5 shows the average values of the skewness and nonparametric skew of the transformed data, calculated from the results of 1000 repeated random generations of the decision matrix with the given feature distribution laws. The results show that after transformation, the skewness coefficients for attributes with outliers (right skew) decrease toward zero and become negative (left skew) under strong compression. For data without outliers (C1), the skewness increases in absolute value. Accordingly, data transformation (pre-processing) must be performed differentially for each feature. The choice of the type of transformation function in the class of admissible functions (determined in accordance with the objectives of the transformation) for each feature can be performed according to the condition of minimizing the asymmetry in the data. In the presented example, no transformation is necessary for the first attribute; for the second and third attributes, minimization of asymmetry (over a discrete set of admissible functions) is achieved by the logarithmic transformation; for the fourth and fifth attributes, it is optimal to use power transformations of the form a^0.3 and a^0.5, respectively.
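The selection rule above (minimize the absolute skewness over a discrete set of admissible transforms) can be sketched as follows; `skewness`, `nonparam_skew`, and `best_transform` are illustrative helper names, and the sample column is made-up data, not the book's:

```python
import math
import statistics as st

def skewness(x):
    """Sample skewness coefficient, gamma = m3 / s^3."""
    n, mean, s = len(x), st.fmean(x), st.pstdev(x)
    return sum((v - mean) ** 3 for v in x) / (n * s ** 3)

def nonparam_skew(x):
    """Nonparametric skew, Sk = (mean - median) / std."""
    return (st.fmean(x) - st.median(x)) / st.pstdev(x)

# the discrete set of admissible transforms discussed in the text
transforms = {"a": lambda v: v, "a^0.5": lambda v: v ** 0.5,
              "a^0.3": lambda v: v ** 0.3, "log(a)": math.log}

def best_transform(column):
    """Pick the transform minimizing |skewness| for one feature column."""
    return min(transforms,
               key=lambda name: abs(skewness([transforms[name](v) for v in column])))

# e.g. a strongly right-skewed attribute column (made-up values)
col = [5, 7, 9, 12, 18, 30, 60, 150]
choice = best_transform(col)
```

Applying `best_transform` column by column reproduces the differentiated per-feature pre-processing described in the text.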

9.6 Numerical Example of Data Post-processing

This section presents a numerical example of the transformation of Z-scores, performed according to the scheme described above, for the decision matrix defined by Eq. (9.8):

D1 = [a_ij] =
    ⎡  12   134  3200 ⎤
    ⎢  21   205  3100 ⎥
    ⎢  16   154  1400 ⎥
    ⎢  65   900  1200 ⎥
    ⎣ 120   320  1100 ⎦

The first two attributes of the [5×3] decision matrix have significant outliers in the data (one order of magnitude), and as a consequence a high value of the asymmetry coefficient. The third attribute is characterized by data grouped around the smallest and largest values. Four different functions were used for the transformation: Erf(k·Z) (the error function), tanh(k·Z), NormCdf(k·Z), and Logistic(k·Z), which were described above, as well as the four functions inverse to them. The optimal transformation is chosen on a discrete set of "direct" functions with a fixed set of scaling factors k = 0.5, 1.0, 1.5, 2.0, 2.5 in two versions: (1) by the criterion of minimizing the asymmetry coefficient, and (2) by the criterion of minimizing the skew coefficient. The optimal variant on the discrete set of "inverse" functions is chosen similarly. Thus, four different solutions (non-linear transformations of Z-scores) were calculated. Graphical results are shown in Fig. 9.11. The abscissa shows the transformations for the three attributes in five variants. The zero option represents the transformation of the Z-scores of each attribute by the Max-Min method. The numbers 1, 2, 3, 4 along the abscissa axis correspond to the functions selected for transformation (see the legend), and the prefix "i" before a number means the use of the inverse function; for example, i1 means applying the inverse logistic function.

Fig. 9.11 Non-linear transformation 1 of Z-scores

The values of the scaling factors for the selected functions are determined by the optimality criterion and are shown on the graph for each of the transformations. Figure 9.11 additionally shows the values of the coefficients of asymmetry and skew. The optimal variant (minimum value of the skewness coefficients) for the transformation of the values of the second attribute in the class of "direct" functions is the error function with the coefficient k = 2; in the class of inverse functions, it is the inverse logistic function (k = 0.5). The value of the data skewness decreased from 1.95 to 0.46. A graphical illustration of the non-linear normalization of the values of the second attribute is shown in Fig. 9.12. The results for the other attributes are interpreted similarly. Normalized values obtained using the non-linear transformations and Max-Min normalization are depicted on the right scale of the Y-axis.
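A minimal sketch of selecting a "direct" transformation and scaling factor k by the minimum-|skewness| criterion; the bounded functions are rescaled to (0, 1) here for comparability, and the helper names and exact rescaling are assumptions, not the book's code:

```python
import math
import statistics as st

def zscores(x):
    m, s = st.fmean(x), st.pstdev(x)
    return [(v - m) / s for v in x]

def skewness(x):
    n, m, s = len(x), st.fmean(x), st.pstdev(x)
    return sum((v - m) ** 3 for v in x) / (n * s ** 3)

# "direct" bounded transformations of Z-scores, rescaled to (0, 1)
def logistic(z): return 1 / (1 + math.exp(-z))
def erf01(z):    return 0.5 * (1 + math.erf(z))
def tanh01(z):   return 0.5 * (1 + math.tanh(z))
def normcdf(z):  return st.NormalDist().cdf(z)

def best_variant(a, funcs, ks=(0.5, 1.0, 1.5, 2.0, 2.5)):
    """Search the discrete set of functions and scaling factors for the
    transform f(k*Z) minimizing the absolute skewness coefficient."""
    z = zscores(a)
    return min(((f, k) for f in funcs for k in ks),
               key=lambda fk: abs(skewness([fk[0](fk[1] * v) for v in z])))

a2 = [134, 205, 154, 900, 320]   # second attribute of D1 (outlier at 900)
f, k = best_variant(a2, [logistic, erf01, tanh01, normcdf])
```

The same search over the inverse functions yields the second family of solutions described in the text.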

9.7 Conclusions

The presence of non-typical values in a multivariate data sample for individual attributes is one of the arguments for using non-linear transformations in normalization. The techniques described in this chapter allow the influence of individual observations or groups of observations to be increased or decreased, including changing the skewness in the data. The procedure is unified across the various non-linear transformations by using successive normalization. Natural values are first converted using the Max-Min or Z-score methods. Next, a strictly monotonic non-linear transformation is applied that maps the data to [0, 1]. The choice of a non-linear transformation is multivariate, and the choice of the function parameters is determined by the decision maker by setting certain pairs of values. For the STB target, normalized values are obtained by inverting the LTB values using the ReS-algorithm. Multivariate normalization, regardless of the application of different methods to individual features, requires matching the areas of normalized values, which is achieved by using the IZ and MS transformations.

Fig. 9.12 Non-linear transformation 2 of Z-scores


Chapter 10

Normalization for the Case “Nominal Value the Best”

Abstract This chapter provides an overview and analysis of various target criteria normalization procedures. For rank-based MCDM models, the normalization of the indicators of all criteria—benefit, cost, and target criteria—should be consistent in order to exclude the priority of the normalized values of individual criteria during their further aggregation. If the Sum method is used to normalize the benefit attributes, then the inverse iSum (ReS-algorithm) should be applied to the cost criteria, and the agreed tSum to the target attributes. This excludes the priority of the normalized values of individual criteria in their further aggregation and ensures an equal contribution of the normalized values of each criterion to the integral performance indicator. The author proposes a generalization of normalization methods for target criteria that ensures agreement with the main linear methods for normalizing benefit and cost attributes.

Keywords Multivariate normalization · Target criteria · Linear and non-linear t-normalization methods · Desirability functions

10.1 Target Criteria and Target-Based Normalization

The target attribute value can be of three types:
1. Larger-the-better (LTB): a larger value is better and smaller values are undesirable. The ideal target value is infinity. Examples of this type are strength, life, efficiency, etc.
2. Smaller-the-better (STB): a smaller value is better and higher values are undesirable, such as vehicle emissions, fuel consumption, wear, shrinkage, pollution, deterioration, etc.
3. Nominal-the-best (NTB): the nominal value is best because it is the one that satisfies the customer's need; a characteristic with a specific target value, such as consistency, dimensions, viscosity, clearance, etc.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_10


In the first case, the criterion type is designated as a "profit" or "benefit" criterion. The STB target value defines cost criteria. For the NTB target value, there is no well-established term in the literature; the term target criteria is often used, although every criterion has a purpose. In this book, criteria with the NTB target value are referred to as target criteria or t-criteria.

For selection problems, the target attribute value is either the maximum over the set of available alternatives, a_j^max = max_i a_ij (benefit attributes), the minimum, a_j^min = min_i a_ij (cost attributes), or some intermediate value a_j^t, a_j^min < a_j^t < a_j^max, which is defined as the target (target attributes). If the target value of the attribute is not attained by any of the alternatives, it is advisable to take as a_j^t the value of the alternative closest to the target. In that case, the maximum (or minimum) normalized value is the attribute value of one of the alternatives. This does not contradict the formulation of the problem, in which one of the alternatives is preferable for each attribute. Target values greater than the maximum or less than the minimum are relevant in optimization problems, where a criterion may be improved without restrictions.

When aggregating normalized attributes using additive methods, the direction of the criteria must be the same: for all attributes, the best value should be either the largest (maximization) or the smallest (minimization). In the first case, the first-rank alternative has the highest value of the efficiency indicator (max Q_i), and the alternatives are ranked in descending order of the indicator. In the second case, the first-rank alternative has the lowest value (min Q_i), and the alternatives are ranked in ascending order.
Because of this, for selection problems on a finite set of alternatives, the values of the benefit (or cost) attributes must be matched with the attributes of the target criteria. This is achieved by inverting the target from minimum to maximum or vice versa. Target inversion is performed at the normalization step by inverting values using the ReS-algorithm, detailed in Chap. 5. The choice of the direction of maximizing or minimizing the performance indicator does not affect the ranking result. As a rule, the choice is determined by the ratio of the number of criteria for which "less is better" or "more is better," following the principle of reducing the number of algebraic data transformations. The attribute values for the target criteria are normalized taking this choice into account. In the case when the target nominal value of the attribute is the best, i.e. some intermediate value between the highest and the lowest:

a_j^t : a_j^min < a_j^t < a_j^max,    (10.1)


Fig. 10.1 Configuration of normalized value domains for various linear multivariate normalization methods

normalization is performed in such a way that the normalized target value is the largest for the direction of maximization or the smallest for the direction of minimization:

max Q_i :  r_j^t ≥ r_ij,  ∀i,    (10.2)

min Q_i :  r_j^t ≤ r_ij,  ∀i.    (10.3)

Obviously, the normalized target value of an attribute is a local extremum on the set of normalized attribute values. In the processing of multidimensional data, a linear transformation of the feature values is most often used:

r_ij = (a_ij - a_j*) / k_j,    (10.4)

where a_ij and r_ij are the natural and normalized values of the jth attribute of the ith alternative, respectively, and a_j* and k_j are pre-assigned numbers that determine, respectively, the shift of the normalized values and the degree of their compression. All quantities on the right side of Eq. (10.4) have the same dimension, which provides a conversion to dimensionless values. In addition, k_j is chosen to be no less than the numerator, which ensures that the natural attribute values are mapped into some fixed subdomain of the segment [0, 1]. The most commonly used normalizations, Max, Sum, Vec, Max-Min, dSum, and Z-score, are described in detail in Chap. 4. The characteristic features of multidimensional normalization (even within the framework of one normalization method) are obvious [1–3] (Fig. 10.1):


1. different data compression for different attributes,
2. displacement of the normalized value domains for different attributes.

These are the main factors influencing the subsequent rating, and it is these effects that need to be adjusted (coordinated) when choosing a normalization method, choosing an inversion method, and performing additional data transformation. As a consequence, unrelated normalization methods cannot be applied to different attributes: if, for example, Sum normalization is applied to the benefit attributes, then the same (agreed) method must be applied to the cost attributes and the t-attributes. Unfortunately, a significant number of works have been and are being carried out without due attention to these problems.

10.2 Review of Target Normalization Methods

This section presents various options for target-based normalization, focused on the simultaneous processing of all types of criteria. One of the target-based normalization options in the case of maximizing the performance indicator of alternatives, called the ideal-linear method, has the form [4]:

r_ij = min(a_ij, a_j^t) / max(a_ij, a_j^t) = { a_ij / a_j^t,  if a_ij < a_j^t;  a_j^t / a_ij,  if a_ij ≥ a_j^t },    (10.5)

The normalization formula represents an analog of the Max normalization on the interval [a_j^min, a_j^t) and of the inverse iMax (inverse Max) normalization on the interval [a_j^t, a_j^max]. Therefore, some of the values are normalized according to the linear algorithm, while the values exceeding a_j^t are normalized using a non-linear (hyperbolic) transformation that changes the dispositions of the natural values. The compression ratio (slope of the straight line) for this method does not match any of the linear normalization methods, which means that the method is not consistent with the benefit and cost attribute normalization methods. The next variant of target-based normalization (the nominal-is-best method) has the form [5, 6]:

r_ij = 1 - |a_ij - a_j^t| / (a_j^max - a_j^min),    (10.6)

Normalization formula (10.6) also represents an analog of the Max-Min normalization method on the interval [a_j^min, a_j^t) and of the inverse normalization iMax-Min on the interval [a_j^t, a_j^max]. The range of normalized values is [r_j^min, 1].


The third variant of target-based normalization in the case of maximizing the performance indicator of alternatives has the form [7]:

r_ij = 1 - |a_ij - a_j^t| / (max_i(a_ij, a_j^t) - min_i(a_ij, a_j^t)),    (10.7)

Normalization formula (10.7) represents an analog of the Max-Min normalization method on the interval [a_j^min, a_j^t) and of the inverse normalization iMax-Min on the interval [a_j^t, a_j^max]. The range of normalized values is [0, 1]. However, the stretch-contraction ratio of the natural values on these two intervals is generally different and proportional to the length of the interval. Therefore, the proportions of the normalized data of the same attribute to the left and to the right of the target value a_j^t will differ, and this may affect the final rating. The compression ratio for this method also does not match any of the linear normalization methods, which means that the method is not consistent with the normalization of benefit and cost attributes. In formula (10.6), unlike (10.7), the data stretch-compression ratio is fixed; the proportions of the normalized data of one attribute do not change in absolute value, and formula (10.6) is therefore preferable. Figure 10.2 shows a graphical illustration of target-based normalization by Eqs. (10.5)–(10.7) for the case of maximizing the performance indicator of alternatives (solid line) and for the case of minimizing it (dashed line); the latter was obtained by inverting the values using the ReS-algorithm by Eq. (5.12) from Chap. 5. Initial data: a = (448, 478, 564, 580, 610, 615, 620, 667) correspond to the third attribute in Fig. 10.1; the target nominal score is 615. The fourth and fifth plots in Fig. 10.2 represent the non-linear transformation of the target-based normalization methods (Eqs. 10.6 and 10.7) [8]:

r_ij = exp( -|a_ij - a_j^t| / (a_j^max - a_j^min) ),    (10.8)

r_ij = exp( -|a_ij - a_j^t| / (max_i(a_ij, a_j^t) - min_i(a_ij, a_j^t)) ),    (10.9)
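Formulas (10.5)–(10.9) can be sketched as follows (maximization direction; the method labels are illustrative tags, not the book's notation):

```python
import math

def target_norm(a, t, method):
    """Target-based normalization of one attribute column a with target
    nominal value t (maximization direction); sketch of Eqs. (10.5)-(10.9)."""
    amin, amax = min(a), max(a)
    lo, hi = min(amin, t), max(amax, t)      # min_i / max_i over {a_ij, a_j^t}
    if method == "10.5-ideal-linear":
        return [min(v, t) / max(v, t) for v in a]
    if method == "10.6-nominal-is-best":
        return [1 - abs(v - t) / (amax - amin) for v in a]
    if method == "10.7":
        return [1 - abs(v - t) / (hi - lo) for v in a]
    if method == "10.8":
        return [math.exp(-abs(v - t) / (amax - amin)) for v in a]
    if method == "10.9":
        return [math.exp(-abs(v - t) / (hi - lo)) for v in a]
    raise ValueError(method)

a = [448, 478, 564, 580, 610, 615, 620, 667]   # third attribute, target 615
r = target_norm(a, 615, "10.7")
# the alternative at the target value receives the largest normalized score
```

All five variants place the largest normalized value at the target, which is exactly the local-extremum property required by Eqs. (10.2)–(10.3).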

For all the presented target-based normalization formulas there is a modulus function, which produces a break in the normalization line at the target point a_j^t and determines the applicability of the formulas to all types of criteria. Only formula (10.6) produces normalized values whose compression corresponds to the compression of the benefit criteria in the Max-Min method. For this reason, we do not recommend applying transformation (10.5) in conjunction with the Max normalization method for benefit criteria, and we do not recommend applying transformations (10.6) and (10.7) in conjunction with the Max-Min normalization method for benefit criteria.

Fig. 10.2 Illustration of different variants of target-based normalization methods

Although the target-based normalization formulas are similar to the Max and Max-Min normalization methods, the degree of data compression is different, and these are methods of different classes. In addition, these methods cannot be used in conjunction with methods such as Sum, Vec, dSum, or Z-score. The target normalization methods presented can only be used in conjunction with two of the linear methods, Max and Max-Min. Moreover, for Eqs. (10.5) and (10.6), the range of normalized values is not consistent with the range of normalized values of the profit criteria, and for Eq. (10.7) the scale compression ratio (slope) is significantly different, which can also affect the final rating. It may seem that the problem of target normalization has been solved. However, in the case of multivariate normalization, the need to aggregate the normalized values of various features requires matching the areas of their normalized values. Normalization (including target normalization) of individual features must be performed in such a way as to exclude the priority of the contribution of individual features to the performance indicator of alternatives. We will show that it is possible to perform linear data transformations (stretch-compression and displacement) while maintaining the dispositions of values within one criterion. This is achieved using the IZ transformation (see Chap. 7) combined (if necessary) with data inversion using the ReS-algorithm. Such a transformation is a shape-invariant transformation. For example, one variant of such a transformation for Eq. (10.7) uses the shift and inversion of the normalized values and transforms the data from the interval [r_j^min, 1] to the interval [0, r_j^max]:

v_ij = ReS( 1 - exp( -|a_ij - a_j^t| / (max_i(a_ij, a_j^t) - min_i(a_ij, a_j^t)) ) ),    (10.10)

As a result, the normalized values (graphic form of Fig. 10.2d) are converted to the form shown in Fig. 10.3. A set of different linear transformations for target normalization (Eqs. 10.6 and 10.8) based on the IZ and ReS transformations, with illustrations, is presented in Fig. 10.4. In Fig. 10.4a–d, y stands for:

y_ij = 1 - r_ij = 1 - exp( -|a_ij - a_j^t| / (a_j^max - a_j^min) ).    (10.11)

Next, the IZ transformation of the form:


Fig. 10.3 Illustration of ReS transformation by Eq. (10.10) for target normalization (Eq. 10.8)

u_ij = (y_ij - y_j^min) / (y_j^max - y_j^min) · (Z - I) + I,  ∀i = 1, …, m; ∀j = 1, …, n,    (10.12)

for different intervals [I, Z] specified by the expert. IZ transformation allows you to purposefully change both the range of the domain of normalized values and its position on the interval [0, 1], while preserving the dispositions of values (preserving the shape). Another group of transformations in Fig. 10.4e–h is represented by a simple shift of the normalized value range in the interval [0, 1]. In some cases, this is quite enough to eliminate the priority of the contribution of individual features to the performance indicator of alternatives. Thus, the methods of target normalization presented in the literature are limited and, for multivariate methods, additionally require agreement with the normalization of other features.
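A sketch of the two building blocks used above, assuming the ReS inversion has the range-preserving form r → r_min + r_max − r (the exact ReS-algorithm is detailed in Chap. 5 and [10]); function names are illustrative:

```python
def iz_transform(y, I, Z):
    """IZ transformation, Eq. (10.12): map values onto [I, Z], preserving shape."""
    ymin, ymax = min(y), max(y)
    return [(v - ymin) / (ymax - ymin) * (Z - I) + I for v in y]

def res_invert(r):
    """ReS inversion (assumed range-preserving form): r -> r_min + r_max - r,
    which reflects the values within their own domain."""
    lo, hi = min(r), max(r)
    return [lo + hi - v for v in r]

y = [0.15, 0.40, 0.90, 0.55]      # made-up normalized values
u = iz_transform(y, 0.2, 0.8)     # domain becomes [0.2, 0.8], shape preserved
v = res_invert(u)                 # inverted values occupy the same domain
```

Because both operations keep the normalized values of every feature in an agreed subdomain of [0, 1], no single feature acquires priority during aggregation.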

10.3 Generalization of Normalization Methods of Target Criteria for Linear Case

The author proposes a generalization of the formulas for the normalization of t-criteria for the linear case [9]. The generalization is achieved by using the difference between the target and the current value inside the modulus, while maintaining the stretch-compression coefficients k_j and the displacement a_j* (for normalization methods with displacement):


Fig. 10.4 Illustration of IZ transformation for target normalization (Eq. 10.6)—(a)–(d), and bias for target normalization (Eq. 10.9)—(e)–(h)

r_ij = -|a_j^t - a_ij| / k_j + (a_j^t - a_j*) / k_j,    (10.13)

where k_j and a_j* are defined, as in Eq. (10.4), for each normalization method as follows:


Max:      a_j* = 0,              k_j = a_j^max                           (10.14)
Sum:      a_j* = 0,              k_j = Σ_{i=1..m} a_ij                   (10.15)
Vec:      a_j* = 0,              k_j = ( Σ_{i=1..m} a_ij² )^0.5          (10.16)
Max-Min:  a_j* = a_j^min,        k_j = a_j^max - a_j^min                 (10.17)
dSum:     a_j* = a_j^max - k_j,  k_j = Σ_{i=1..m} (a_j^max - a_ij)       (10.18)
Z-score:  a_j* = a_j^mean,       k_j = std_i(a_ij)                       (10.19)

Indeed, if a_ij < a_j^t, then the linear normalization formula is used:

r_ij = (a_ij - a_j*) / k_j.    (10.20)

If a_ij ≥ a_j^t, then the formula is:

r_ij = (2·a_j^t - a_ij - a_j*) / k_j.    (10.21)

The advantage of formula (10.13) [or its expanded forms (10.20) and (10.21)] over target-based normalization by Eqs. (10.5)–(10.7) is that the compression and displacement ratios are consistent with the linear normalization methods for benefit and cost attributes, in accordance with the normalization parameters (Eqs. 10.14–10.19). Figure 10.5 gives a graphical illustration of the normalization of target criteria for six different variants of linear normalization in the goal maximization case. Initial data: a = (448, 478, 564, 580, 610, 615, 620, 667) correspond to the third attribute in Figs. 10.1 and 10.2; the target nominal score is 615. The prefix "t-" (meaning "target") is added to the abbreviation of the underlying normalization method, for example, t-Max, t-Vec, etc.

Fig. 10.5 An illustration of the agreement of the generalized method of normalization of t-criteria with the methods of normalization Max, Sum, Vec, dSum, Max-Min, Z-score (maximization)

The disadvantage of formula (10.13), when used together with the corresponding linear normalization method, is that the range of its values is smaller and the largest value is shifted toward zero. It is necessary to equalize the upper values when maximizing, or the lower values when minimizing; this is important because the contribution of the t-criteria to the integral indicator should not be lower than that of the other criteria. In some cases, formula (10.13) yields negative values. The result is determined by the position of the target value a_j^t in the interval [a_j^min, a_j^max]. The problem can be fixed with an ordinary data shift: displacement of the normalized values within the area [0, 1] is performed by a parallel shift of the values along the ordinate axis. An illustration of the shift is given in Fig. 10.5 for all normalization methods with respect to the base dashed line. The displacement is determined by the value:

Δr_j = r_j^max - r_j^t.    (10.22)

Then formula (10.13) is transformed to the form:

r̃_ij = r_ij + r_j^max - r_j^t,    (10.23)

which should be used as the computational formula in programming, i.e. three operations are performed sequentially:

(1) for a_j*, k_j, a_j^t, a_j^max, calculate r_j^t and r_j^max by Eq. (10.20):

r_j^t = (a_j^t - a_j*) / k_j,    (10.24)

r_j^max = (a_j^max - a_j*) / k_j,    (10.25)

(2) calculate r_ij by Eq. (10.13):

r_ij = -|a_j^t - a_ij| / k_j + (a_j^t - a_j*) / k_j,    (10.26)

(3) calculate the new value r̃_ij by Eq. (10.23):

r̃_ij = r_ij + r_j^max - r_j^t.    (10.27)

Fig. 10.6 Inversion of t-criteria (minimization)

Parallel transfer obviously preserves the range (swing) of the normalized values, which is important for the final result when aggregating the normalized values. For the case of minimizing the performance indicator of alternatives, the target nominal normalization is obtained by inverting the values using the ReS-algorithm in accordance with Eq. (5.12). Figure 10.6 shows a graphical illustration of the normalization of target nominal criteria for the case of minimizing the performance indicator of alternatives.
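The three-step procedure (10.24)–(10.27), together with the method parameters (10.14)–(10.19), can be sketched as follows (function names are illustrative):

```python
import statistics as st

def lin_params(a, method):
    """Shift a_j* and scale k_j for the linear methods, Eqs. (10.14)-(10.19)."""
    amin, amax = min(a), max(a)
    if method == "Max":     return 0.0, amax
    if method == "Sum":     return 0.0, sum(a)
    if method == "Vec":     return 0.0, sum(v * v for v in a) ** 0.5
    if method == "Max-Min": return amin, amax - amin
    if method == "dSum":
        k = sum(amax - v for v in a)
        return amax - k, k
    if method == "Z-score": return st.fmean(a), st.pstdev(a)
    raise ValueError(method)

def t_normalize(a, t, method):
    """Generalized t-criteria normalization: steps (10.24)-(10.27)."""
    a_star, k = lin_params(a, method)
    r_t = (t - a_star) / k                                  # Eq. (10.24)
    r_max = (max(a) - a_star) / k                           # Eq. (10.25)
    r = [-abs(t - v) / k + (t - a_star) / k for v in a]     # Eq. (10.26)
    return [v + r_max - r_t for v in r]                     # shift, Eq. (10.27)

a = [448, 478, 564, 580, 610, 615, 620, 667]
r = t_normalize(a, 615, "Sum")   # t-Sum: the target alternative scores highest
```

After the shift, the largest normalized value of the t-criterion equals the largest value produced by the corresponding benefit normalization, so the t-criterion contributes on equal terms during aggregation.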

10.4 Comparative Normalization of Target Criteria Using Linear Methods

Figure 10.7 gives a graphical illustration of the relative position of the domains of normalized values for the target nominal normalization of the third attribute for the various variants of linear normalization defined by formulas (10.5)–(10.7) and for the proposed generalization of target nominal normalization in accordance with formulas (10.24)–(10.27) presented above. Initial data: a = (267, 164, 220, 48, 210, 78, 215) correspond to the third attribute and to the illustrations in Figs. 10.1–10.3. Normalization is performed for the case of maximizing the performance indicator of alternatives. Normalization by formula (10.5) corresponds to t-Max normalization, but the degree of data compression is somewhat lower. Normalization by formula (10.7) is identical to t-Max-Min normalization. Normalization by formula (10.6) corresponds to t-Max-Min normalization, but the degree of compression is lower. For the Sum, Vec, dSum, and Z-score normalization methods, the target normalization formulas (10.5)–(10.7) cannot be used, since the degree of data compression and the domain shift are very different. This entails the priority of the contribution of individual criteria to the performance indicator of alternatives and a distortion of the ranking. Therefore, when the linear methods Sum, Vec, dSum, or Z-score are used to normalize the profit attributes, the generalization proposed by the author in the form of formulas (10.24)–(10.27) is an adequate method for normalizing target nominal criteria.

Fig. 10.7 Relative position of the domain of normalized values of the target nominal criterion (the third attribute in Fig. 10.1) with target normalization (Eqs. 10.5–10.7) and for various variants of linear and target nominal normalization

10.5 Normalization of Target Criteria: Non-linear Methods—Concept of Harrington's Desirability Function

In this section, the concept of desirability functions (DFs) defined by Harrington [10, 11] is used to normalize target nominal criteria. The desirability function converts various scales of quality indicators to the set [0, 1]. Its peculiarity is that the expert (experimenter) sets the specifications for such a function in accordance with individual preferences regarding the objects of study. The requirement of consistent normalization for the multivariate case assumes that the same normalization desirability function will be used for the LTB, STB, and NTB purposes. Harrington introduced two types of desirability functions that convert Quality Measures (QMs) to (0, 1): one focuses on maximizing QMs (one-sided specification), while the other reflects the target value problem (two-sided specification). The desirability scale has the following standard gradations: "very good" (1.00–0.80), "good" (0.80–0.63), "satisfactory" (0.63–0.37), "bad" (0.37–0.20), "very bad" (0.20–0.00). The class boundaries correspond to the inflection points of the DF, which characterize the dynamics of the increments. The desirability curve exhibits useful properties such as continuity, monotonicity, and smoothness. The desirability function converts the quantitative value of a particular indicator into an assessment of the desirability (preference) of the object.

10.5.1 One-Sided DF for LTB and STB Criteria

The one-sided DF of the jth feature for the case of maximizing QMs uses a special form of the Gompertz curve [12]:

    uij = exp(−exp(−rij)),  rij = b0 + b1·aij,   (10.28)

where the kurtosis of the function is determined by the solution {b0, b1} of the system of two linear equations

    b0 + b1·aij = −ln(−ln uij),   (10.29)

which requires establishing an (expert) correspondence between two values of the jth attribute aj(1) < aj(2) and their ratings uj(1) < uj(2) on the desirability scale:

    M1(aj(1), uj(1)),  M2(aj(2), uj(2)).   (10.30)

These two values do not necessarily correspond to the values of the jth feature of the alternatives considered in the problem. The choice of a linear transformation in Eq. (10.28) is due to the possibility of interpreting b0 and b1 as the displacement and the compression-stretching of the scale. The explicit calculation formulas for the parameters of the desirability function are:

    b1 = [ln(−ln uj(1)) − ln(−ln uj(2))] / (aj(2) − aj(1)),   (10.31)

    b0 = −b1·aj(2) − ln(−ln uj(2)).   (10.32)

The one-sided DF of the jth feature aimed at minimizing QMs (STB) is easily obtained using the ReS-algorithm of inversion (Eq. 10.28). The procedure has two steps. At the first step, two values of the jth attribute aj(1) < aj(2) are matched (expertly) with ratings on the desirability scale 1 > vj(1) > vj(2) > 0, just as is done for maximization:

    M1(aj(1), vj(1)),  M2(aj(2), vj(2)).   (10.33)

Since STB is associated with cost criteria, the higher value of the indicator aj(2) corresponds to a lower value on the desirability scale. Given the symmetry of the desirability function about the u = 0.5 axis, it is easy to invert these values on the desirability scale and obtain the following pair of points:

    N1(aj(1), 1 − vj(1)),  N2(aj(2), 1 − vj(2)).   (10.34)

The two points N1 and N2 defined by coordinates (10.34) correspond to a monotonically increasing function and determine the LTB desirability function uij (Eq. 10.28)—more precisely, its parameters b0 and b1. At the second step, the values of uij are inverted using the ReS-algorithm:

    vij = −uij + 1.   (10.35)

Example 1: a3 = (448, 478, 564, 580, 610, 615, 620, 667), LTB: M1(480, 0.2), M2(650, 0.9),

Fig. 10.8 One-sided DF for LTB and STB case: Gompertz-curve

STB: M1(500, 0.9), M2(600, 0.15). LTB analog for M1 and M2: N1(500, 0.1), N2(600, 0.85). The one-sided DF for the LTB and STB cases using the Gompertz curve is shown in Fig. 10.8.
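The two-point fitting of the one-sided DF (Eqs. 10.28–10.35) can be sketched in a few lines of Python; the function names here are illustrative, not from the book.

```python
import math

def harrington_one_sided(p1, p2):
    """Fit u(a) = exp(-exp(-(b0 + b1*a))) through two expert points
    (a1, u1) and (a2, u2) using Eqs. (10.31)-(10.32)."""
    (a1, u1), (a2, u2) = p1, p2
    b1 = (math.log(-math.log(u1)) - math.log(-math.log(u2))) / (a2 - a1)  # Eq. (10.31)
    b0 = -b1 * a2 - math.log(-math.log(u2))                               # Eq. (10.32)
    return lambda a: math.exp(-math.exp(-(b0 + b1 * a)))

# Example 1 (LTB): M1(480, 0.2), M2(650, 0.9)
u_ltb = harrington_one_sided((480, 0.2), (650, 0.9))

# STB: M1(500, 0.9), M2(600, 0.15); two steps per the text:
# invert the ratings (Eq. 10.34), fit the LTB curve, then ReS-invert (Eq. 10.35)
u_aux = harrington_one_sided((500, 1 - 0.9), (600, 1 - 0.15))
v_stb = lambda a: 1 - u_aux(a)
```

By construction the fitted curve passes exactly through both expert points, e.g. u_ltb(480) = 0.2 and v_stb(500) = 0.9.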

10.5.2 Two-Sided DF for the NTB Criteria

The two-sided DF for the jth feature meets the goal NTB, which requires two specification limits ajU (upper) and ajL (lower) for the QMs:

    uij = exp(−|rij|^n),   (10.36)

    rij = (2·aij − (ajU + ajL)) / (ajU − ajL) = (aij − aijinv) / (ajU − ajL) ∈ [−1, 1].   (10.37)

The target value ajT of the feature is set. Formula (10.37) assumes that three specifications are given: ajT, as well as ajL and ajU, the lower and upper levels of acceptable attribute values. The calculation of ajU and ajL, based on their symmetry with respect to ajT, is performed according to the following scheme:

    ajT = (ajU + ajL) / 2,   (10.38)

    Δj = max(ajT − ajmin; ajmax − ajT),   (10.39)

    ajL = ajT − Δj,   (10.40)

    ajU = ajT + Δj.   (10.41)

The parameter n > 0 is chosen so that the resulting kurtosis of the function adequately meets the expert’s preferences. To do this, it is required to establish an (expert) correspondence between one value of the jth attribute and its rating on the desirability scale, i.e., to set the point M1(aj(1), uj(1)). This value does not necessarily correspond to the values of the jth feature of the alternatives considered in the problem. The calculation formula for finding n is:

    n = ln(−ln uj(1)) / ln |rj(1)|.   (10.42)

For the symmetrical case, ajmax and ajmin are taken as ajU and ajL, respectively, so that:

    rij = (2·aij − (ajmax + ajmin)) / (ajmax − ajmin) = (aij − aijinv) / (ajmax − ajmin) ∈ [−1, 1].   (10.43)

Then

    aijinv = −aij + ajmax + ajmin = ReS(aij)   (10.44)

represents the inverse of aij, which is determined by the ReS-algorithm. An unexpected interpretation of the argument rij in formula (10.43) is the following. According to Chap. 5, the inverse value aijinv is centrally symmetric to the value aij with respect to the value (ajmax + ajmin)/2. Therefore, rij represents the ratio of the lengths of two centrally symmetric segments. For the above example, for j = 3, a = (448, 478, 564, 580, 610, 615, 620, 667), the argument r43 is shown graphically in Fig. 10.9 as the ratio of the length of the blue segment to the red one.

Example 2: a3 = (448, 478, 564, 580, 610, 615, 620, 667), NTB: a3T = 600, M1(560, 0.75) ⇒ (Eqs. 10.39–10.42) a3L = 448, a3U = 752, n = 0.933. Two-sided desirability functions for the NTB maximization case are shown in Fig. 10.10.


Fig. 10.9 Graphical interpretation of the argument r in the formula (10.36). Two-sided DF for NTB case

Fig. 10.10 Two-sided DF for the NTB maximization case
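Example 2 can be reproduced with a short sketch of Eqs. (10.36)–(10.42); the helper name is illustrative.

```python
import math

def two_sided_df(column, a_T, m1):
    """Two-sided Harrington DF (Eqs. 10.36-10.42): symmetric limits around
    the target a_T (Eqs. 10.38-10.41), kurtosis n from one expert point m1."""
    delta = max(a_T - min(column), max(column) - a_T)       # Eq. (10.39)
    a_L, a_U = a_T - delta, a_T + delta                     # Eqs. (10.40)-(10.41)
    r = lambda a: (2 * a - (a_U + a_L)) / (a_U - a_L)       # Eq. (10.37)
    a1, u1 = m1
    n = math.log(-math.log(u1)) / math.log(abs(r(a1)))      # Eq. (10.42)
    u = lambda a: math.exp(-abs(r(a)) ** n)                 # Eq. (10.36)
    return u, a_L, a_U, n

# Example 2: target a3T = 600, expert point M1(560, 0.75)
a3 = (448, 478, 564, 580, 610, 615, 620, 667)
u, a_L, a_U, n = two_sided_df(a3, 600, (560, 0.75))
```

For these inputs the sketch yields a3L = 448, a3U = 752, and n ≈ 0.933, matching Example 2; u reaches its maximum of 1 exactly at the target.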

How to get two-sided DFs for the minimization case? The answer is simple—inversion:

    uij = 1 − exp(−|rij|^n).   (10.45)

10.5.3 Consistent DF-Normalization for LTB, STB, and NTB Criteria

Consistent DF-normalization of the decision matrix for the case of LTB, STB, and NTB criteria involves the application of formulas (10.28) and (10.36) and the identification of their parameters. Let us normalize using an example.

Example 3: Let the following decision matrix D0 [8×5] be given (Table 10.1). Each alternative is defined by a set of five attributes in the context of the selected criteria. The first and fourth features are benefit attributes; the second and fifth are cost attributes. For the third feature, the target is the nominal value a3T = 600. Set two points M1(aj(1), uj(1)), M2(aj(2), uj(2)) for Eq. (10.28), or one point M1(aj(1), uj(1)) for Eq. (10.36), establishing a correspondence between the natural scale and the desirability scale for each criterion:

    a(1) = [4300 73 560 135 1300],  u(1) = [0.2 0.9 0.75 0.1 0.85],
    a(2) = [5500 80 0 175 2300],    u(2) = [0.9 0.15 0 0.81 0.1].

Set the target value for the NTB criterion C3:

    T = [0 0 600 0 0].

Consistent DF-normalization of the decision matrix using formulas (10.28) and (10.36) for the case of LTB, STB, and NTB criteria is shown in Fig. 10.11.

Table 10.1 The decision matrix D0 and LTB, STB, and NTB criteria (+1: benefit, –1: cost, 0: target)

Alternatives   C1 (+1)   C2 (–1)   C3 (0)   C4 (+1)   C5 (–1)
A1             6500      85        667      140       1750
A2             5800      83        564      145       2680
A3             4500      71        478      150       1056
A4             5600      76        620      135       1230
A5             4200      74        448      160       1480
A6             5900      80        610      163       1650
A7             4500      71        478      150       1056
A8             6000      81        580      178       2065
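Under the specification points listed above (as reconstructed from the text), the consistent DF-normalization of Table 10.1 can be sketched as follows; the helper names are illustrative, and the STB columns are handled by the two-step inversion of Sect. 10.5.1.

```python
import math

def fit_ltb(p1, p2):
    """One-sided Gompertz DF through two expert points (Eqs. 10.28-10.32)."""
    (a1, u1), (a2, u2) = p1, p2
    b1 = (math.log(-math.log(u1)) - math.log(-math.log(u2))) / (a2 - a1)
    b0 = -b1 * a2 - math.log(-math.log(u2))
    return lambda a: math.exp(-math.exp(-(b0 + b1 * a)))

def fit_stb(p1, p2):
    """STB: invert the ratings (Eq. 10.34), fit LTB, ReS-invert (Eq. 10.35)."""
    (a1, v1), (a2, v2) = p1, p2
    u = fit_ltb((a1, 1 - v1), (a2, 1 - v2))
    return lambda a: 1 - u(a)

def fit_ntb(column, a_T, m1):
    """Two-sided DF with symmetric limits around the target (Eqs. 10.36-10.42)."""
    delta = max(a_T - min(column), max(column) - a_T)
    a_L, a_U = a_T - delta, a_T + delta
    r = lambda a: (2 * a - (a_U + a_L)) / (a_U - a_L)
    n = math.log(-math.log(m1[1])) / math.log(abs(r(m1[0])))
    return lambda a: math.exp(-abs(r(a)) ** n)

D0 = {  # Table 10.1, columns by criterion
    "C1": [6500, 5800, 4500, 5600, 4200, 5900, 4500, 6000],
    "C2": [85, 83, 71, 76, 74, 80, 71, 81],
    "C3": [667, 564, 478, 620, 448, 610, 478, 580],
    "C4": [140, 145, 150, 135, 160, 163, 150, 178],
    "C5": [1750, 2680, 1056, 1230, 1480, 1650, 1056, 2065],
}
df = {
    "C1": fit_ltb((4300, 0.2), (5500, 0.9)),
    "C2": fit_stb((73, 0.9), (80, 0.15)),
    "C3": fit_ntb(D0["C3"], 600, (560, 0.75)),
    "C4": fit_ltb((135, 0.1), (175, 0.81)),
    "C5": fit_stb((1300, 0.85), (2300, 0.1)),
}
R = {c: [df[c](a) for a in col] for c, col in D0.items()}
```

All normalized values fall strictly inside (0, 1), and the NTB column is maximal for the alternative whose C3 value is closest to the target 600.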


Fig. 10.11 Consistent DF-normalization for the maximization case

Fig. 10.12 Consistent Max-Min normalization for the maximization case

Fig. 10.13 The position of domains when using DF-normalization and Max-Min normalization

For comparison, Figs. 10.12 and 10.13 show the consistent normalization of the same decision matrix using Max-Min and tMax-Min normalization according to Eqs. (10.24)–(10.27) for the case of LTB, STB, and NTB criteria.


Table 10.2 Result of ranking of the alternatives, SAW method

                    A1      A2      A3      A4      A5      A6      A7      A8
Max-Min
  Rank               8       6       4       3       7       5       1       2
  Score Qi (SAW)   3.356   3.336   3.053   2.917   2.680   2.412   2.38    1.907
DF-normalization
  Rank               6       4       3       8       7       5       1       2
  Score Qi (SAW)   3.255   3.163   3.066   2.949   2.932   2.574   2.258   2.141

Comparison of the position of the domains under DF-normalization and Max-Min normalization is shown in Fig. 10.13. Since strictly monotonic functions are used for normalization, the ordering of the normalized values does not change. Although visually the two normalization methods correspond to each other, the ranking of the alternatives by the SAW method (with equal weights) differs between the normalization methods (Table 10.2). Comparing the performance indicators Qi of the alternatives, we can conclude that this example is highly sensitive to the choice of the normalization method, since the performance indicators of the alternatives differ by less than 5%. This indicates the need to analyze the reasons for changes in the ranking when choosing a data normalization method.

10.5.4 The Desirability Function: Power Form

It is possible to use several different variants of the desirability function. A variant of the desirability function for the goal “nominal the best,” defined by specifying two specification limits, lower (L) and upper (U), and the target (T), has the form [13, 14]:

    rij = d(aij) =
        0,                               if aij ≤ ajL;
        ((aij − ajL)/(ajT − ajL))^s,     if ajL < aij ≤ ajT;
        ((aij − ajU)/(ajT − ajU))^t,     if ajT < aij ≤ ajU;   (10.46)
        0,                               if aij > ajU.

The target value ajT of the feature is set. Formula (10.46) involves setting three specifications: ajT, and ajL and ajU, the lower and upper levels of acceptable attribute values.


Fig. 10.14 Illustration of the desirability function (b, c, d, magenta) for various choices of specification limits L, U with respect to the largest and smallest feature values

The calculation of ajU and ajL based on their symmetry with respect to ajT is performed according to the scheme of Eqs. (10.38)–(10.41). The weights s and t determine how strongly the goal is desired. For s = t = 1, the desirability function increases linearly toward T (the goal); for s < 1, t < 1 the function is convex, and for s > 1, t > 1 it is concave. In the particular linear case (s = t = 1) with ajL = ajmin and ajU = ajmax, formula (10.46) coincides with formula (10.7). As in the case of Harrington’s DFs, to determine the exponents s and t it is necessary to set two points of correspondence between the desirability scale (no longer Harrington’s desirability scale) and the natural scale. The weights s and t provide more flexibility in assigning individual desirability within the range of interest (Fig. 10.14a).

The pair of values {min a, max a} defines the limit levels of the attribute available in the task of selecting alternatives. The pair {L, U} with L > min a and U < max a determines the levels of acceptable attribute values (Fig. 10.14b). The value U > max a corresponds to the ideal value of the attribute, and L < min a to the anti-ideal (Fig. 10.14c). A mixed choice of specification limits for the desirability function is shown in Fig. 10.14d.
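The piecewise power form (Eq. 10.46) is short enough to state directly; the limits used below are taken from Example 2, and the function name is illustrative.

```python
def d_power(a, a_L, a_T, a_U, s=1.0, t=1.0):
    """Power-form desirability (Eq. 10.46): rises from a_L to the target a_T
    with exponent s, falls from a_T to a_U with exponent t, zero outside."""
    if a <= a_L or a > a_U:
        return 0.0
    if a <= a_T:
        return ((a - a_L) / (a_T - a_L)) ** s
    # for a_T < a <= a_U both differences are negative, so the ratio is positive
    return ((a - a_U) / (a_T - a_U)) ** t

# Limits as in Example 2: a_L = 448, a_T = 600, a_U = 752
```

With s = t = 1 the curve is the symmetric triangular profile of Eq. (10.7); choosing s = 2, for example, makes the left branch convex, halving the desirability of midpoints.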


Consistent normalization of the decision matrix for the case of using a DF-power function in the normalization of NTB criteria involves the use of Max-Min normalization for LTB, STB criteria. This is due to both the same range of normalized values and approximately the same disposition of values during Max-Min normalization and normalization using the DF-power function.

10.5.5 The Desirability Function: Gaussian Form

The choice of two-sided boundaries can be avoided by using the arithmetic mean and standard deviation of the attribute values over the set of alternatives, which is appropriate for a normal or symmetrical distribution. In the case of a symmetrical distribution of the attributes with respect to the target, a variant of target normalization in the form of a Gaussian function is possible:

    rij = exp(−(aij − ajT)² / (2σj²)).   (10.47)

With a normal or symmetrical distribution, there is the following relationship between the standard deviation and the number of observations:

1. the interval (x̄ ± 1σ) contains 68.3% of the total number of observations,
2. the interval (x̄ ± 2σ) contains 95.4% of the total number of observations,
3. the interval (x̄ ± 3σ) contains 99.7% of the total number of observations.

In practice, deviations greater than 3σ are very rare, so such deviations are usually considered the maximum possible (the three-sigma rule), provided that the value is true and not obtained from processing samples. The standard deviation, like the mean deviation, shows the absolute deviation of the measured values from the arithmetic mean, that is, how far, on average, specific values of the descriptor deviate from the mean observation.

The target value ajT of the feature is the initial data. Formula (10.47) involves setting the parameter σj, which is an analog of the standard deviation. Since the true value of this parameter for the analyzed trait is unknown, the decision maker expertly determines the compliance (desirability) scale by setting one correspondence point between the desirability scale and the natural scale (no longer Harrington’s desirability scale). Thus, σj is determined by solving the equation for a given pair (aj(1), rj(1)):


Fig. 10.15 The target normalization in the form of a Gaussian function

    rj(1) = exp(−(aj(1) − ajT)² / (2σj²)).   (10.48)

An example of using the Gaussian function can be found in study [15]. Various options for target normalization in the form of a Gaussian function are shown in Fig. 10.15, where the second scale (r) is plotted along the abscissa in units of the standard deviation of the Z-score. Consistent normalization of the decision matrix when the Gaussian function is used to normalize NTB criteria is possible by applying linear normalization procedures to the LTB and STB criteria and then using the IZ transformation, which transforms the normalized values to the same range chosen by the decision maker while maintaining the disposition.
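Solving Eq. (10.48) for σj gives σj = |aj(1) − ajT| / sqrt(−2 ln rj(1)), which can be sketched as follows (the function name is illustrative):

```python
import math

def gaussian_target(a_T, point):
    """Gaussian target normalization (Eq. 10.47); sigma_j is identified from
    a single expert point (a1, r1) by solving Eq. (10.48) for sigma_j."""
    a1, r1 = point
    sigma = abs(a1 - a_T) / math.sqrt(-2.0 * math.log(r1))
    return lambda a: math.exp(-((a - a_T) ** 2) / (2 * sigma ** 2))

# Target 600 with the expert point M1(560, 0.75), as in Example 2
r = gaussian_target(600, (560, 0.75))
```

The resulting curve is symmetric about the target, so r(560) = r(640) = 0.75 and r(600) = 1.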

10.6 Conclusions

In the problems of multi-criteria decision-making, the joint coordinated transformation of values for three types of criteria—costs, benefits, and nominal values—is relevant. The nominal value of an attribute is an intermediate value between the highest and the lowest that represents the optimum characteristic or performance, or may be specified by the customer. Existing methods for transforming attributes of the nominal type based on the target normalization approach produce areas of normalized values that can differ significantly from the areas of values of the profit and cost criteria. A number of existing target normalization formulas implement transformations similar to the Max and Max-Min normalization methods used for benefit and cost criteria. Despite this, the degree of data compression for NTB criteria differs from the data compression for LTB and STB criteria. Therefore, the existing target normalization methods are limited: they cannot, for example, be used in conjunction with methods such as Sum, Vec, dSum, or Z-score. In Sect. 10.3, the author proposes a generalization of normalization methods for target nominal criteria that provides consistency with the main linear methods for normalizing benefit and cost attributes. The author finds it very relevant to study the harmonization of normalized scales in multivariate normalization. The key issue is to determine the balance between the compression of data from different measurements and the bias of the normalized values.

References

1. Mukhametzyanov, I. Z. (2020). ReS-algorithm for converting normalized values of cost criteria into benefit criteria in MCDM tasks. International Journal of Information Technology and Decision Making, 19(5), 1389–1423. https://doi.org/10.1142/S0219622020500327
2. Mukhametzyanov, I. Z. (2023). Elimination of the domain’s displacement of the normalized values in MCDM tasks: The IZ-method. International Journal of Information Technology and Decision Making. https://doi.org/10.1142/S0219622023500037
3. Mukhametzyanov, I. Z. (2023). On the conformity of scales of multidimensional normalization: An application for the problems of decision making. Decision Making: Applications in Management and Engineering, 6(1), 399–341. https://doi.org/10.31181/dmame05012023i
4. Zhou, P., Ang, B. W., & Poh, K. L. (2006). Comparing aggregating methods for constructing the composite environmental index: An objective measure. Ecological Economics, 59(3), 305–311.
5. Wu, H.-H. (2020). A comparative study of using grey relational analysis in multiple attribute decision making problems. Quality Engineering, 15, 209–217. https://doi.org/10.1081/QEN-120015853
6. Jahan, A., & Edwards, K. L. (2015). A state-of-the-art survey on the influence of normalization techniques in ranking: Improving the materials selection process in engineering design. Materials & Design, 65, 335–342. https://doi.org/10.1016/j.matdes.2014.09.022
7. Jahan, A., Bahraminasab, M., & Edwards, K. L. (2012). A target-based normalization technique for materials selection. Materials & Design, 35, 647–654. https://doi.org/10.1016/j.matdes.2011.09.005
8. Jahan, A., Mustapha, F., Ismail, M. Y., Sapuan, S. M., & Bahraminasab, M. A. (2011). A comprehensive VIKOR method for material selection. Materials & Design, 32(3), 1215–1221. https://doi.org/10.1016/j.matdes.2010.10.015
9. Mukhametzyanov, I. Z. (2023). Normalization of target-nominal criteria for multi-criteria decision-making problems (Lecture Notes in Electrical Engineering: Select Proceedings of Computational Intelligence for Engineering and Management Applications). Springer. https://doi.org/10.1007/978-981-19-8493-8_67


10. Harrington, J. (1965). The desirability function. Industrial Quality Control, 21(10), 494–498.
11. Bikbulatov, E. S., & Stepanova, I. E. (2011). Harrington’s desirability function for natural water quality assessment. Russian Journal of General Chemistry, 81, 2694–2704.
12. Trautmann, H., & Weihs, C. (2006). On the distribution of the desirability index using Harrington’s desirability function. Metrika, 63, 207–213. https://doi.org/10.1007/s00184-005-0012-0
13. Aksezer, C. S. (2008). On the sensitivity of desirability functions for multiresponse optimization. Journal of Industrial and Management Optimization, 4(4), 685–696. https://doi.org/10.3934/jimo.2008.4.685
14. Marinković, V. (2021). Some applications of a novel desirability function in simultaneous optimization of multiple responses. FME Transactions, 49, 534–548. https://doi.org/10.5937/fme2103534M
15. Shih, H.-S., Shyur, H.-J., & Lee, E. S. (2007). An extension of TOPSIS for group decision making. Mathematical and Computer Modelling, 45(7–8), 801–813. https://doi.org/10.1016/j.mcm.2006.03.023

Chapter 11

Comparative Results of Ranking of Alternatives Using Different Normalization Methods: Computational Experiment

Abstract This chapter presents a comparative analysis of the ranking of alternatives when applying various normalization methods, based on a numerical experiment. Calculations and analysis were performed for two problems of multi-criteria choice. The first problem has a weak sensitivity to the normalization method, and the second one a strong sensitivity. Both problems (decision matrices) are described in Chap. 6. In total, 238 different rank models are built, combining 13 aggregation methods and 21 different normalization methods, all other things being equal. To compare the results, the ranking was also performed within seven outranking models that do not use data normalization. The ranking results for the 238 different models are aggregated under the Borda voting concept, in which the different models are defined as “electors.” The use of a large number of models, i.e., a computational experiment, makes it possible to establish the sensitivity of the multi-criteria choice problem to the decision-matrix normalization procedure.

Keywords MCDM rank model · Multivariate normalization · Sensitivity analysis to normalization · Distinguishability of ratings

11.1 Methodology of Computational Experiment

The methodology for constructing a rank model, described in detail in Chap. 2, is used. The MCDM rank model determines for each alternative Ai the value of Qi, an indicator of efficiency, on the basis of which the ranking of alternatives and the subsequent decision-making are carried out:

    Q = f(A, C, DM, ‘w’, ‘norm’, ‘dm’).   (11.1)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_11

The MCDM rank model includes the choice of a set of alternatives (A) and a set of criteria (C); an assessment of the values of the attributes of the alternatives in the context of each criterion, the decision matrix (DM); a method for estimating the weights of the criteria (‘w’); the choice of a normalization method (‘norm’) for the decision matrix; the selection of a metric for calculating distances in the n-dimensional space of criteria

(‘dm’); and the determination of the attribute aggregation method (f) for calculating the performance indicator (Q) of each alternative.

The ranking of alternatives was carried out using 11 different methods: SAW, WPM, WASPAS, MABAC, CODAS, COPRAS, TOPSIS(L1), TOPSIS(L2), GRA, GRAt, VIKOR [1–8]. All algorithms are described in detail in Chap. 2. To compare the results, the ranking was also performed within seven outranking models [9–11] that do not use data normalization: the PROMETHEE-II method with four options for the preference function (V-Shape; Linear; Gauss; and a Linear-Gauss combination) and the ORESTE method with three distance-metric options (L1, L2, L∞).

The MCDM models use 21 different normalization methods in five groups:

1. Max, Sum, Vec, dSum;
2. Max-Min, Z[0,1], mIQR[0,1], mMAD[0,1];
3. IZ(Max,4), IZ(Sum,4), IZ(Vec,4), IZ(dSum,4);
4. MS(Max,4), MS(Sum,4), MS(Vec,4), MS(dSum,4);
5. PwL[0,1], SSp[0,1], Sgm[0,1], Sgm(Z), Sgm(IQR).

The first group contains four linear methods (Max, Sum, Vec, dSum) without displacement, with a range of values (0, 1]. The second group contains four linear methods with a displacement: Max-Min and Z-score, and the mIQR and mMAD analogs, with the range transformed to [0, 1]. In the third and fourth groups, respectively, the IZ and MS transformations of the domains of normalized values are used, which are determined by the normalization methods Max, Sum, Vec, and dSum. The boundaries of the transformation [I, Z] are defined as follows: I = median(rjmin), Z = 1 [12, 13]. The fifth group uses five non-linear normalization methods with a range of [0, 1] that apply the weakening-boosting technique to the normalized values. For all normalization methods, if the aggregation method involves the inversion of cost-criteria attributes into benefit criteria, the ReS-algorithm was used [14]. The criteria weights are the same for all calculation cases.

11.2 Normalization Methods

Tables 11.1, 11.2, 11.3, 11.4 and 11.5 show the normalization formulas (algorithms) used in the computational experiment [12–14].

11.3 A Decision Matrix Generation with High Sensitivity of Rank to the Normalization Methods

Table 11.6 provides the input for a standard decision problem that is used throughout the book as a base example.


Table 11.1 Linear methods of multidimensional normalization with range (0, 1]

Max:
  rij = aij / ajmax
  Comment: Displacement of the lower level of the normalized values of various features relative to each other.

Sum:
  rij = aij / Σi aij
  Comment: Displacement of the lower and upper levels of the normalized values of various features relative to each other.

Vec:
  rij = aij / (Σi aij²)^0.5
  Comment: Displacement of the lower and upper levels of the normalized values of various features relative to each other.

dSum:
  rij = 1 − (ajmax − aij) / Σi (ajmax − aij)
  Comment: Displacement of the lower level of the normalized values of various features relative to each other (less than for the Max method).
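The four linear methods of Table 11.1 can be sketched directly from their formulas; the function names are illustrative.

```python
import math

def norm_max(col):   # rij = aij / aj_max
    m = max(col)
    return [a / m for a in col]

def norm_sum(col):   # rij = aij / sum_i(aij)
    s = sum(col)
    return [a / s for a in col]

def norm_vec(col):   # rij = aij / sqrt(sum_i(aij^2))
    s = math.sqrt(sum(a * a for a in col))
    return [a / s for a in col]

def norm_dsum(col):  # rij = 1 - (aj_max - aij) / sum_i(aj_max - aij)
    m = max(col)
    s = sum(m - a for a in col)
    return [1 - (m - a) / s for a in col]

c1 = [6500, 5800, 4500, 5600, 4200, 5900, 4500, 6000]  # column C1 of D0
```

For Max and dSum the best alternative maps exactly to 1; for Sum the values add to 1, and for Vec the squared values add to 1, which is why the lower and upper levels of different features end up displaced relative to each other.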

Table 11.2 Linear methods of multidimensional normalization with range [0, 1]

Max-Min:
  rij = (aij − ajmin) / (ajmax − ajmin)
  Comment: Range: [0, 1].

Z[0,1] (Z-score):
  rij = (aij − āj) / sj; uij = rij − mini minj (rij); u* = maxi maxj (uij); vij = uij / u*
  Comment: āj = (1/m) Σi aij, sj = ((1/m) Σi (aij − āj)²)^0.5. The normalized Z-values of the various features are transformed into [0, 1]. Similar to MS, given that (Z − I) = 1, 1 − maxi maxj (vij) = 0.

mIQR[0,1] (Interquartile Range):
  rij = (aij − mdj) / IQRj
  Comment: mdj = mediani(aij), kj = IQRj. The transformation is performed in such a way that the median value and the interquartile range IQR of all features are the same.

mMAD[0,1] (Median Absolute Deviation):
  rij = (aij − mdj) / sjmed
  Comment: mdj = mediani(aij), sjmed = ((1/m) Σi (aij − mdj)²)^0.5. The transformation is performed in such a way that the median value and the variances relative to the median of all features are the same.
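A sketch of the first two methods of Table 11.2, assuming the shift into [0, 1] is done by one global linear map over the whole matrix (as the Z[0,1] comment describes); names are illustrative.

```python
def max_min(col):
    """Max-Min (Table 11.2): each feature mapped separately onto [0, 1]."""
    lo, hi = min(col), max(col)
    return [(a - lo) / (hi - lo) for a in col]

def z01(matrix):
    """Z[0,1] (Table 11.2): Z-score each column, then one global linear map
    of the whole matrix into [0, 1]."""
    m = len(matrix)
    def zscore(col):
        mean = sum(col) / m
        sd = (sum((a - mean) ** 2 for a in col) / m) ** 0.5
        return [(a - mean) / sd for a in col]
    u = [zscore(col) for col in zip(*matrix)]      # per-column Z-scores
    gmin = min(x for col in u for x in col)
    u = [[x - gmin for x in col] for col in u]     # shift: global min -> 0
    gmax = max(x for col in u for x in col)
    v = [[x / gmax for x in col] for col in u]     # scale: global max -> 1
    return [list(row) for row in zip(*v)]          # back to row-major
```

Because the final shift and scale are global, the equal column means and equal column variances of the Z-scores are preserved, unlike Max-Min, which rescales each feature independently.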

Decision matrix of dimensions [8×5]: 8 alternatives and 5 criteria. Each alternative is defined by a set of five attributes in the context of the selected criteria. The third and fifth features are cost attributes, which means that when choosing an alternative, smaller feature values are preferred. Let us generate a decision matrix D1 that is sensitive to the choice of the normalization method (with the other parameters of the decision model being the same). The technique for generating such a decision matrix D1 is based on the generation of random values (uniform law): for each attribute, m random values (m alternatives) are generated from the range between the smallest and the largest attribute value. The simple generating algorithm is given below.


Table 11.3 IZ-method of transformation in the domain of normalized values

IZ:
  1) Normalization is performed by one of the Norm() methods (any method, both linear and non-linear).
  2) Determine the boundaries of the interval [I, Z] of normalized values that are consistent for all features. Options:
     (1) I = min(rjmin), Z = max(rjmax)
     (2) I = max(rjmin), Z = max(rjmax)
     (3) I = mean(rjmin), Z = mean(rjmax)
     (4) I = median(rjmin), Z = median(rjmax)
  3) Perform the transformation:
     uij = (rij − rjmin) / (rjmax − rjmin) · (Z − I) + I, ∀i = 1, …, m; ∀j = 1, …, n.   (a)

IZ(Max,4): Eq. (a) with I = median(rjmin), Z = 1. Linear transformation (alignment) of the lower bound of the normalized values for the Max method.
IZ(Sum,4): Eq. (a) with I = median(rjmin), Z = median(rjmax). Linear transformation (alignment) of the lower and upper bounds of the normalized values for the Sum method.
IZ(Vec,4): Eq. (a) with I = median(rjmin), Z = median(rjmax). Linear transformation (alignment) of the lower and upper bounds of the normalized values for the Vec method.
IZ(dSum,4): Eq. (a) with I = median(rjmin), Z = 1. Linear transformation (alignment) of the lower bound of the normalized values for the dSum method.

Table 11.4 MS-method of transformation in the domain of normalized values

MS(Max,4), MS(Sum,4), MS(Vec,4), MS(dSum,4):
  Formula: uij = (aij − āj) / sj; u1ij = uij − mini minj (uij); u* = maxi maxj (u1ij); vij = u1ij · (Z − I) / u*; voutij = vij + 1 − maxi maxj (vij)
  Comment:
  1) The decision matrix is normalized using the Z-score method: uij = Z(aij).
  2) The decision matrix is normalized by one of Max, Sum, Vec, dSum: rij = Norm(aij).
  3) Determine the boundaries of the interval [I, Z] consistent for all features corresponding to one of the Max, Sum, Vec, dSum methods (as in the IZ transformation), for example, type (4): I = median(rjmin), Z = median(rjmax), where rij = Max(aij).
  4) Transform uij by linear transformations into the interval [I, Z] in such a way that the mean values and variances of all features are the same.
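A literal sketch of the Table 11.4 formula chain (the boundary pair [I, Z] is assumed to come from step 3; the function name is illustrative). Because every step after the per-column Z-score is one global affine map, all columns keep equal means and equal variances.

```python
def ms_transform(A, I, Z):
    """MS transformation (Table 11.4): Z-score each column, then one global
    linear map into an interval of width (Z - I) whose maximum is shifted
    to 1, so all columns share the same mean and the same variance."""
    m = len(A)
    def zscore(col):
        mean = sum(col) / m
        sd = (sum((a - mean) ** 2 for a in col) / m) ** 0.5
        return [(a - mean) / sd for a in col]
    u = [zscore(col) for col in zip(*A)]
    gmin = min(x for c in u for x in c)
    u1 = [[x - gmin for x in c] for c in u]            # u1 = u - global min
    ustar = max(x for c in u1 for x in c)
    v = [[x * (Z - I) / ustar for x in c] for c in u1] # scale to width Z - I
    gmax = max(x for c in v for x in c)
    out = [[x + 1 - gmax for x in c] for c in v]       # v_out: global max -> 1
    return [list(row) for row in zip(*out)]
```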


Table 11.5 Non-linear methods of multidimensional normalization with range [0, 1]

PwL[0,1]:
  rij = (aij − ajmin) / (ajmax − ajmin) ∈ [0, 1];
  f(rij) = 0 if rij ≤ pj; (rij − pj) / (qj − pj) if pj < rij ≤ qj; 1 if rij > qj.
  Comment: pj, qj (0 ≤ pj < qj ≤ 1). Range: [0, 1]. The pj value determines the limit of weakening the influence of alternatives with attributes worse than pj. Similarly, qj defines the bound on amplifying the influence of alternatives with attributes better than qj. The “attenuation-amplification” parameters pj, qj are set for each criterion in fractions of unity (or percentages), which is easier to determine intuitively than in natural attribute values.

SSp[0,1]:
  rij = (aij − ajmin) / (ajmax − ajmin);
  f(rij) = 0 if rij ≤ pj; 2·((rij − pj)/(qj − pj))² if pj < rij ≤ (pj + qj)/2; 1 − 2·((qj − rij)/(qj − pj))² if (pj + qj)/2 < rij ≤ qj; 1 if rij > qj.
  Comment: Represents a smooth, spline-based counterpart of the PwL function.

Sgm[0,1]:
  rij = (aij − ajmin) / (ajmax − ajmin); f(rij) = 1 / (1 + e^(−kj·(rij − pj))).
  Comment: kj is the slope factor and pj the point of the symmetry center, f(pj) = 0.5. Represents the smooth S-shaped counterpart of the PwL function. Normalized values gently approach their maximum and minimum values but never reach them.

Sgm(Z):
  zij = (aij − āj) / sj; rij = 1 / (1 + e^(−3·zij)).
  Comment: āj = (1/m) Σi aij, sj = ((1/m) Σi (aij − āj)²)^0.5. In the convex profile region of the S-shaped function the transformation compresses the data toward 1, and in the concave profile region toward zero. Outliers that cannot be excluded from consideration are smoothed out. Normalized values gently approach their maximum and minimum values but never reach them.

Sgm(IQR):
  zij = (aij − mdj) / IQRj; rij = 1 / (1 + e^(−3·zij)).
  Comment: mdj = mediani(aij), kj = IQRj. Similar to the previous transformation but uses robust parameters.
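The weakening-boosting idea of Table 11.5 can be sketched with the piecewise-linear and sigmoid variants; the function names are illustrative.

```python
import math

def pwl(col, p, q):
    """PwL[0,1]: Max-Min normalization followed by the piecewise-linear
    weakening/boosting function with thresholds p < q."""
    lo, hi = min(col), max(col)
    out = []
    for a in col:
        r = (a - lo) / (hi - lo)
        if r <= p:
            out.append(0.0)          # influence weakened to zero below p
        elif r <= q:
            out.append((r - p) / (q - p))
        else:
            out.append(1.0)          # influence boosted to one above q
    return out

def sgm(col, p=0.5, k=10.0):
    """Sgm[0,1]: Max-Min normalization followed by a sigmoid with slope
    factor k and symmetry centre p, so f(p) = 0.5."""
    lo, hi = min(col), max(col)
    return [1 / (1 + math.exp(-k * ((a - lo) / (hi - lo) - p))) for a in col]
```

The sigmoid variant produces the same weakening/boosting effect smoothly: values near the centre are spread out, while values in the tails are compressed toward 0 and 1 without ever reaching them.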


Table 11.6 Decision matrix D0 [8×5] (+: benefit, –: cost)

Alternatives   C1 (+)   C2 (+)   C3 (–)   C4 (+)   C5 (–)
A1             6500     85       667      140      1750
A2             5800     83       564      145      2680
A3             4500     71       478      150      1056
A4             5600     76       620      135      1230
A5             4200     74       448      160      1480
A6             5900     80       610      163      1650
A7             4500     71       478      150      1056
A8             6000     81       580      178      2065

Table 11.7 Decision matrix D1 [8×5] (+: benefit, –: cost)

Alternatives   C1 (+)   C2 (+)   C3 (–)   C4 (+)   C5 (–)
A1             4728.0   81.4     596.0    172.5    1148.3
A2             6081.2   84.5     567.3    157.9    2389.4
A3             5543.9   82.4     567.2    136.1    2217.9
A4             5888.1   71.2     558.9    176.5    1496.6
A5             4552.0   79.7     630.1    173.3    2675.4
A6             4962.8   83.3     533.7    155.6    1550.7
A7             5565.2   72.2     546.4    168.3    1588.6
A8             6257.4   73.1     558.9    145.5    1372.6

Let D0 be the base [m×n] decision matrix. In MATLAB-like notation:

    rng = max(D0) - min(D0);        % [1×n] attribute range vector
    range = repmat(rng, m, 1);      % [m×n] attribute range matrix
    t0 = repmat(min(D0), m, 1);     % [m×n] matrix of the lower bounds of attributes
    t = rand(m, n) .* range;        % [m×n] element-wise product of a random matrix and the range matrix
    D = t0 + t;

The decision matrix D obtained in this way represents the same decision problem as that defined by the matrix D0, but with a different set of alternatives. Next, for each such decision matrix, ranking is performed using the selected aggregation method, with variations of the normalization procedure and with the other parameters of the MCDM model fixed (for example, the criteria weights). The iterative search procedure for D1 ends when, for the selected set of normalization methods, the rank-1 alternatives satisfy the given requirement, for example, they are all different or all the same. Table 11.7 presents the decision matrix D1 (paired with the decision matrix D0), which has a high rank sensitivity for the TOPSIS aggregation method under the Max, Sum, Vec, dSum, and Max-Min normalization methods.

11.4 Graphical Illustration of Normalized Values

A graphical illustration of the domains of normalized values for the decision matrix D0 is shown in Figs. 11.1 and 11.2. For the linear normalization methods Max, Sum, Vec, and dSum there is a significant shift of the domains of normalized values relative to each other, which may lead to a change in the rating of the alternatives due to normalization alone. The IZ transform removes this offset. The MS-transformation maps Z-score values into the interval [I, Z] with boundaries I = median(rjmin) and Z = 1, where the rij are the values produced by the Max, Sum, Vec, and dSum methods, respectively. An important result of this transformation is the equality of the means across all features and the equality of the variances across all features. Transforming the data into the domains defined by the Max, Sum, Vec, and dSum methods establishes relationships between the normalized values similar to those methods, which ensures an appropriate and identical interpretation for all features as a fraction of the whole.
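As a quick numerical illustration of this mutual displacement, the sketch below normalizes one benefit column with the standard Max, Sum, and Vec formulas (the dSum variant is omitted, since its definition is given elsewhere in the book) and prints the resulting value intervals:

```python
import numpy as np

a = np.array([4200., 4500., 5600., 5800., 5900., 6000., 6500., 6000.])

r_max = a / a.max()                   # Max:  r = a / max(a)
r_sum = a / a.sum()                   # Sum:  r = a / sum(a)
r_vec = a / np.sqrt((a ** 2).sum())   # Vec:  r = a / ||a||_2

for name, r in [("Max", r_max), ("Sum", r_sum), ("Vec", r_vec)]:
    print(f"{name}: [{r.min():.3f}, {r.max():.3f}]")
# The three intervals are mutually displaced: this is exactly the offset
# that the IZ transform is designed to remove.
```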

Fig. 11.1 Normalized values of the D0 matrix for the Max, Sum, Vec, and dSum normalization methods, and after applying the IZ and MS transformations

228

11

Comparative Results of Ranking of Alternatives Using. . .

Fig. 11.2 Normalized values of matrix D0 for Max-Min, Z[0,1], mIQR[0,1], mMAD[0,1] normalization methods and non-linear methods PwL[0,1], SSp[0,1], Sgm[0,1], Sgm(Z), Sgm(IQR)

Figure 11.2 illustrates the domains of normalized values for the Max-Min method with range [0, 1], for the Z-score and its analogs mIQR and mMAD transformed to the range [0, 1], and for five non-linear normalization methods with range [0, 1]. Whereas the normalized values Z[0,1] have equal means and equal variances across all features, the normalized values mIQR[0,1] and mMAD[0,1] have equal medians across all features, together with equal interquartile ranges (for mIQR) and equal variances about the median (for mMAD). The non-linear methods use a two-step normalization procedure: the data are first converted to the interval [0, 1] by the Max-Min method, or Z-scores are computed, and the result is then transformed by a non-linear function. The non-linear methods presented here weaken "weak" values and strengthen "strong" values.
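The two-step procedure can be sketched as follows: Max-Min into [0, 1], then a sigmoid centered at 0.5. The steepness parameter 10 and the sample column are illustrative assumptions of this sketch, not the chapter's Sgm[0,1] parameterization:

```python
import numpy as np

def maxmin01(a):
    """Step 1: Max-Min normalization to the interval [0, 1]."""
    a = np.asarray(a, dtype=float)
    return (a - a.min()) / (a.max() - a.min())

def sgm01(a, steepness=10.0):
    """Step 2: sigmoid centered at 0.5 that weakens 'weak' values
    and strengthens 'strong' ones (steepness is an assumed parameter)."""
    x = maxmin01(a)
    return 1.0 / (1.0 + np.exp(-steepness * (x - 0.5)))

a = np.array([140., 145., 150., 135., 160., 163., 150., 178.])
print(sgm01(a).round(3))  # values near the column extremes are pushed toward 0 and 1
```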

11.5 Results of Ranking of Alternatives for Decision Matrix D0

The results of ranking the alternatives for the decision matrix D0 within 231 different ranking models, combining 11 aggregation methods and 21 different normalization methods, all other things being equal, are presented in Tables 11.8, 11.9 and 11.10. To compare the results, the ranking was also performed within seven outranking models that do not use data normalization. The ranking results for all 238 models are aggregated under the Borda voting concept, in which the different models act as "electors." The use of a large number of models, i.e., a computational experiment, makes it possible to establish the sensitivity of the multi-criteria choice problem to the normalization procedure of the decision matrix.
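One cell of this computational experiment can be sketched concretely: a SAW ranking of D0 (values transcribed from Table 11.6) under four normalization methods. The equal weights and the simple cost-to-benefit inversion are assumptions of this sketch, not the chapter's settings:

```python
import numpy as np

# Decision matrix D0 transcribed from Table 11.6 (C3 and C5 are cost criteria).
D0 = np.array([
    [6500, 85, 667, 140, 1750],
    [5800, 83, 564, 145, 2680],
    [4500, 71, 478, 150, 1056],
    [5600, 76, 620, 135, 1230],
    [4200, 74, 448, 160, 1480],
    [5900, 80, 610, 163, 1650],
    [4500, 71, 478, 150, 1056],
    [6000, 81, 580, 178, 2065],
], dtype=float)
cost = np.array([False, False, True, False, True])
w = np.full(5, 0.2)  # equal criteria weights: an assumption of this sketch

def normalize(D, method):
    if method == "Max":    return D / D.max(axis=0)
    if method == "Sum":    return D / D.sum(axis=0)
    if method == "Vec":    return D / np.sqrt((D ** 2).sum(axis=0))
    if method == "MaxMin": return (D - D.min(axis=0)) / (D.max(axis=0) - D.min(axis=0))
    raise ValueError(method)

def saw_ranks(D, method):
    R = normalize(D, method)
    R[:, cost] = R[:, cost].max(axis=0) - R[:, cost]  # one simple cost-to-benefit inversion
    Q = R @ w                                         # SAW performance indicator
    order = np.argsort(-Q)                            # best alternative first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(Q) + 1)
    return ranks

for method in ["Max", "Sum", "Vec", "MaxMin"]:
    print(method, saw_ranks(D0, method))  # rank changes across rows reveal normalization sensitivity
```

Repeating this loop over all aggregation methods and all 21 normalization methods reproduces the structure, though not the exact settings, of the 231-model experiment.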

[Table 11.8 Ranking results (D0). Aggregation methods based on additivity (SAW, WPM, WASPAS, MABAC): ranks of the alternatives A1–A8 under each of the 21 normalization methods (Max, Sum, Vec, dSum, IZ(Max,4), IZ(Sum,4), IZ(Vec,4), IZ(dSum,4), Max-Min, Z[0,1], mIQR[0,1], mMAD[0,1], MS(Max,4), MS(Sum,4), MS(Vec,4), MS(dSum,4), PwL[0,1], SSp[0,1], Sgm[0,1], Sgm(Z), Sgm(IQ)). The rank entries are not recoverable from this extraction.]

[Table 11.9 Ranking results (D0). Aggregation methods based on distances to a critical link (CODAS, COPRAS, TOPSIS L1, TOPSIS L2): ranks of the alternatives A1–A8 under each of the 21 normalization methods. The rank entries are not recoverable from this extraction.]

[Table 11.10 Ranking results (D0). Aggregation methods based on distances to a critical link (GRA, GRAt, VIKOR): ranks of the alternatives A1–A8 under each of the 21 normalization methods. The rank entries are not recoverable from this extraction.]

Table 11.11 Summary results of the ranking of alternatives (D0). Outranking methods: PROMETHEE & ORESTE (number of models assigning each rank)

Rank   A1  A2  A3  A4  A5  A6  A7  A8
I       0   0   0   0   0   0   7   0
II      0   0   0   0   0   0   0   7
III     1   0   0   0   0   6   0   0
IV      4   1   0   0   1   1   0   0
V       1   0   4   0   2   0   0   0
VI      1   2   3   0   1   0   0   0
VII     0   4   0   0   3   0   0   0
VIII    0   0   0   7   0   0   0   0

Table 11.12 Summary results of ranking alternatives (D0) based on 238 MCDM models (number of models assigning each rank)

Rank   A1   A2   A3   A4   A5   A6   A7   A8
I       17    0    2    0    0    0  192   27
II      21    2   28    0    0   16   30  141
III     14   16    4    6    2  147   11   38
IV     120   29   18    0   39   11    5   16
V       28    8   12    9  135   43    0    3
VI      24   35   85   50   17   20    0    7
VII     14  109   44   19   45    1    0    6
VIII     0   39   45  154    0    0    0    0

Fig. 11.3 Histogram of the ranks of alternatives (D0). 238 MCDM models

The results of ranking the alternatives in the different models for the D0 problem show a high degree of consistency. A high degree of consistency is also observed for the outranking methods (Table 11.11). The summary results of ranking the alternatives (D0) using the population of 238 MCDM models are presented in Table 11.12. Figure 11.3 shows a histogram of the ranks of the alternatives in terms of both simple statistics and statistics of grouped data.


The summary results demonstrate that, with a high degree of (statistical) confidence, preferences are distributed as follows: A7, A8, A6, A1,. . .

11.5.1 Borda Voting Principles

Since the alternatives receive different ranks in different models, it is rational to take this fact into account by means of additional points. This procedure is consistent with Borda's voting principles [15, 16]. Each elector is one of the 238 MCDM models. How many points to add for each position of a candidate is an informal procedure with many possible options. In the example shown, "tournament-style counting" is used: an alternative gets m−1 points for the first preference, m−2 points for the second preference, and so on. Table 11.13 presents the Borda count for each group of models by aggregation method.

Table 11.13 Borda count by aggregation method. Ranking of alternatives for (D0) based on 238 MCDM models (fragment; the columns for TOPSIS L1, TOPSIS L2, GRA, GRAt, and VIKOR are not recoverable from this extraction)

SAW:     A7 147, A8 114, A6 107, A1 72, A5 66, A3 51, A2 23, A4 8
WPM:     A7 147, A8 118, A6 107, A1 70, A5 61, A3 41, A2 27, A4 17
WASPAS:  A7 147, A8 116, A6 107, A1 70, A5 66, A3 47, A2 26, A4 9
MABAC:   A7 147, A8 114, A6 107, A1 72, A5 66, A3 51, A2 23, A4 8
CODAS:   A7 145, A1 100, A8 98, A3 73, A5 69, A6 66, A2 21, A4 16
COPRAS:  A7 130, A8 120, A6 83, A1 82, A3 55, A5 53, A2 45, A4 20
Outranking (PROMETHEE & ORESTE): A7 56, A8 49, A6 41, A1 33, A3 25, A5 22, A2 19, A4 7
Total:   A7 1550, A8 1276, A6 1011, A1 915, A5 635, A3 506, A2 399, A4 176

Only for two aggregation methods (marked in the original table with color) does the result at the second rank differ from the others. This indicates a weak sensitivity of the ranking of alternatives to the choice of the normalization method for these two groups of aggregation methods (of 11 methods in total). For a number of aggregation methods, the Borda count between the alternatives of ranks 1, 2, and 3 does not differ as significantly as in the case of simple statistics. It follows that all three alternatives A7, A8, A6 are acceptable and the final decision is up to the decision maker.
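Tournament-style Borda counting as described above can be sketched directly; the four-row rank matrix here is hypothetical, standing in for the 238 model "electors":

```python
import numpy as np

# Each row is the ranking (rank of A1..A8) produced by one MCDM model ("elector").
# Hypothetical ranks for illustration -- not the book's actual model outputs.
rank_matrix = np.array([
    [4, 7, 6, 8, 5, 3, 1, 2],
    [4, 7, 6, 8, 5, 3, 1, 2],
    [5, 7, 6, 8, 4, 3, 1, 2],
    [4, 6, 7, 8, 5, 3, 2, 1],
])

m = rank_matrix.shape[1]
points = m - rank_matrix        # tournament-style: m-1 points for rank 1, m-2 for rank 2, ...
borda = points.sum(axis=0)      # total Borda count per alternative across all electors
order = np.argsort(-borda)      # alternatives ordered by descending Borda count
print({f"A{i + 1}": int(borda[i]) for i in order})
```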

11.5.2 Distinguishability of Ratings

A situation of high decision sensitivity can be recognized using the relative performance indicator of the alternatives:

dQ_p = (Q_p − Q_{p+1}) / rng(Q) × 100%,   p = 1, . . ., m − 1,   (11.2)

where Q_p is the value of the performance indicator corresponding to the p-th rank of alternative and rng(Q) = Q_1 − Q_m. The dQ indicator is the relative gain or loss (expressed on the Q scale) of the performance score along the ordered list of alternatives. We believe that two alternatives whose relative gap dQ differs by less than a given a priori error should be considered indistinguishable. For the example above, additional information for the ranking analysis is provided by the relative rating gap dQ presented in Table 11.14. The results demonstrate a significant difference between the ratings of the alternatives of ranks 1, 2, and 3, which increases the confidence of the recommendations for decision makers.
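Equation (11.2) can be sketched as follows; the performance scores Q here are hypothetical:

```python
import numpy as np

def relative_gap(Q):
    """dQ_p = (Q_p - Q_{p+1}) / (Q_1 - Q_m) * 100 for performance scores
    ordered from best to worst (Eq. 11.2)."""
    Q = np.sort(np.asarray(Q, dtype=float))[::-1]   # order by rank: best first
    return (Q[:-1] - Q[1:]) / (Q[0] - Q[-1]) * 100.0

Q = [0.82, 0.49, 0.47, 0.46, 0.41, 0.34, 0.28, 0.27]  # hypothetical scores for A of rank 1..8
dQ = relative_gap(Q)
print(dQ.round(1))
# neighbours whose gap falls below a chosen a-priori error (say 5%) are indistinguishable:
print(np.where(dQ < 5.0)[0] + 1)
```

Note that the gaps dQ_1, . . ., dQ_{m−1} always sum to 100% by construction, so they partition the whole Q range among the rank transitions.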

11.6 Results of Ranking of Alternatives for Decision Matrix D1

The results of ranking the alternatives for the decision matrix D1 within 231 different ranking models, combining 11 aggregation methods and 21 different normalization methods, all other things being equal, are presented in Tables 11.15, 11.16 and 11.17. To compare the results, the ranking was also performed within seven outranking models that do not use data normalization. The use of a large number of models, i.e., a computational experiment, makes it possible to establish the sensitivity of the multi-criteria choice problem to the normalization procedure of the decision matrix. The results of ranking the alternatives in the different models for the D1 problem show a high degree of consistency. A high degree of consistency is also observed for the outranking methods (Table 11.18).


Table 11.14 Relative rating gap dQ (%) for various normalization methods (fragment). The four IZ variants IZ(Max,4), IZ(Sum,4), IZ(Vec,4), and IZ(dSum,4) give identical values and are shown as a single column.

SAW:
dQ    Max    Sum    Vec    dSum   IZ(·,4)
1/2   37.8   23.4   24.7   26.4   33.1
2/3   4.0    13.9   13.1   22.2   18.9
3/4   0.1    1.4    1.7    2.6    11.2
4/5   6.1    3.3    3.9    21.2   12.4
5/6   8.1    2.4    0.7    8.1    10.3
6/7   1.1    6.3    7.2    13.3   4.5
7/8   42.7   49.3   48.7   6.1    9.7

TOPSIS, L2:
dQ    Max    Sum    Vec    dSum   IZ(·,4)
1/2   19.0   7.6    8.5    13.6   28.3
2/3   11.3   8.9    9.2    27.6   19.7
3/4   1.5    8.7    8.2    5.5    18.3
4/5   2.8    3.3    2.9    9.7    9.9
5/6   13.6   14.1   14.3   26.4   8.3
6/7   1.5    12.5   11.5   12.6   6.1
7/8   50.3   44.9   45.3   4.6    9.3

The summary results of ranking the alternatives (D1) using the population of 238 MCDM models are presented in Table 11.19. Figure 11.4 shows a histogram of the ranks of the alternatives in terms of both simple statistics and statistics of grouped data. The simple statistic ranks the alternatives in the order A6, A4, A5, A2, . . . However, grouping by the number of 1st–2nd places and of 1st–2nd–3rd places ranks the alternatives in the order A4, A6, A2, A1, . . .
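The grouped statistics just described (rank-I wins versus total 1st–2nd and 1st–2nd–3rd places) can be sketched from the summary counts of Table 11.19:

```python
import numpy as np

# rank_counts[r, i]: number of models that assigned rank r+1 to alternative A(i+1).
# Values transcribed from Table 11.19 (summary for D1, 238 MCDM models).
rank_counts = np.array([
    [17, 20,   0, 78,  29, 93,  0,   1],   # rank I
    [15, 59,   0, 84,  13, 34, 28,   5],   # rank II
    [36, 73,  13, 31,   0, 26, 32,  27],   # rank III
    [83, 43,  18,  5,   0, 23, 53,  13],   # rank IV
    [68, 11,  11, 31,   0, 20, 59,  38],   # rank V
    [19, 32,   0,  9,   0,  0, 45, 133],   # rank VI
    [ 0,  0, 194,  0,   2, 17, 15,  10],   # rank VII
    [ 0,  0,   2,  0, 194, 25,  6,  11],   # rank VIII
])

top1 = rank_counts[0]                 # simple statistic: number of rank-I wins
top2 = rank_counts[:2].sum(axis=0)    # grouped: ranks I-II
top3 = rank_counts[:3].sum(axis=0)    # grouped: ranks I-II-III
for name, s in [("top-1", top1), ("top-1-2", top2), ("top-1-2-3", top3)]:
    best = np.argsort(-s)[:4]
    print(name, ["A%d" % (i + 1) for i in best])
```

The simple statistic puts A6 first (93 rank-I wins), while the grouped counts promote A4, which matches the reversal discussed in the text.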

11.6.1 Borda Count

The application of the Borda voting procedure allows the priorities to be redistributed. Table 11.20 presents the Borda count for each group of models by aggregation method. The Borda count ranks the alternatives in the order A4, A6, A2, A1, . . ., which coincides with the order obtained by grouping by the number of 1st–2nd places and 1st–2nd–3rd places. This means that the ratings of the alternatives are hardly distinguishable and the rating is highly sensitive to the choice of the normalization method. The three alternatives A4, A6, A2 are acceptable for recommendation and the final decision of the decision maker.

[Table 11.15 Ranking results (D1). Aggregation methods based on additivity (SAW, WPM, WASPAS, MABAC): ranks of the alternatives A1–A8 under each of the 21 normalization methods. The rank entries are not recoverable from this extraction.]

[Table 11.16 Ranking results (D1). Aggregation methods based on distances to a critical link (CODAS, COPRAS, TOPSIS L1, TOPSIS L2): ranks of the alternatives A1–A8 under each of the 21 normalization methods. The rank entries are not recoverable from this extraction.]

[Table 11.17 Ranking results (D1). Aggregation methods based on distances to a critical link (GRA, GRAt, VIKOR): ranks of the alternatives A1–A8 under each of the 21 normalization methods. The rank entries are not recoverable from this extraction.]

Table 11.18 Summary results of the ranking of alternatives (D1). Outranking methods: PROMETHEE & ORESTE (number of models assigning each rank)

Rank   A1  A2  A3  A4  A5  A6  A7  A8
I       0   0   0   6   0   1   0   0
II      0   0   0   1   0   6   0   0
III     0   2   0   0   0   0   1   4
IV      1   4   0   0   0   0   2   0
V       6   1   0   0   0   0   0   0
VI      0   0   0   0   0   0   4   3
VII     0   0   7   0   0   0   0   0
VIII    0   0   0   0   7   0   0   0

Table 11.19 Summary results of ranking of alternatives (D1) based on 238 MCDM models (number of models assigning each rank)

Rank   A1   A2   A3   A4   A5   A6   A7   A8
I       17   20    0   78   29   93    0    1
II      15   59    0   84   13   34   28    5
III     36   73   13   31    0   26   32   27
IV      83   43   18    5    0   23   53   13
V       68   11   11   31    0   20   59   38
VI      19   32    0    9    0    0   45  133
VII      0    0  194    0    2   17   15   10
VIII     0    0    2    0  194   25    6   11

Fig. 11.4 Histogram of the ranks of alternatives (D1). 238 MCDM models

11.6.2 Distinguishability of Ratings

For the example above, the relative rating gap dQ presented in Table 11.21 provides additional information.


Table 11.20 Borda count by aggregation method. Ranking of alternatives for (D1) based on 238 MCDM models (fragment; the columns for TOPSIS L1, TOPSIS L2, GRA, GRAt, and VIKOR are not recoverable from this extraction)

SAW:     A4 133, A6 122, A2 94, A7 87, A1 78, A8 53, A3 21, A5 0
WPM:     A4 128, A6 125, A2 98, A7 87, A1 76, A8 53, A3 21, A5 0
WASPAS:  A4 132, A6 125, A2 94, A7 87, A1 76, A8 53, A3 21, A5 0
MABAC:   A4 133, A6 122, A2 94, A7 87, A1 78, A8 53, A3 21, A5 0
CODAS:   A4 140, A6 111, A1 88, A7 86, A2 75, A8 67, A3 20, A5 1
COPRAS:  A4 109, A2 101, A1 90, A6 88, A7 69, A5 49, A8 42, A3 40
Outranking (PROMETHEE & ORESTE): A4 55, A6 50, A2 36, A8 33, A1 29, A7 28, A3 14, A5 7
Total:   A4 1288, A6 1111, A2 1099, A1 941, A7 801, A8 588, A3 357, A5 283

Table 11.21 Relative rating gap dQ (%) for various normalization methods (fragment). Decision matrix D1. The four IZ variants IZ(Max,4), IZ(Sum,4), IZ(Vec,4), and IZ(dSum,4) give identical values and are shown as a single column.

SAW:
dQ    Max    Sum    Vec    dSum   IZ(·,4)
1/2   2.4    2.0    1.4    3.2    4.2
2/3   4.6    3.4    3.8    3.3    1.1
3/4   7.6    9.8    9.6    3.6    8.2
4/5   0.4    0.6    0.6    14.2   4.3
5/6   23.5   30.3   29.6   13.2   4.6
6/7   24.8   17.4   18.2   33.8   32.6
7/8   36.5   36.5   36.7   28.8   44.9

TOPSIS, L2:
dQ    Max    Sum    Vec    dSum   IZ(·,4)
1/2   3.2    0.1    0.2    16.9   6.3
2/3   1.1    1.4    0.7    0.8    3.5
3/4   8.1    11.0   11.2   3.1    8.9
4/5   6.5    3.7    4.0    17.4   0.8
5/6   46.7   55.7   54.9   28.6   8.1
6/7   10.1   2.4    3.2    31.6   33.6
7/8   24.4   25.8   25.8   1.6    38.7


The results demonstrate a weak distinguishability of the ratings of the alternatives of ranks 1, 2, and 3, which is reflected in the sensitivity of the rating to the choice of the normalization method.

11.7 Conclusions

A simple procedure for ranking alternatives on a discrete set is not as simple as it might seem. The choice of the MCDM model is not formalized, and the selection criteria are not defined. The computational experiment presented in this chapter is very productive: an analysis of the ranks of the alternatives for the various normalization methods shows the influence of the normalization method on the ranking result. The analysis makes it possible to establish the degree of influence of the choice of the normalization method on the rating and to assess its sensitivity and the distinguishability of the alternatives. The ranking results for the linear normalization methods are, on average, the same. Having a statistical picture of the rank evaluation, decision-making can be carried out not only according to the statistics of alternatives with rank 1, but also using the statistics of alternatives with the highest total scores of ranks 1 and 2, or ranks 1, 2, and 3. This is relevant when the difference in the values of the efficiency indicator Qi of the alternatives with the first three ranks is small, or when the performance indicators are not distinguishable.

References

1. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey. Springer.
2. Chakraborty, S., & Zavadskas, E. K. (2014). Applications of WASPAS method as a multi-criteria decision-making tool. Informatica, 25(1), 1–20. https://doi.org/10.15388/Informatica.2014.01
3. Pamučar, D., & Ćirović, G. (2015). The selection of transport and handling resources in logistics centres using Multi-Attributive Border Approximation area Comparison (MABAC). Expert Systems with Applications, 42, 3016–3028.
4. Ghorabaee, K. M., Zavadskas, E. K., Turskis, Z., & Antucheviciene, J. (2016). A new COmbinative Distance-based ASsessment (CODAS) method for multi-criteria decision-making. Economic Computation & Economic Cybernetics Studies & Research, 50(3), 25–44.
5. Ustinovichius, L., Zavadskas, E. K., & Podvezko, V. (2007). Application of a quantitative multiple criteria decision making (MCDM-1) approach to the analysis of investments in construction. Control and Cybernetics, 36(1), 251–268.
6. Archana, M., & Sujatha, V. (2012). Application of fuzzy MOORA and GRA in multi-criterion decision making problems. International Journal of Computer Applications, 53(9), 46–50.
7. Wang, Q. B., & Peng, A. H. (2010). Developing MCDM approach based on GRA and TOPSIS. In Applied Mechanics and Materials (Vols. 34–35, pp. 1931–1935). Trans Tech Publications. https://doi.org/10.4028/www.scientific.net/amm.34-35.1931
8. Opricovic, S. (1998). Multicriteria optimization of civil engineering systems. PhD thesis, Faculty of Civil Engineering, Belgrade, 2(1), 5–21.
9. Brans, J. P., Mareschal, B., & Vincke, P. (1986). How to select and how to rank projects: The PROMETHEE method. European Journal of Operational Research, 24(2), 228–238.
10. Pastijn, H., & Leysen, J. (1989). Constructing an outranking relation with ORESTE. Mathematical and Computer Modelling, 12, 1255–1268.
11. Wang, X. D., Gou, X. J., & Xu, Z. S. (2020). Assessment of traffic congestion with ORESTE method under double hierarchy hesitant fuzzy linguistic environment. Applied Soft Computing, 86, 105864.
12. Mukhametzyanov, I. Z. (2023). Elimination of the domain's displacement of the normalized values in MCDM tasks: The IZ-method. International Journal of Information Technology and Decision Making. https://doi.org/10.1142/S0219622023500037
13. Mukhametzyanov, I. Z. (2023). On the conformity of scales of multidimensional normalization: An application for the problems of decision making. Decision Making: Applications in Management and Engineering. https://doi.org/10.31181/dmame05012023i
14. Mukhametzyanov, I. Z. (2020). ReS-algorithm for converting normalized values of cost criteria into benefit criteria in MCDM tasks. International Journal of Information Technology and Decision Making, 19(5), 1389–1423. https://doi.org/10.1142/S0219622020500327
15. Borda count. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Borda_count
16. Lamboray, C. (2007). A comparison between the prudent order and the ranking obtained with Borda's, Copeland's, Slater's and Kemeny's rules. Mathematical Social Sciences, 54(1), 1–16.

Chapter 12

Significant Difference of the Performance Indicator of Alternatives

Abstract  The chapter discusses the problem of the distinguishability of alternatives in situations where the estimates of the alternatives are approximately equal and are sensitive to the initial data, to the choice of normalization method, and to other model parameters. Indicators for comparing ratings are proposed, together with numerical algorithms for estimating the magnitude of the relative error, which determine a significant difference in ratings under variations of the initial data and of the normalization methods. Based on this analysis, it is possible to identify the aggregation methods that have the best ranking resolution.

Keywords  MCDM rank model · Multivariate normalization · Significant difference in the assessment score of alternatives · VIKOR method and difference in ratings · Ranking using distinguishability of the rating

12.1 Relative Difference in the Performance Indicator of Alternatives

To determine the priority of alternatives, it is not enough to compare the absolute values of the efficiency indicator Qi. Attribute values may not be accurate, for many reasons: an attribute may be approximated, the data source may be unreliable, a measurement may contain an error, the measurements for different alternatives may have been carried out by different methods, some attributes may be random variables or given as interval values, and so on. Therefore, the values of the efficiency indicator are in fact determined with an error, Qi ± ΔQi, and the distinguishability of alternatives is determined by the error ΔQi. As is known, the error of a sum of linear quantities is equal to the sum of the errors of the terms:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3_12



dQ_i = \sum_j \frac{\partial Q_i}{\partial r_{ij}} \, \Delta r_{ij} = \sum_j \omega_j \, \Delta r_{ij}.   (12.1)

In accordance with this formula, the resulting error is estimated as:

\max_j \Delta r_{ij} \le \Delta Q_i \le \sum_j \Delta r_{ij}.   (12.2)

In many cases it is not possible to estimate the error; it can then be set a priori. If, for example, the error in the assessment of an attribute of an alternative is 5% of its value, then the error in the alternative's performance indicator will be at least 5%.

A large proportion of rank-based MCDM methods perform ranking by comparing the absolute values of the performance indicators Qi. However, the normalization scales of different attributes are not equivalent, and the ranges of the values can differ significantly. By analogy with the disposition of attributes, it is advisable to consider the relative deviations of the values of the performance indicators. Moreover, relative deviations are best determined for an ordered series of values, from best to worst. For most ranking methods, the best efficiency score is the maximum. In some cases, where the performance measure of an aggregation method is defined as a deviation from an ideal or target value (such as VIKOR), the best value of the performance measure is the minimum. For practical reasons, in order to compare performance indicators obtained for different aggregation methods, the direction of optimization can be inverted using the ReS-algorithm. As shown in Chap. 5, the ReS-algorithm of inversion preserves the dispositions of values.

Thus, let us order the values of Qi in descending order (if necessary, performing the inversion using the ReS-algorithm) and determine the relative differences of the efficiency indicator, reduced to the range of its change:

dQ_i = \frac{Q_{i-1} - Q_i}{\mathrm{rng}(Q)}, \quad i = 2, \ldots, m,   (12.3)

\mathrm{rng}(Q) = \max_i Q_i - \min_i Q_i = Q_1 - Q_m,   (12.4)

Q_1 \ge Q_2 \ge \ldots \ge Q_m.   (12.5)

The ReS-algorithm applied to the inversion of the Qi values has the following form:

Q'_i = \mathrm{ReS}(Q) = -Q_i + \max_i Q_i + \min_i Q_i.   (12.6)
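Equations (12.3)–(12.6) can be sketched in code as follows (an illustration, not code from the book; the function names are hypothetical):

```python
def res_invert(q):
    """ReS inversion (12.6): flips the optimization direction while
    preserving the range and dispositions of the values."""
    return [-x + max(q) + min(q) for x in q]

def relative_differences(q, minimize=False):
    """Relative differences dQ_i (12.3) of scores ordered from best to worst."""
    if minimize:                      # e.g. VIKOR: the best score is the minimum
        q = res_invert(q)
    q = sorted(q, reverse=True)       # Q_1 >= Q_2 >= ... >= Q_m   (12.5)
    rng = q[0] - q[-1]                # rng(Q) = Q_1 - Q_m         (12.4)
    return [(q[i - 1] - q[i]) / rng for i in range(1, len(q))]

dq = relative_differences([0.82, 0.80, 0.55, 0.31])
# dq[0] is the relative gap between the rank-I and rank-II alternatives
```

By construction the relative differences of an ordered series sum to 1, so dQ is a dimensionless decomposition of the full range of the indicator.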

The intensity and the relative dispositions of the performance indicator are the indicators for evaluating the distinguishability of the ranking results.

Thus, the indicator dQ of the performance indicator Q makes it possible to establish the priority of alternatives (a significant difference) in the ranking. To do this, it is necessary to determine the critical value dQc. Then, under the condition dQ2 > dQc (or dQ3 > dQc), the alternatives of the first and second ranks (or of the second and third ranks) are distinguishable; otherwise they are not distinguishable.

As part of the computational experiment performed in Chap. 11 for ranking a problem with decision matrix D1 using 231 rank MCDM models, it was found that in 24% of cases the relative difference between the ratings of the alternatives of the first and second ranks does not exceed 3%; for 39% of the models this difference does not exceed 5%; for 68% of the models it does not exceed 10%; and for 82% of the models it does not exceed 15%. This means that accounting for error is a necessary component of solving decision-making problems in cases where the ratings are sensitive to the initial data or to the choice of model components (methods). Thus, it is necessary to:

1. determine the error estimate for a significant difference in the ratings of two alternatives,
2. specify the ranking algorithm using the difference criterion.

First of all, we present the solution for point 2, assuming that the relative error is set a priori.

12.2 Ranking Algorithm Using Distinguishability Criteria

For compromise solutions that include the alternatives of the first three ranks based on the ordering of the Qi values, five groups of solutions are possible according to the indistinguishability criterion [1]:

1. I, II, III: the alternatives of the first three ranks are significantly distinguishable,
2. I ≈ II, II ≠ III: the alternatives of ranks I and II are indistinguishable,
3. I ≠ II, II ≈ III: the alternatives of ranks II and III are indistinguishable,
4. I ≈ II, II ≈ III, I ≈ III: the alternatives of ranks I, II, and III are indistinguishable,
5. I ≈ II, II ≈ III, I ≠ III: the alternatives of ranks I, II and of ranks II, III are indistinguishable.

We assign the alternatives with the first three ratings (Q1 > Q2 > Q3) to classes in accordance with the rating distinguishability criterion (dQc) in an ordered list as follows:

Group 1: dQ2 > dQc and dQ3 > dQc
Group 2: dQ2 ≤ dQc and dQ3 > dQc
Group 3: dQ2 > dQc and dQ3 ≤ dQc
Group 4: dQ2 ≤ dQc and dQ2 + dQ3 ≤ dQc
Group 5: otherwise

If more than three alternatives have to be used for the analysis of distinguishability, the number of classes increases. At the same time, the algorithm for assigning a result to a particular class remains the same: the combinatorial method.
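The classification rule above can be sketched as a small function (illustrative; dq2, dq3 stand for the relative differences dQ2, dQ3):

```python
def distinguishability_group(dq2, dq3, dqc):
    """Assign the top-three ranking to one of the five groups of Sect. 12.2,
    given the relative differences dQ2, dQ3 and the critical value dQc."""
    if dq2 > dqc and dq3 > dqc:
        return 1        # I, II, III all distinguishable
    if dq2 <= dqc and dq3 > dqc:
        return 2        # I and II indistinguishable
    if dq2 > dqc and dq3 <= dqc:
        return 3        # II and III indistinguishable
    if dq2 <= dqc and dq2 + dq3 <= dqc:
        return 4        # I, II, III all indistinguishable
    return 5            # I~II and II~III indistinguishable, I and III distinguishable

distinguishability_group(0.02, 0.08, 0.05)  # -> 2
```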

12.3 Numerical Example of the Ranking of Alternatives, Taking into Account the Criterion of Distinguishability

Table 12.1 presents the decision matrix D1 generated for the numerical example (see Chap. 11), for which the rating (TOPSIS) is highly sensitive to the choice of normalization method (Max, Sum, Vec, dSum, Max-Min). The ranking of alternatives was carried out using 11 different methods: SAW, WPM, WASPAS, MABAC, CODAS, COPRAS, TOPSIS(L1), TOPSIS(L2), GRA, GRAt, VIKOR [2–9]. All the algorithms are described in detail in Chap. 2. To compare the results, the ranking was also performed with seven outranking models [10–12] that do not use data normalization: the PROMETHEE-II method with four options for the choice of preference function (V-Shape; Linear; Gauss; and a Linear-Gauss combination) and the ORESTE method with three distance-metric options (L1, L2, L∞). The MCDM models use 21 different normalization methods in five groups:

1. Max, Sum, Vec, dSum,
2. Max-Min, Z[0,1], mIQR[0,1], mMAD[0,1],
3. IZ(Max,4), IZ(Sum,4), IZ(Vec,4), IZ(dSum,4),
4. MS(Max,4), MS(Sum,4), MS(Vec,4), MS(dSum,4),
5. PwL[0,1], SSp[0,1], Sgm[0,1], Sgm(Z), Sgm(IQR).

The first group contains four linear methods (Max, Sum, Vec, dSum) without displacement, with a range of values [0, 1]. The second group contains four linear methods with a displacement: Max-Min, Z-score, and the analogs mIQR and mMAD, with the range transformed to [0, 1]. In the third and fourth groups, respectively, the IZ and MS transformations of the domains of normalized values are used, which are determined by the normalization methods Max, Sum, Vec, dSum. The boundaries of the transformation [I, Z] are defined as follows: I = median(r_j^min), Z = 1 [13, 14]. The fifth group uses 5 non-linear normalization methods with a range of [0, 1], based on the weakening-boosting technique of normalized values. For all normalization methods, if the aggregation method involves the inversion of cost-criterion attributes into benefit criteria, the ReS-algorithm was used [15]. The criteria weights are the same for all calculation cases.

Table 12.1 Decision matrix D1 [8×5] (benefit criteria marked +, cost criteria marked –)

Alternatives   C1 (+)    C2 (+)   C3 (–)   C4 (+)   C5 (–)
A1             4728.0    81.4     596.0    172.5    1148.3
A2             6081.2    84.5     567.3    157.9    2389.4
A3             5543.9    82.4     567.2    136.1    2217.9
A4             5888.1    71.2     558.9    176.5    1496.6
A5             4552.0    79.7     630.1    173.3    2675.4
A6             4962.8    83.3     533.7    155.6    1550.7
A7             5565.2    72.2     546.4    168.3    1588.6
A8             6257.4    73.1     558.9    145.5    1372.6

Fig. 12.1 Histogram of the ranks of alternatives (D1). 231 MCDM models. dQc = 5%

Figure 12.1 shows a histogram of the ranks of the alternatives, taking into account the grouping of the data in accordance with the a priori distinguishability of ratings dQc = 5%. The preferences are distributed as follows: A6, A2, A5, A4, . . ., in contrast to the results of the simple statistics (Chap. 11): A6, A4, A5, A2, . . . Numerical analysis shows that about 2/3 of all the considered MCDM design options result in compromise solutions.
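For reference, the most common linear normalizations used above can be sketched as follows (an illustrative implementation, not the book's code; it covers Max, Sum, Vec from group 1 and Max-Min from group 2, while the dSum and displaced variants are omitted; benefit direction is assumed, with cost criteria inverted afterwards via the ReS formula):

```python
import math

def normalize(column, method="Max"):
    """Linear normalization of one criterion column (benefit direction)."""
    if method == "Max":
        m = max(column)
        return [a / m for a in column]
    if method == "Sum":
        s = sum(column)
        return [a / s for a in column]
    if method == "Vec":
        v = math.sqrt(sum(a * a for a in column))
        return [a / v for a in column]
    if method == "Max-Min":
        lo, hi = min(column), max(column)
        return [(a - lo) / (hi - lo) for a in column]
    raise ValueError(method)

c2 = [81.4, 84.5, 82.4, 71.2, 79.7, 83.3, 72.2, 73.1]   # column C2 of D1
r = normalize(c2, "Sum")   # values sum to 1
```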

12.4 Assessing the Significance of the Difference in the Ratings of Alternatives in the VIKOR Method

The idea of using relative scores and evaluating the significance of differences when ranking alternatives is not new. One popular aggregation method, VIKOR [9], described in Chap. 2, performs a two-stage analysis of the significant difference. The result of the VIKOR procedure is three rating lists, S, R, and Q. The alternatives are evaluated by sorting the values of S, R, and Q according to the criterion of the minimum value. As a compromise solution, the alternative A1 whose performance indicator Q has the lowest value is proposed, provided the following two conditions are met:

1. "Acceptable advantage": Q(A2) − Q(A1) ≥ 1/(m − 1), where A2 is the alternative in the second position of the Q-rating list.
2. "Acceptable decision stability": alternative A1 must also be best scored on S and/or R.

If one of conditions 1 or 2 is not met, a set of compromise solutions is proposed, which consists of:

– the alternatives A1 and A2, if condition 1 is true and condition 2 is false, or
– the set of alternatives {A1, A2, . . ., Ak}, if condition 1 is false, k being the position in the ranking of the alternative Ak verifying Q(Ak) − Q(A1) < 1/(m − 1).

12.5 Evaluation of the Distinguishability of the Rating When the . . .

In this section, the simple comparison of the point values of the performance indicators (Qp > Qq ⇒ Ap > Aq) is supplemented with a comparison of their significant difference based on interval estimates of the true value.

Let us determine the distinguishability of alternatives using interval estimates of the true value of the alternatives' performance indicators Q.

Definition. For rank-based MCDM methods, we define two alternatives Ap and Aq as distinguishable if the interval estimates of the true values of their performance indicators do not overlap:

\left[ Q_p - \Delta Q_p,\; Q_p + \Delta Q_p \right] \cap \left[ Q_q - \Delta Q_q,\; Q_q + \Delta Q_q \right] = \emptyset,   (12.13)

otherwise, the two alternatives Ap and Aq are indistinguishable.

Thus, for the distinguishability of alternatives it is required to estimate the error of the performance indicator of the alternatives. According to the theory of errors, the absolute (and relative) error of an indirectly measured quantity Y is determined from the rules for differentiating the function Y of many variables x1, x2, . . ., xn. To a first approximation:

\Delta Y = \sum_{j=1}^{n} \left| \frac{\partial Y}{\partial x_j} \right| \Delta x_j.   (12.14)

For the case of an additive aggregation function of the attributes of the alternatives, Q_i = \sum_j w_j r_{ij}, the partial derivatives with respect to the variables r_{ij} are equal to w_j. Let us assume that the weights of the attributes are the same. Then we get:

Q_i = \sum_{j=1}^{n} r_{ij} = \sum_{j=1}^{n} \left( r_{ij} \pm \frac{\Delta a_{ij}}{k_j} \right) = \sum_{j=1}^{n} \left( r_{ij} \pm \Delta r_{ij} \right) = \sum_{j=1}^{n} r_{ij} \pm \sum_{j=1}^{n} \Delta r_{ij} = Q_i \pm \Delta Q_i,   (12.15)

and the absolute error of the performance indicator is estimated as:

\Delta Q_i = \sum_{j=1}^{n} \Delta r_{ij}.   (12.16)

The relative error of the result, δQ, is determined by the formula:

\delta Q_i = \frac{\Delta Q_i}{Q_i} = \frac{\sum_j \omega_j \, \Delta r_{ij}}{\sum_j \omega_j \, r_{ij}} = \frac{\overline{\Delta r_{ij}}}{\overline{r_{ij}}}.   (12.17)

In accordance with Eq. (12.17), the relative error of the performance indicator of the ith alternative is the ratio of the weighted average error of the normalized attribute values to the weighted average normalized value. Given that the compression ratios kj increase across the normalization methods Max-Min, Max, Vec, Sum, the value of \sum_j r_{ij} takes its largest value (equal to 1) for the Sum method. Therefore, the estimate of the error of the performance indicator of the alternatives (Eq. 12.17) is the smallest among the above linear normalization methods. In the particular case when the weights of the criteria are the same and the Sum normalization method is used:

\delta Q_i = \frac{1}{n} \sum_{j=1}^{n} \delta r_{ij} = \overline{\delta r_{ij}},   (12.18)

where \overline{\delta r_{ij}} is the average relative error of the normalized values of the attributes.

Thus, the ranking resolution is determined by the error in estimating the natural values of the attributes of the alternatives and depends on the normalization method through the scaling factor kj. In what follows, it is assumed that the error estimate δij = δ(aij) is known (given), and either the average over many observations or, in the absence of statistics, its estimate is taken as the true value of the attribute. Taking this into account, we take this value as the base (initial) value and denote it a_{ij}^0.

A special case is the situation when, for a fixed criterion and the same data source, the error in the evaluation of the alternatives does not depend on the alternatives, i.e., δij = δj. The condition δij = δj for all i = 1, . . ., m excludes any priority of alternatives before the decision is made. Clearly, this requirement is mandatory when evaluating alternatives: an overestimation (or underestimation) of any alternative on any of the attributes entails its priority.

To conduct a statistical analysis of the sensitivity of the ranking of the alternatives to the assessment of the attributes for various normalization methods, it is necessary to obtain a representative statistical sample of size N for the performance indicators of the alternatives Qi under variations of the decision matrix aij within the range of acceptable values:

a_{ij}(k) \in \left[ a_{ij}^0 (1 - \delta_{ij}),\; a_{ij}^0 (1 + \delta_{ij}) \right], \quad k = 1, \ldots, N.   (12.19)

The variation of the decision matrix is performed using a random number generator uniformly distributed on the interval [0, 1]. The variation algorithm has the following simple form:

a_{ij}(k) = a_{ij}^0 \left( 1 \pm \delta_{ij} \cdot \mathrm{rnd}_k() \right),   (12.20)
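The variation step (12.19)–(12.20) can be sketched as follows (illustrative; the random `±` sign and the magnitude rnd_k() ∈ [0, 1] are realized together by drawing a uniform number on [−1, 1]):

```python
import random

def vary_matrix(a0, delta, rng=random.Random(0)):
    """One variation of the decision matrix per Eq. (12.20):
    a_ij(k) = a_ij^0 * (1 ± delta_ij * rnd_k()), rnd_k ~ U[0, 1]."""
    return [[a0[i][j] * (1 + delta[i][j] * rng.uniform(-1, 1))
             for j in range(len(a0[0]))]
            for i in range(len(a0))]

a0 = [[4728.0, 81.4], [6081.2, 84.5]]     # fragment of D1 (columns C1, C2)
delta = [[0.05, 0.05], [0.05, 0.05]]      # 5% relative error
a1 = vary_matrix(a0, delta)
# every a1[i][j] lies within the bounds (12.19)
```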

where rnd_k() is a function that returns a random number uniformly distributed on the interval [0, 1] at the kth iteration step. For each variation of the decision matrix aij(k), we determine the value of the performance indicator of each alternative, Qi(k), i = 1, . . ., m. Then, based on the statistics of Qi(k), a statistical assessment of the standard error ΔQi of the performance indicator of the alternatives is carried out. Thus, the sensitivity analysis of the ranking with respect to attribute variation has the following compact form:

\Delta Q_i = \Phi\left[ \delta(a_{ij}) \right].   (12.21)

12.6 Statistics of the Performance Indicator of Alternatives When Varying the Decision Matrix

12.6.1 Statistical Experiment

The statistical experiment is performed according to the following scheme:

1. Set the base value of the decision matrix aij(0).
2. Set the matrix of relative errors δij for the matrix aij.
3. Perform a variation of the decision matrix: a_{ij}(k) = a_{ij}^0 (1 ± δ_{ij} · rnd_k()), k = 1, . . ., N.
4. For each attribute aggregation method: {SAW, CODAS, MABAC, COPRAS, VIKOR, TOPSIS(L1, L2, L∞), GRA, GRAt, PROMETHEE-II, ORESTE-1}.
5. For each of the normalization methods: {Max, Sum, Vec, dSum, IZ(Max), IZ(Sum), IZ(Vec), IZ(dSum), Max-Min, Z[0,1], mIQR[0,1], mMAD[0,1], MS(Max), MS(Sum), MS(Vec), MS(dSum), PwL[0,1], SSp[0,1], Sgm[0,1], Sgm(Z), Sgm(IQR)}.
6. Determine the N values of the performance indicator of each alternative (i = 1, . . ., m): Qi(k), corresponding to the kth variation of the decision matrix aij(k). For each Qi(k), calculate the values of the relative indicators dQi(k).
7. Calculate the following standard statistics over the N values:

mean value: Q_i^m = \mathrm{mean}(Q_i(k)),
standard deviation: \sigma_i = \mathrm{std}(Q_i(k)).


8. Check the hypothesis of a normal distribution of the random variable Qi(k) using the Kolmogorov-Smirnov, Jarque-Bera, and Lilliefors tests [16–18].
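Steps 1–7 of this scheme can be sketched as follows (a minimal illustration with equal weights, Sum normalization, and SAW aggregation; the D1 fragment, the 2% error level, and the seed are assumptions of the example, not prescriptions of the book):

```python
import random, statistics

a0 = [[4728.0, 81.4], [6081.2, 84.5], [5543.9, 82.4]]   # D1 fragment, benefit criteria
delta, N = 0.02, 1024                                   # 2% relative error, sample size
rng = random.Random(1)

def saw_scores(a):
    """Sum normalization per column, then SAW with equal weights."""
    cols = list(zip(*a))
    sums = [sum(c) for c in cols]
    return [sum(a[i][j] / sums[j] for j in range(len(cols)))
            for i in range(len(a))]

samples = [[] for _ in a0]                              # Q_i(k) statistics
for k in range(N):
    ak = [[x * (1 + delta * rng.uniform(-1, 1)) for x in row] for row in a0]
    for i, q in enumerate(saw_scores(ak)):
        samples[i].append(q)

q_mean = [statistics.mean(s) for s in samples]          # Q_i^m
q_std = [statistics.pstdev(s) for s in samples]         # sigma_i
```

With Sum normalization the scores of all alternatives sum to the number of criteria in every variation, which makes the result easy to sanity-check.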

Notes:
(a) For a statistical experiment using the PROMETHEE-II and ORESTE-1 aggregation methods, normalization is not required.
(b) For a statistical experiment using the VIKOR aggregation method, variations of the linear normalization methods are not required. The VIKOR method uses a homogeneous function to aggregate the attributes of the alternatives, so the value of the performance indicator does not change when the attributes are linearly transformed (the invariance property of linear transformations, Chap. 4).
(c) The TOPSIS aggregation method is implemented in several variants with different distance metrics. Three variants are relevant for the analysis: Taxi Cab (City Block) (L1), Euclidean (L2), and Chebyshev (L∞) distance.
(d) For all attribute aggregation methods and for all normalization methods, the inversion of the normalized values of cost criteria is performed using the ReS-algorithm (Chap. 5).
(e) The weights of all criteria in the statistical experiment are assumed to be the same in order to eliminate the priority of individual attributes. The statistical analysis is performed to evaluate the influence of the variation of the decision matrix on the result of the ranking of the alternatives.
(f) For the SAW method, in the case of equal criteria weights, the central limit theorem holds, according to which the distribution of the sum of a large number of random variables is asymptotically normal. Since all methods for aggregating the attributes of alternatives use the summation of attributes in various forms, it is advisable in the statistical analysis to check the hypothesis of the normal distribution law.
(g) Why do we need knowledge of the distribution law of the performance indicator of the alternatives? What knowledge can be extracted from the normality of the distribution of a random variable? If the attribute estimates are not accurate, it is necessary to establish a criterion for distinguishing alternatives that are close in terms of the values of the efficiency indicators. The error is difficult to estimate, and a priori estimates are often used. One approach is statistical. The goal is achieved only for a limited number of distributions, one of which is the normal distribution, for which interval estimates of the mean and criteria for a significant difference between the means of two samples from a normal population (the t-test) are known.

12.6.2 Distribution of the Performance Indicator of Alternatives

The main aggregation methods use attribute summation in various forms. In accordance with Lyapunov's central limit theorem (loosely stated), the sum of a large number of random variables, under some additional restrictions, asymptotically tends to the normal distribution law. Therefore, we put forward the hypothesis of a normal distribution of the random variable Qi(k). Figures 12.2 and 12.3 show typical histograms of the distributions of the efficiency indicator (Q) of the alternatives of ranks I–III for the two aggregation methods SAW and GRAt. The samples were obtained from the statistical experiment for different values of the relative error δij of the attributes aij of the alternatives. Similar results hold for all the aggregation methods used.

Fig. 12.2 Histograms of the values of the performance indicator of alternatives (Q) of ranks I–III for various error values of the attributes (δ) of the alternatives aij. SAW method. 1024 variations of the decision matrix

Fig. 12.3 Histograms of the values of the performance indicator of alternatives (Q) of ranks I–III for various error values of the attributes (δ) of the alternatives aij. GRAt method. 1024 variations of the decision matrix

Additionally, in the upper part of each histogram in Figs. 12.2 and 12.3 there are point estimates of the unknown distribution parameters (μ; σ), the mean value and standard deviation, as well as the logical values of three tests of normality: the Jarque-Bera, Lilliefors, and Kolmogorov-Smirnov tests. The null hypothesis is that the sample Q comes from a normal distribution with unknown mean and variance; the alternative hypothesis is that it does not. The triple (JB-Lf-KS) is represented by a set of 0s and 1s, one for each test. The tests are implemented using the jbtest(), lillietest(), and kstest() functions in MatLab. The functions return the Boolean value 1 if the null hypothesis is rejected at the 5% significance level, and 0 if there is no reason to reject the null hypothesis. For example, the triple (010) means that the second test (Lilliefors) rejects the null hypothesis at the 5% significance level.

The use of three different tests increases the reliability of accepting or rejecting the hypothesis if the result is the triple (000) or (111). For other combinations, such as (110) or (001), it is necessary to understand the reason for rejecting the null hypothesis. First, for each logical value 1 (rejection of the hypothesis), it is necessary to calculate the significance level of the corresponding test at which there is no
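A Python analog of this check can be sketched with SciPy (an illustration, not the book's MatLab code; the Lilliefors test is available separately in `statsmodels.stats.diagnostic.lilliefors` and is only mentioned in a comment here):

```python
import numpy as np
from scipy import stats

def normality_flags(q, alpha=0.05):
    """Return (JB, KS) flags analogous to the 0/1 triples in Figs. 12.2-12.3:
    1 = the test rejects normality at level alpha, 0 = no reason to reject."""
    jb = 1 if stats.jarque_bera(q).pvalue < alpha else 0
    # Plain KS against N(mean, std) fitted to the same sample is approximate;
    # the Lilliefors correction (statsmodels.stats.diagnostic.lilliefors)
    # accounts for the estimated parameters.
    ks = 1 if stats.kstest(q, "norm", args=(np.mean(q), np.std(q))).pvalue < alpha else 0
    return jb, ks

rng = np.random.default_rng(0)
flags_normal = normality_flags(rng.normal(0.5, 0.01, size=1024))
flags_skewed = normality_flags(rng.exponential(1.0, size=2048))  # strongly asymmetric
```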


reason to reject the null hypothesis. If this significance level is not more than 10%, the statistical test should be repeated on new samples of different sizes. The final decision is then made by the decision maker. In a situation where at least one of the tests of normality does not reject the null hypothesis, there are grounds for a further correct estimation of the error in the performance indicator of the alternatives.

Numerical analysis shows that the distribution of the Q statistics for the SAW, CODAS, MABAC, COPRAS, TOPSIS(L1, L2), and PROMETHEE-II methods is described by the normal distribution law. The aggregation methods VIKOR, D'Ideal, and ORESTE-1 (L1, L2) are not robust to changes in the initial data: multimodality, asymmetry of the distribution, and a significant deviation from the normal distribution law are observed. For the VIKOR, TOPSIS-Inf, and GRA methods, the performance indicators of the alternatives of the first three ranks deviate from normality due to strong asymmetry.

12.7 Ranking Alternatives Based on Simple Comparison of the Rating

Within the framework of the statistical experiment described in the previous section, at each step we perform the ranking of the alternatives based on a simple (point) comparison of the performance indicators Qi(k). Because of the variation of the decision matrix, the ranks of the alternatives obviously change. Each such change is particular: it occurs for a certain combination of attribute values, which is random. We do not know in advance whether such a combination of attributes will be realized, but the probability of such an event is not small. As a result of the statistical experiment, we obtain estimates of the proportion of the total number of situations (the frequency) in which one or another alternative has rank I, II, or III. This information fills in the uncertainty in the estimates of the attributes of the alternatives and is very useful to the decision maker.

Figures 12.4 and 12.5 present the results of the ranking of the alternatives based on a simple comparison of their performance under variation of the decision matrix, for various methods of aggregating the attributes of the alternatives. Additionally, for each alternative, the distribution of the efficiency indicator Qi under variations of the decision matrix (δ*j = 2%, N = 1024) is given, as well as the average value of the efficiency indicator and the interval Qim ± σi (highlighted in red in the figures). The significant overlap of the intervals Qim ± σi for the alternatives of the higher ranks for a number of aggregation methods allows us to conclude that these aggregation methods are sensitive to the variation of the decision matrix. In such situations, ranking the alternatives by a simple comparison of their performance indicators is not enough; it is necessary to use interval estimates and establish an interval measure of the distinguishability of


Fig. 12.4 Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*j = 2%, N = 1024). Fraction 1

alternatives. The proximity of Qi(k) to the normal distribution (see Sect. 12.6) allows the use of the 1σ, 2σ, and 3σ intervals to evaluate the distinguishability of the alternatives based on interval comparison.

Note: For the attribute aggregation methods VIKOR and ORESTE-1, the best alternative is the one with the lowest value of the performance indicator Q. To preserve the generality of the visual representation in Figs. 12.4 and 12.5, the ReS-algorithm for transforming the Q values was used for the VIKOR and ORESTE-1 aggregation methods in the form Qi = −Qi + Qimax + Qimin. With such a


Fig. 12.5 Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*j = 2%, N = 1024). Fraction 2

transformation, the values are inverted while the range of change and the dispositions of the Qi values are preserved.

Taking the multivariance into account, Figs. 12.4 and 12.5 additionally give, for each attribute aggregation method, tables of the relative frequencies of ranks I–III for three alternatives (in descending order of frequency) under variations of the decision matrix. Some of the aggregation methods shown in the


examples (Figs. 12.4 and 12.5), such as TOPSIS and COPRAS, rank the alternatives uniquely on the basis of a simple comparison of the performance measures. For the other aggregation methods the result is multivariate: several alternatives may have rank I (II, III). When the variation level δ*j of the decision matrix is increased, the results may change because of the increase in the estimate of the standard deviation of the efficiency indicator σ(Q).

The visual perception of the graphical ranking results in Figs. 12.4 and 12.5 can be misleading, because for different aggregation methods the range of the values of the performance indicator Q along the abscissa is different. For example, the COPRAS method may appear more efficient than the TOPSIS aggregation method because its histograms have a smaller kurtosis and hence the efficiency measure appears more distinct. However, this is not the case; it is due to the different scale of values along the X-axis for the different aggregation methods. To eliminate this perception error, it is easy to bring all the diagrams to the dimensionless scale [0, 1] by simply normalizing the Qi values:

Q'_i = \frac{Q_i - Q_i^{\min}}{Q_i^{\max} - Q_i^{\min}}.   (12.22)
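The rescaling (12.22) in code (illustrative):

```python
def rescale(q):
    """Bring scores to the dimensionless scale [0, 1] per Eq. (12.22)."""
    lo, hi = min(q), max(q)
    return [(x - lo) / (hi - lo) for x in q]

rescale([0.2, 0.5, 0.8])  # approximately [0.0, 0.5, 1.0]
```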

The corresponding distributions of the performance indicator of alternatives are shown in Figs. 12.6 and 12.7. Now, based on the visual perception of the graphical ranking results, preference should be given to the TOPSIS aggregation method, for which the histograms of the distribution of Q values have a smaller kurtosis.

12.8 Evaluation of the Criterion Value of the Performance Indicator Based on the Error in the Evaluation of the Decision Matrix

For various MCDM models (aggregation methods in combination with normalization methods, the choice of distance metric, etc.), analytical dependences of the form (12.15), (12.18), (12.21) cannot be established. However, a numerical analysis of the dependence of ΔQi on δ(aij) based on the results of a statistical experiment is quite simple to perform. Figure 12.8 shows the dynamics of the interval of distinguishability of the performance indicators for the alternatives of ranks I–III for four different aggregation methods under variations of the decision matrix. There is an obvious increase in the width of the interval of distinguishability mean(Qi) ± std(Qi). Similar results hold for all aggregation methods. It is important that the distinguishability of the efficiency indicator differs between aggregation methods.


Fig. 12.6 Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*j = 2%, N = 1024). Normalized values of the performance indicator of alternatives. Fraction 1

So, according to Fig. 12.8, for the SAW method the alternatives of the first three ranks are indistinguishable, while for the TOPSIS method, under the same conditions of the statistical numerical experiment, the first and second alternatives differ significantly up to δ ≈ 10%. For the COPRAS, TOPSIS, and VIKOR aggregation methods, the width of the difference interval is minimal, which characterizes these methods as the ones with the best resolution of alternatives.


Fig. 12.7 Ranking of alternatives based on a simple comparison of the rating of alternatives while varying the decision matrix for different aggregation methods. Distribution of the performance indicator of alternatives with variations in the decision matrix (δ*j = 2%, N = 1024). Normalized values of the performance indicator of alternatives. Fraction 2

Figures 12.9 and 12.10 show the dynamics of the relative error of the efficiency indicator for the alternatives of ranks I–III for various methods of aggregating the attributes of the alternatives, depending on the relative error of the values of the decision matrix. The green line shows the "equilibrium" line of the function Φ (12.21), on which the error in the estimates of the decision matrix δ(aij) and the resulting error δQi coincide. Therefore, all methods for which the lines Φ[δ(aij)] lie below the equilibrium line are preferable and have good resolution.


Fig. 12.8 Changing the interval of distinguishability of performance indicators for alternatives of I–III ranks for various aggregating methods with variations in the decision matrix δ. N = 1024

In accordance with the results in Figs. 12.7–12.10, the SAW, TOPSIS, COPRAS, VIKOR, and ORESTE methods are preferred. The following two figures, Figs. 12.11 and 12.12, give a graphical illustration of which alternatives of the first three ranks are indistinguishable for different values of the error in the decision matrix. Five consecutive vertical lines correspond to the range mean(Qi) ± std(Qi) for the array of error values δ(aij) = {2, 5, 8, 12, 15}%. The following color palette is used: black—alternatives are distinguishable (group 1 according to Sect. 12.2); blue—two alternatives have rank I and are indistinguishable (group 2); blue—two alternatives have rank II and are indistinguishable (group 3); magenta—three alternatives have rank I and are indistinguishable (group 4); brown—two alternatives have rank I and are indistinguishable, and two alternatives have rank II and are indistinguishable (group 5). For example, according to


Fig. 12.9 Change in the relative error of the performance indicator for alternatives of I–III ranks for various aggregating methods, depending on the relative error of the values of the decision matrix. N = 1024. Fraction 1

Fig. 12.11, for the SAW aggregation method with δ(aij) = 5%, two alternatives A2 and A4 have rank I and are indistinguishable (group 2 according to Sect. 12.2). Similarly, at δ(aij) = 8% (and at 12% or 15%), two alternatives A2 and A4 have rank I and are indistinguishable, and alternatives A4 and A6 have rank II and are indistinguishable (group 5). The graphical illustration of the priority of alternatives in Figs. 12.11 and 12.12 provides important additional information for the decision maker. One of the important results of the statistical experiment is the determination of the distinguishability index of alternatives of ranks I–III. For each variation of the decision matrix, the result of the point ranking and the distinguishability of alternatives of ranks I–III belong to one of the five groups defined above in Sect. 12.3. In this way, we determine the indicator of distinguishability of alternatives as the share (in percent) of each group (group percentage, GP). The GP score depends both on the method of aggregation of the attributes of alternatives Agg = {SAW, TOPSIS,


Fig. 12.10 Change in the relative error of the performance indicator for alternatives of I–III ranks for various aggregating methods, depending on the relative error of the values of the decision matrix. N = 1024. Fraction 2

VIKOR, . . .} and on the normalization method used, Norm = {Max, Sum, Vec, . . .}. The first group is the group in which all alternatives are distinguishable. The higher the GP of this group for a particular pair (model) "Agg–Norm," the higher the resolution of the model. The second and fourth groups, on the contrary, determine the degree of indistinguishability of alternatives of ranks I–III. The smaller the total share of these groups for a specific "Agg–Norm" model, the higher the resolution of the model. For example, the values of GP for compromise solutions of the first three ranks are shown in the histograms in Fig. 12.1. This section presents the results of a numerical analysis of the efficiency of the "Agg–Norm" model based on a statistical experiment with variations of the decision matrix (Tables 12.2, 12.3 and 12.4).
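The classification into the five groups and the resulting GP indicator can be sketched as follows (a simplified illustration, assuming that two alternatives are "indistinguishable" when their mean ± std intervals overlap; the function names are hypothetical):

```python
import numpy as np

def gp_group(mean, std):
    """Classify one experiment into the five distinguishability groups of Sect. 12.2.
    Group 1: ranks I-III all distinguishable; 2: two rank-I alternatives tied;
    3: two rank-II alternatives tied; 4: three rank-I alternatives tied;
    5: ties at both rank I and rank II. (A sketch: 'tied' here means the
    mean ± std intervals of neighboring ranks overlap.)"""
    mean, std = np.asarray(mean), np.asarray(std)
    order = np.argsort(-mean)
    lo, hi = mean - std, mean + std
    a, b, c = order[0], order[1], order[2]
    tie12 = hi[b] >= lo[a]          # rank I vs rank II overlap
    tie23 = hi[c] >= lo[b]          # rank II vs rank III overlap
    if tie12 and tie23:
        return 4 if hi[c] >= lo[a] else 5
    if tie12:
        return 2
    if tie23:
        return 3
    return 1

def gp_percentages(groups):
    """Share (in percent) of each group over all variations of the decision matrix."""
    groups = np.asarray(groups)
    return {g: 100.0 * np.mean(groups == g) for g in range(1, 6)}
```

Running `gp_group` once per variation of the decision matrix and passing the results to `gp_percentages` yields GP values of the kind reported in Tables 12.2-12.4 for each "Agg-Norm" pair.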


Fig. 12.11 Distinguishability and priorities of alternatives of I–III ranks for various aggregation methods depending on variations in the decision matrix. N = 1024. Fraction 1

The data in Tables 12.2 and 12.3 differ in the method of estimating the value of dQ—the error in the performance indicator of alternatives. For the statistical experiment whose results are presented in Table 12.2, dQ = 5% is given a priori based on expert judgment (a fixed value for all alternatives). For the statistical experiment whose results are presented in Table 12.3, dQi is determined from the standard deviation statistics as a function of δ(aij): ΔQi = Φ[δ(aij)]. Frequency analysis for the case when the error is determined expertly and fixed (Table 12.2) allows us to conclude that for the SAW and TOPSIS aggregation methods, the Sum and Vec normalization methods have the best distinguishability among the linear normalization methods (the highest frequency in the first group and the lowest in the second). Similarly, for the COPRAS and GRAt aggregation methods, the Max-Min, dSum, IZ, and MS normalization methods have the best distinguishability among the linear normalization methods. The corresponding GP indicators of group 1 and the totals for groups 2 and 4 are highlighted in the table. Among the non-linear normalization methods, PwL, SSp, and Sgm (Chap. 9) have the best distinguishability, since these normalization methods "strengthen strong" alternatives and "weaken weak" ones.
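The contrast-increasing effect of the non-linear Sgm(Z) transformation can be illustrated with a short sketch (the logistic form 1/(1 + e^(-z)) applied to per-attribute Z-scores is assumed; this is not the book's exact implementation, and the data are hypothetical):

```python
import numpy as np

def sgm_z(D):
    """Sgm(Z): Z-score each attribute, then squash with the logistic function.
    Values near the column mean map near 0.5, while strong values saturate
    toward 1 and weak values toward 0, increasing the contrast between
    alternatives without changing their order within an attribute."""
    Z = (D - D.mean(axis=0)) / D.std(axis=0, ddof=1)
    return 1.0 / (1.0 + np.exp(-Z))

D = np.array([[6500.0, 85], [5800, 83], [4500, 71], [5600, 76]])
V = sgm_z(D)   # every column now lies strictly inside (0, 1)
```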


Fig. 12.12 Distinguishability and priorities of alternatives of I–III ranks for various aggregation methods depending on variations in the decision matrix. N = 1024. Fraction 2

Table 12.2 Distinguishability of alternatives of I–III ranks with variation of the decision matrix for different models "Agg–Norm." 1024 variations. dQ = 5%

              GP, %                           GP(I ≈ II or I ≈ II ≈ III), %
        SAW    COPRAS  TOPSIS  GRAt      SAW    COPRAS  TOPSIS  GRAt
Max     54.3   57.5    56.7    57.3      23.6   33.9    0.6     25.1
Sum     53.5   62.3    99.4    54        1.9    15.2    0.1     29.7
Vec     54.8   61.1    98.8    55.9      1.8    17.6    0       28.6
MM      42     43.1    35.1    60.1      30.4   1.3     34.7    19.8
dSum    42.4   58.2    59.9    58.4      27.5   3.8     9.1     18.2
IZ      42.4   60      34.3    61.2      30.1   0.2     33      20.4
MS      38.4   64.4    30.7    59.6      31.9   2.4     36.1    19.6
PwL     39.1   40.1    34.8    56        29.5   0.6     32.1    20.7
SSp     56.7   44.9    47.1    63.3      25.4   0.5     29.8    14.8
Sgm     49.2   63.4    36.6    55.3      20.5   0.5     23.6    14.1


Table 12.3 Distinguishability of alternatives of I–III ranks with variation of the decision matrix for different models “Agg–Norm.” 1024 variations. dQi = Φ[δ(aij)]

              GP, %                           GP(I ≈ II or I ≈ II ≈ III), %
        SAW    COPRAS  TOPSIS  GRAt      SAW    COPRAS  TOPSIS  GRAt
Max     3      5.6     13.6    3.7       77.9   83.9    4.3     82.4
Sum     0      0       94.1    0.6       100    100     0.6     93.7
Vec     0      0       89.7    1         89.7   98.9    0.4     93.8
MM      35.4   64.4    1.7     22.2      40     1.2     77.3    51.9
dSum    0      4.6     15.8    15.4      97.3   27      25.3    55.5
IZ      15.1   46.4    2       21.2      59.6   1       77.1    54.2
MS      6.3    35.4    1.1     18        76     14.2    82.7    56.4
PwL     32.4   64.3    0.9     17.4      39.1   0.6     73.5    54.1
SSp     54.8   90.6    8.1     26.8      28.7   0.1     62.7    41.9
Sgm     47.8   88.8    2.1     16.5      25.1   0.4     55.7    44.9

An analysis of the distinguishability index GP for the case when the error is determined on the basis of standard deviation statistics (Table 12.3) leads to somewhat different conclusions. This is because for the SAW, TOPSIS, and COPRAS aggregation methods the error dQ becomes somewhat lower than the a priori set value of 5%, while for the GRAt method it is higher (see Figs. 12.8 and 12.9). For the SAW and COPRAS aggregation methods, the best distinguishability of the rating among the linear normalization methods is obtained with the Max-Min method (the highest value of the GP index of distinction between the first and second ratings). For the COPRAS aggregation method, the IZ method also demonstrates good distinguishability. According to the results of Table 12.3, for the TOPSIS aggregation method the Sum and Vec normalization methods have the best distinguishability among the linear normalization methods. For the GRAt aggregation method, the IZ and MS normalization methods have the best distinguishability among the linear normalization methods. Among the non-linear normalization methods, the best distinguishability is achieved by PwL, SSp, and Sgm. The corresponding rows and columns in the table are highlighted. Table 12.4 presents the results of the frequency dynamics at various errors of the variation of the decision matrix. Increasing the error δ(aij) in the estimates of the decision matrix leads to a decrease in the degree of distinguishability for all methods of aggregating the attributes of alternatives. The numerical analysis described above provides the researcher with the opportunity to select an "Agg–Norm" model that improves the degree of distinguishability of the ranking of alternatives.


Table 12.4 Change in the index GP of the distinguishability of alternatives at different errors δ for the variation of the decision matrix. Linear methods of normalization. 1024 variations. dQi = Φ[δ(aij)]

                         GP, %                              GP(I ≈ II or I ≈ II ≈ III), %
Norm     Agg      δ=2%   5%     8%     12%    15%      δ=2%   5%     8%     12%    15%
Max      SAW      61.9   2.8    0.6    0.1    0        18.6   76.6   85.7   89.7   93.5
         COPRAS   33.9   5.7    1.9    0.8    0.1      66.1   86     87.3   89.4   90.9
         TOPSIS   34.7   12.5   4.7    2.2    0.6      0      3.2    31.2   58.4   73.1
         GRAt     29.5   4.7    1.2    0.5    0.5      64.4   82     85     85.8   86.5
Sum      SAW      0      0      0      0      0        100    100    100    100    100
         COPRAS   0      0      0      0      0        100    100    100    100    100
         TOPSIS   100    93.9   53.2   19.4   8.1      0      0.3    13.9   31.1   43.6
         GRAt     8      1      0.2    0.1    0.1      90.9   93.6   93.9   92     93.6
Vec      SAW      0      0      0      0      0        0.1    89.7   98.7   99.5   99.8
         COPRAS   9      0      0      0      0        62     99     99.9   99.8   100
         TOPSIS   100    90.3   46.3   14.4   5.7      0      0.3    10.5   34.4   42.7
         GRAt     9.2    0.9    0.1    0.2    0        89.6   92     91     92.8   92.1
Max-Min  SAW      46.2   35.5   22.9   13.3   12.1     35.5   37.5   45.8   52.4   56.8
         COPRAS   75.1   66.8   58     48.3   37.5     0      2.1    8.7    16.2   20.5
         TOPSIS   3.9    1.4    1      0      0.1      69.7   75.7   78.6   83.6   87.1
         GRAt     46.3   21.7   13.1   6.5    3.6      43.1   52.7   57.1   66.7   69
dSum     SAW      0.2    0      0      0      0        85.4   98.4   98.3   99.4   99.9
         COPRAS   43.8   5.1    1.9    0.2    0        0      27.7   56.3   82.5   91.6
         TOPSIS   50.9   16.1   9      2.2    0.8      5.3    25.2   45.5   64.6   72.9
         GRAt     50.4   18.4   8.1    1.8    2.6      32.1   51.6   63.7   74.9   75.2
IZ(Max)  SAW      26     13.3   7.3    3.3    2.1      50.8   60.6   64.2   70.6   77.3
         COPRAS   63     46.5   31.3   15.5   8.3      0      1.1    13.7   39.6   52.7
         TOPSIS   4      1.3    0.7    0.3    0.2      70.8   76.2   80.5   82.2   85.1
         GRAt     46.4   21.9   13.2   6.9    4        40.6   51.6   57.3   64.5   68.5
MS(Z)    SAW      14.3   4.3    2.1    0.8    0.3      71.3   78.9   79.3   83.8   84.9
         COPRAS   68.9   33.4   15.8   7      4.1      0.1    14.6   37.5   58.6   69.5
         TOPSIS   4.8    1.9    0.7    0.4    0.2      63     81.1   81.8   83     86.2
         GRAt     39.9   16.7   8.6    4.6    2.4      49.1   56.6   64.6   68.8   74.8

12.9 Conclusions

The use of the dQ indicator to evaluate the significance of the difference between alternatives in a ranking is a simple and convenient decision tool and can be used with any method of aggregating the attributes of alternatives. The correctness of the assessment of the significance of the difference between alternatives in a ranking is determined by the value of the allowable error dQc of the assessment of the attributes of the alternatives. Clearly, the presented results relate to a specific problem and are of limited generality. However, it is not the specific results that matter, but the methodology for determining the degree of distinguishability of the ranking of alternatives based on a numerical statistical experiment of the following form:

• define a set of aggregation methods of attributes,
• define a set of normalization methods,
• estimate the error of "measurement" of the attributes,
• vary the decision matrix within the error of the "measurement" of the attributes and determine the numerical rating of alternatives,
• determine the numerical rating of alternatives and evaluate the error of the performance indicator on the set of ratings,
• establish the distinguishability of alternatives of ranks I–III,
• rank the alternatives,
• choose the best model according to the degree of distinguishability of the ranking of alternatives.

References

1. Mukhametzyanov, I., & Pamučar, D. (2018). Sensitivity analysis in MCDM problems: A statistical approach. Decision Making: Applications in Management and Engineering, 1(2), 51–80. https://doi.org/10.31181/dmame1802050m
2. Hwang, C. L., & Yoon, K. (1981). Multiple attributes decision making: Methods and applications. A state-of-the-art survey. Springer.
3. Chakraborty, S., & Zavadskas, E. K. (2014). Applications of WASPAS method as a multi-criteria decision-making tool. Informatica, 25(1), 1–20. https://doi.org/10.15388/Informatica.2014.01
4. Pamučar, D., & Ćirović, G. (2015). The selection of transport and handling resources in logistics centres using Multi-Attributive Border Approximation area Comparison (MABAC). Expert Systems with Applications, 42, 3016–3028.
5. Ghorabaee, K. M., Zavadskas, E. K., Turskis, Z., & Antucheviciene, J. (2016). A new COmbinative Distance-based ASsessment (CODAS) method for Multi-Criteria Decision-Making. Economic Computation & Economic Cybernetics Studies & Research, 50(3), 25–44.
6. Ustinovichius, L., Zavadskas, E. K., & Podvezko, V. (2007). Application of a quantitative multiple criteria decision making (MCDM-1) approach to the analysis of investments in construction. Control and Cybernetics, 36(1), 251–268.


7. Archana, M., & Sujatha, V. (2012). Application of fuzzy MOORA and GRA in multi-criteria decision making problems. International Journal of Computer Applications, 53(9), 46–50.
8. Wang, Q. B., & Peng, A. H. (2010). Developing MCDM approach based on GRA and TOPSIS. In Applied Mechanics and Materials (Vols. 34–35, pp. 1931–1935). Trans Tech Publications. https://doi.org/10.4028/www.scientific.net/amm.34-35.1931
9. Opricovic, S. (1998). Multicriteria optimization of civil engineering systems. PhD thesis, Faculty of Civil Engineering, Belgrade, 2(1), 5–21.
10. Brans, J. P., Mareschal, B., & Vincke, P. (1986). How to select and how to rank projects: The PROMETHEE method. European Journal of Operational Research, 24(2), 228–238.
11. Pastijn, H., & Leysen, J. (1989). Constructing an outranking relation with ORESTE. Mathematical and Computer Modelling, 12, 1255–1268.
12. Wang, X. D., Gou, X. J., & Xu, Z. S. (2020). Assessment of traffic congestion with ORESTE method under double hierarchy hesitant fuzzy linguistic environment. Applied Soft Computing, 86, 105864.
13. Mukhametzyanov, I. Z. (2023). Elimination of the domain's displacement of the normalized values in MCDM tasks: The IZ-method. International Journal of Information Technology and Decision Making. https://doi.org/10.1142/S0219622023500037
14. Mukhametzyanov, I. Z. (2023). On the conformity of scales of multidimensional normalization: An application for the problems of decision making. Decision Making: Applications in Management and Engineering. https://doi.org/10.31181/dmame05012023i
15. Mukhametzyanov, I. Z. (2020). ReS-algorithm for converting normalized values of cost criteria into benefit criteria in MCDM tasks. International Journal of Information Technology and Decision Making, 19(5), 1389–1423. https://doi.org/10.1142/S0219622020500327
16. Kolmogorov–Smirnov test. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
17. Lilliefors test. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Lilliefors_test
18. Jarque–Bera test. (2022, May 28). In Wikipedia. https://en.wikipedia.org/wiki/Jarque%E2%80%93Bera_test

Conclusion

This book has shown what difficulties can arise when normalizing a decision matrix, what to expect from such problems, and how to solve them. The main estimates were obtained in relation to the weighted sum method. In the research community there is a dual understanding of normalization methods within rank-based MCDM. On the one hand, the normalization procedure is an integral part of the MCDM method, integrated into it. On the other hand, the normalization procedure is understood as a part of MCDM independent of the aggregation method, which leads to an extension of the list of methods, for example, TOPSIS(Vec) with Euclidean normalization or TOPSIS(Sum) with Sum (intensity) normalization. The author's point of view is as follows: the MCDM rank model includes a normalization method, a distance metric, a method of criterion weight estimation, and an attribute aggregation method. Different combinations of these components (different models or different designs) lead to different ranking results. As this study shows, the implications of the choice of normalization method for rank-based MCDM methods can be significant. The result (the ranks of alternatives) depends on the normalization method and is largely determined by the particular priorities of the attributes. The invariant properties of linear normalization methods formulated by the author indicate how to eliminate simple problems and avoid obvious errors when choosing a normalization method for MCDM problems. Quite natural requirements for preserving the information content of the data after normalization and for eliminating the priority of individual criteria formed the basis of the multi-step normalization technique in the form of three new normalization methods. The ReS-algorithm is relevant for any normalization method when an inversion of the optimization direction must be performed.
The IZ-method allows one to align the domains of the normalized values for all criteria. The MS-method transforms Z-score data into an interval consistent for all attributes, while maintaining
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
I. Z. Mukhametzyanov, Normalization of Multidimensional Data for Multi-Criteria Decision Making Problems, International Series in Operations Research & Management Science 348, https://doi.org/10.1007/978-3-031-33837-3


the average values (median) and variance of all criteria. All three methods make it possible to eliminate the priority of the contributions of individual criteria to the performance indicator of the alternatives. The variety of normalization methods and the absence of a selection criterion put researchers in a difficult position. The author's recommendations are as follows:
– when there is no asymmetry in the data, it is advisable to use the two-step procedure IZ(Max,4) with alignment of the lower boundary along the median. Max-normalization has a clear interpretation as a fraction of the "best" value, which is necessary for the correct aggregation of attributes;
– for an attribute with asymmetry in the data, use Z-normalization in the first step, followed by the logistic transformation Sgm(Z). Then equalize the boundaries of all attributes using the IZ-transformation. This will reduce the effect of asymmetry on the result;
– for non-numerical data on individual attributes, normalize using the desirability scale and the desirability function. Then equalize the boundaries of all attributes using the IZ-transformation.
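The recommended two-step procedure can be sketched as follows (a Python rendering of the idea behind IZ(Max,4), mirroring the appendix function Fun_IZ with par=4; the data and the ReS-style cost inversion are illustrative assumptions, not the book's exact code):

```python
import numpy as np

def iz_max4(D, benefit):
    """Two-step IZ(Max,4): Max-normalization followed by the IZ-transformation,
    with the common boundaries taken as the medians of the per-attribute minima
    and maxima (variant par=4 of Fun_IZ in the appendix). Cost attributes are
    first inverted ReS-style (max + min - v)."""
    V = D / D.max(axis=0)                          # Max-normalization
    V = np.where(benefit, V, V.max(axis=0) + V.min(axis=0) - V)
    vmin, vmax = V.min(axis=0), V.max(axis=0)
    iz1, iz2 = np.median(vmin), np.median(vmax)    # common lower/upper boundary
    return (V - vmin) * (iz2 - iz1) / (vmax - vmin) + iz1

D = np.array([[6500.0, 85, 667], [5800, 83, 564], [4500, 71, 478], [5600, 76, 620]])
benefit = np.array([True, True, False])
V = iz_max4(D, benefit)
# after the transform, every attribute spans the same interval [iz1, iz2]
```

Because all attributes now share the same lower and upper boundaries, no single criterion dominates the aggregated performance indicator merely through the range of its normalized values.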

Appendix: Program Code “Normalization of Multidimensional Data” for MatLab System

% Main: MCDM_Norm.m
%
% ---------------- Normalization of decision matrix DM -------------------
%
% Author: Dr. Irik Z. Mukhametzyanov
% Date 05 May 2021
% Update 21 January 2023
%
% --- Ufa State Petroleum Technological University ---
%
% INPUT ARGUMENTS:
%
% D  - [mxn] Decision Making Matrix
%      m-alternatives, n-criteria (attributes)
% w  - the weight of attributes determined by decision-maker [1xn]
% MM - [1xn] vector criteriaSign; = 1 for benefit (revenue) attributes;
%      =-1 for cost attributes (expenditure)
%-------------------------------------------------------------------------
%-- input data
D = [6500 85 667 140 1750
     5800 83 564 145 2680
     4500 71 478 150 1056
     5600 76 620 135 1230
     4200 74 448 160 1480
     5900 80 610 163 1650
     4500 71 478 150 1056
     6000 81 580 178 2065];
[m,n]=size(D);
MM=[1 1 -1 1 -1];   % +1 LTB (Max), -1 STB (Min), 0-Target


% aT=[0 0 615 0 0];   %-- target value of attributes
% Pr=[0 0 1 0 0];     % If Pr(j)=1, then j is a Target-based Criteria (Max) larger-is-better
%-- normalization methods:
cNorm={'Max', 'Sum', 'Vec', 'dSum', ...
       'M-M', 'Z[0,1]', 'mIQR[0,1]', 'mMAD[0,1]', ...
       'PwL[0,1]', 'SSp[0,1]', 'Sgm[0,1]', 'Sgm(Z)', 'Sgm(IQR)'...
       'IZ(Max,4)', 'IZ(Sum,4)', 'IZ(Vec,4)', 'IZ(dSum,4)',...
       'MS(Max,4)', 'MS(Sum,4)', 'MS(Vec,4)', 'MS(dSum,4)'};
kNorm=size(cNorm,2);
pk= [1 2 3 4 ...
     5 6 7 9 ...
     12 13 14 15 16 ...
     21 22 23 24 ...
     31 32 33 34];   % number of method
D1=Fun_ReS(D, MM);   %-- Invers before Norm()
%-------------------------------------------------------------------------
% Normalization of decision matrix DM
for ik=1:kNorm
    par=pk(ik);
    MM1=MM;
    %-- Linear
    if par11 & par20 & par30 & par4 & par11 & par   transform Z-score to [0,1]

    %-- non linear normalization methods
    if par==10   % Log
        lnD=log(D);
        V=lnD./sum(lnD);
    end
    if par==11   % Max2
        V=(D./repmat(t2,m,1)).^2;
    end
    iPrint=0;
    if iPrint==1
        figure
        jT=2;
        plot(D(:,jT),V(:,jT),'ob')
        [x1 j1]=min(D(:,jT)); [x2 j2]=max(D(:,jT));
        y1=V(j1,jT); y2=V(j2,jT);
        line([x1 x2],[y1 y2],'LineStyle','-','LineWidth',0.5,'Color','g')
    end
    %---------------- Target-based normalization (linear) ----------------
    % t-Norm (t-Max, t-Sum,...
    switch nargin
      case 4
        for j=1:n
            rt(j)=(T(j)-a(j))./k(j);   %-- normalized target value: r(T)
            if par==9   % rt(j) transform MS to [0,1]
                rt(j)=rt(j)-Vmin;
                rt(j)=rt(j)/Vmax1;
            end
            rmax=max(V(:,j)); rmin=min(V(:,j));
            if Pr(j) == 1
                IZ1= min(V(:,j)); IZ2= max(V(:,j));
                for i=1:m
                    if D(i,j) > T(j)

                        % if par==9
                        V(i,j)=2*rt(j)-V(i,j);
                        % else
                        % V(i,j)=(2*T(j)-D(i,j)-a(j))/k(j);
                        % end
                    end
                end
                if iPrint==1
                    hold on
                    plot(D(:,jT),V(:,jT),'ok')
                    [x1 j1]=min(D(:,jT)); [x2 j2]=max(D(:,jT));
                    y1=V(j1,jT); y2=V(j2,jT);
                    line([x1 T(jT)],[y1 rt(jT)],'LineStyle','-',...
                         'LineWidth',0.5,'Color','b')
                    line([T(jT) x2],[rt(jT) V(j2,jT)],'LineStyle',...
                         '-','LineWidth',0.5,'Color','b')
                end
                for i=1:m   %--shift to max
                    V(i,j)=V(i,j)+rmax-rt(j);
                end
                if iPrint==1
                    plot(D(:,jT),V(:,jT),'or')
                    [x1 j1]=min(D(:,jT)); [x2 j2]=max(D(:,jT));
                    y1=V(j1,jT); y2=V(j2,jT);
                    line([x1 T(jT)],[y1 rmax],'LineStyle','-',...
                         'LineWidth',0.5,'Color','r')
                    line([T(jT) x2],[rmax V(j2,jT)],'LineStyle',...
                         '-','LineWidth',0.5,'Color','r')
                end
                % IZ-transform
                %Vmin= min(V(:,j)); Vmax= max(V(:,j));
                %Viz(:,j)=(V(:,j)-Vmin)*(IZ2-IZ1)/(Vmax-Vmin) + IZ1;
                % plot(D(:,jT),Viz(:,jT),'sm')
            end
            if Pr(j) == -1
                for i=1:m
                    if D(i,j) < T(j)
                        if par==9
                            V(i,j)=2*rt(j)-V(i,j);
                        else
                            V(i,j)=(2*T(j)-D(i,j)-a(j))/k(j);
                        end
                    end
                    if iPrint==1
                        hold on
                        plot(D(:,jT),V(:,jT),'ok')
                        [x1 j1]=min(D(:,jT)); [x2 j2]=max(D(:,jT));
                        y1=V(j1,jT); y2=V(j2,jT);
                        line([x1 T(jT)],[y1 rt(jT)],'LineStyle','-',...
                             'LineWidth',0.5,'Color','b')
                        line([T(jT) x2],[rt(jT) V(j2,jT)],'LineStyle',...


                             '-','LineWidth',0.5,'Color','b')
                    end
                end
                for i=1:m   %--shift to max
                    V(i,j)=V(i,j)-rt(j)+rmin;
                end
                if iPrint==1
                    plot(D(:,jT),V(:,jT),'or')
                    [x1 j1]=min(D(:,jT)); [x2 j2]=max(D(:,jT));
                    y1=V(j1,jT); y2=V(j2,jT);
                    line([x1 T(jT)],[y1 rmin],'LineStyle','-',...
                         'LineWidth',0.5,'Color','r')
                    line([T(jT) x2],[rmin V(j2,jT)],'LineStyle',...
                         '-','LineWidth',0.5,'Color','r')
                end
            end
        end
    end
end
%-------------------------------------------------------------------------
function [V]=Fun_ReS(V, MM)
%-------------------------------------------------------------------------
%
% ReS-algorithm: to invert values of attributes
%
% INPUT ARGUMENTS:
%
% V  - [mxn] Decision Making Matrix after normalization with
%      linear or not linear method:
%      Max; Sum; (Max-Min); Vector; Max2; Log & e.t.
% MM - [1xn] vector criteriaSign; = 1 for benefit (revenue) attributes;
%      =-1 for cost attributes (expenditure)
%
% RETURN:
%
% V  - [mxn] inverse Decision Making Matrix
%
%-------------------------------------------------------------------------
[m n]=size(V);
t1=min(V);   %-- min
t2=max(V);   %-- max
for j=1:n
    if MM(j)==-1
        V(:,j)=t2(j) -V(:,j) +t1(j);


end end end %-----------------------------------------------------------------------– function [Viz]=Fun_IZ(V,MM,IZ1,IZ2,par) %-----------------------------------------------------------------------– % IZ-transformation normalized values (IZ-method) % % INPUT ARGUMENTS: % % V - [mxn] Decision Making Matrix after normalization with % linear or not linear method: % Max; Sum; (Max-Min); Vector; Max2; Log & e.t. % MM - [1xn] vector criteriaSign; = 1 for benefit (revenue) attributes; % =-1 for cost attributes (expenditure) % IZ1 - Lower boundary (Lb), value from [0, 1) % IZ2 - Upper boundary (Ub), value from (0, 1] % % if par=1 then IZ1= min(Vmin); IZ2= max(Vmax) % if par=2 then IZ1= max(Vmin); IZ2= max(Vmax) % if par=3 then IZ1= mean(Vmin); IZ2= mean(Vmax) % if par=4 then IZ1= median(Vmin); IZ2= median(Vmax) % % RETURN: % % Viz - [mxn] normalized Decision Making Matrix with IZ-method % %-----------------------------------------------------------------------– [m n]=size(V); Vmax = max(V); Vmin = min(V); if IZ1==0 & IZ2==0 if par==1 IZ1= min(Vmin); IZ2= max(Vmax); end if par==2 IZ1= max(Vmin); IZ2= max(Vmax); end if par==3 IZ1= mean(Vmin); IZ2= mean(Vmax); end if par==4 IZ1= median(Vmin); IZ2= median(Vmax); end end if IZ1 >= IZ2 ' ****** I >= Z ???'

    return
end
%-- bias to 0 point
V1=V-repmat(Vmin,m,1);
%-- IZ algorithm
for j=1:n
    Viz(:,j)=V1(:,j)*(IZ2-IZ1)/(Vmax(j)-Vmin(j)) + IZ1;
end
end
%-------------------------------------------------------------------------
function [Viz]=Fun_MSx(V,D2, par)
%-------------------------------------------------------------------------
% MS transform MS-method
%
% INPUT ARGUMENTS:
%
% V  - [mxn] Decision Making Matrix after normalization
%      with MAX-method
% MM - [1xn] vector criteria Sign:
%      1 for benefit; -1 for cost attributes
% par1 = 1 shift to 1, else shift to 0;
% par2 =-1 without inversion (non ReS) for TOPSIS et al.
%
% OUTPUT ARGUMENTS:
%
% Viz - [mxn] Re-Normalization Decision Making Matrix
%
%-------------------------------------------------------------------------
[m n]=size(V);
k0=max(max(D2)) - min(min(D2));           %-- define Z-I var1
if par==2
    k0=max(max(D2)) - max(min(D2));       %-- define Z-I
end
if par==3
    k0=mean(max(D2)) - mean(min(D2));     %-- define Z-I
end
if par==4
    k0=median(max(D2)) - median(min(D2)); %-- define Z-I
end
Vmean= mean(V);
Vstd = std(V);
%-- Z-score
Z=(V-repmat(Vmean,m,1))./repmat(Vstd,m,1);


Z1=Z-repmat(min(min(Z)),m,n);
maxZ1=max(max(Z1));
Viz= Z1/maxZ1*k0;
%-- without ReS
Viz=Viz+repmat(1-max(max(Viz)),m,n);
end
%-------------------------------------------------------------------------
function V=Fun_NormPQ(D, p,q, par)
%-------------------------------------------------------------------------
%
% Pwl - PieceWise Linear function of normalization DM
% SSp - S Shaped spline function (SSp)
% Sgm - Sigmoid function (or Logistic function) of normalization
%
% INPUT ARGUMENTS:
%
% D - [mxn] Decision Making Matrix
%
% for PwL:
% p q - threshold for Piecewise linear function for all criteria
%       example: if n=5  p=[.15 .10 .12 .15 .10];
%                        q=[.85 .80 .85 .77 .82];
%       or: p=ones(1,n)*0.1; q=ones(1,n)*0.9;
%
% for Sgm - Sigmoid function (or Logistic function):
% p - slope factor: 1=tg(45)
% q - point of symmetry center for all criteria: median =.5(+-)0.25
%     example: if n=5  p=[1 0.75 0.75 1 .9]; q=[0.5 .45 .55 .5 .5];
%     or: p=ones(1,n)*0.95; q=ones(1,n)*0.5;
%
% RETURN:
%
% V - [mxn] normalized Decision Making Matrix with Pwl-method
%
%-------------------------------------------------------------------------
[m, n]=size(D);
t1=min(D);   %-- min
t2=max(D);   %-- max
a=t1; k=t2-t1;
V0=(D-repmat(a,m,1)) ./ repmat(k,m,1);   % Max-Min
%-------------------------------
if par==1   % PwL-function
    for j=1:n

Appendix: Program Code “Normalization of Multidimensional Data” for MatLab System for i=1:m if V0(i,j)< p(j) V(i,j)=0; elseif V0(i,j)< q(j) V(i,j)=(V0(i,j)-p(j))/( q(j)-p(j)); else V(i,j)=1; end end end end %-----------------------------if par==2 % SSp-function for j=1:n for i=1:m if V0(i,j)