Social Network Large-Scale Decision-Making: Developing Decision Support Methods at Scale and Social Networks (Uncertainty and Operations Research) 9819977932, 9789819977932

This book focuses on the following three key topics in social network large-scale decision-making: structure-heterogeneous information fusion, clustering, and consensus building.


Language: English · Pages: 163 [157] · Year: 2023


Table of contents :
Preface
Contents
About the Authors
Acronyms
Important Symbols
List of Figures
List of Tables
1 Introduction
1.1 Motivation
1.2 Who Should Read This Book and Why?
1.3 Chapter Overview
2 Preliminary Knowledge
2.1 Large-Scale Decision-Making (LSDM)
2.2 Social Network Group Decision-Making (SNGDM)
2.3 Consensus-Reaching Process and Minimum-Cost Consensus
2.4 Proposed Social Network Large-Scale Decision-Making Scenarios
References
3 Trust and Behavior Analysis-Based Structure-Heterogeneous Information Fusion
3.1 Research Background and Problem Configuration
3.2 Analysis of the Selection Behaviors of Attributes and Alternatives
3.3 Procedure of Trust and Behavior Analysis-Based Fusion Method
3.3.1 Constructing the Trust Sociomatrix
3.3.2 Calculating the Distance Between SH Evaluation Information
3.3.3 Generating the Weights of DMs
3.3.4 Fusing SH Individual Evaluation Information
3.4 Discussion and Comparative Analysis
3.4.1 Further Analysis on the Calculation of the Weights of DMs
3.4.2 Dealing with Extreme Decision Situations
3.4.3 Comparative Analysis
3.5 Conclusions
References
4 Trust Cop-Kmeans Clustering Method
4.1 Trust-Similarity Analysis-Based Decision Information Processing
4.2 Trust Cop-Kmeans Clustering Algorithm
4.3 Determining the Weights of Clusters and DMs
4.4 Discussion and Comparative Analysis
4.4.1 TCop-Kmeans Algorithm Versus K-Means Algorithm
4.4.2 Determination of Trust Constraint Threshold
4.4.3 Analysis of K
4.5 Conclusions
References
5 Compatibility Distance Oriented Off-Center Clustering Method
5.1 Preliminaries About PLTSs and Problem Configuration
5.1.1 Probabilistic Linguistic Term Sets (PLTSs)
5.1.2 Configuration of an SNLSDM-PL Problem
5.2 Compatibility Distance Oriented Off-Center Clustering Algorithm
5.2.1 CDOOC Clustering Algorithm
5.2.2 Visualization of CDOOC Clustering Algorithm
5.2.3 Generation of the Weights of Clusters
5.3 Comparative Analysis and Discussion
5.3.1 Comparison with Traditional Clustering Algorithms
5.3.2 Analysis of q
5.4 Conclusions
References
6 Minimum-Cost Consensus Model Considering Trust Loss
6.1 Problem Configuration
6.2 Consensus Measure and Consensus Cost Measure
6.3 Consensus-Reaching Iteration Based on Improved MCC
6.4 Numerical Experiment
6.5 IMCC Model Versus Different MCC Models
6.6 Analysis of the Effect of Voluntary Trust Loss on the CRP
6.7 Conclusions
References
7 Punishment-Driven Consensus-Reaching Model Considering Trust Loss
7.1 Problem Configuration
7.2 Computing the Consensus Degree
7.3 Logic for Solving CRP Using Trust Loss
7.4 Consensus Scenario Classification and Adjustment Strategies
7.5 Analysis of the Moderating Effect of Trust Loss on the CRP
7.6 Comparison with Other LSDM Consensus Models
7.7 Conclusions
References
8 Practical Applications
8.1 Application of TBA-Based Information Fusion Method in Coal Mine Safety Assessment
8.1.1 Case Description
8.1.2 Decision Process
8.2 Application of PDCRM in Social Capital Selection
8.2.1 Case Description
8.2.2 Using PDCRM to Solve the Problem
8.3 Application of CDOOC Clustering Method in Car-Sharing Service Provider Selection
References
9 Conclusions and Future Research Directions
9.1 Findings and Conclusions
9.2 Future Research Directions


Uncertainty and Operations Research

Zhijiao Du Sumin Yu

Social Network Large-Scale Decision-Making: Developing Decision Support Methods at Scale and Social Networks

Uncertainty and Operations Research
Editor-in-Chief: Xiang Li, Beijing University of Chemical Technology, Beijing, China
Series Editor: Xiaofeng Xu, Economics and Management School, China University of Petroleum, Qingdao, Shandong, China

Decision analysis based on uncertain data is natural in many real-world applications, and sometimes such an analysis is inevitable. In past years, researchers have proposed many efficient operations research models and methods, which have been widely applied to real-life problems in finance, management, manufacturing, supply chains, and transportation, among other fields. This book series aims to provide a global forum for advancing the analysis, understanding, development, and practice of uncertainty theory and operations research for solving economic, engineering, management, and social problems.


Zhijiao Du Business School Sun Yat-Sen University Shenzhen, China

Sumin Yu College of Management Shenzhen University Shenzhen, China

ISSN 2195-996X ISSN 2195-9978 (electronic)
Uncertainty and Operations Research
ISBN 978-981-99-7793-2 ISBN 978-981-99-7794-9 (eBook)
https://doi.org/10.1007/978-981-99-7794-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Paper in this product is recyclable.

To our dear daughter Yuxi.

Preface

As an extension of traditional group decision-making (GDM) in both the scale of decision-makers (DMs) and the presence of social networks, the concept of social network large-scale decision-making (SNLSDM) is increasingly used to model realistic decision scenarios. SNLSDM has been proposed and has attracted attention due to the following subjective and objective conditions: (i) current decision problems are becoming increasingly complex, and the traditional GDM paradigm involving small-scale participation often fails to produce effective solutions; (ii) the development of communication technologies provides hardware support for interaction among large-scale participants, and individual decisions are increasingly influenced by others within the networks to which they are connected.

SNLSDM is the fusion of LSDM and social network GDM and typically has the following characteristics: (i) the large number of participants leads to the dispersion of individual opinions; (ii) participants are connected through social networks and influence each other. Solving an SNLSDM problem therefore generally involves three core steps: the clustering process, the consensus-reaching process, and the selection process. All of these processes are designed to drive a highly consensual decision outcome, and research on SNLSDM largely revolves around them.

This book focuses on the issues of clustering and consensus building in SNLSDM problems and proposes a series of feasible solutions. The authors of this book are scholars engaged in theoretical, methodological, and applied research on complex large-scale decision-making and intelligent decision-making in big-data environments. Recently, they have focused on large-scale decision-making in social network contexts, which this book defines as SNLSDM. The book brings together the state-of-the-art research published by the authors on the topic of SNLSDM.
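The three-step resolution process described above (clustering, consensus reaching, selection) can be illustrated with a deliberately simplified numerical sketch. Everything below is an illustrative assumption rather than the book's actual models: plain k-means stands in for the clustering methods, clusters are weighted equally, consensus is measured as one minus the mean distance to the collective opinion, and the feedback step, thresholds, and data are arbitrary.

```python
import numpy as np

def kmeans(opinions, k, iters=50, seed=0):
    """Plain k-means over DMs' opinion vectors: reduces the large-scale
    group of DMs to k clusters (dimension reduction step)."""
    rng = np.random.default_rng(seed)
    centers = opinions[rng.choice(len(opinions), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each DM to the nearest cluster center.
        dists = np.linalg.norm(opinions[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = opinions[labels == j].mean(axis=0)
    return labels, centers

def consensus_degree(cluster_opinions, group_opinion):
    """Toy consensus measure: 1 minus the mean distance of cluster
    opinions to the collective opinion (an illustrative choice)."""
    return 1.0 - np.linalg.norm(cluster_opinions - group_opinion, axis=1).mean()

def reach_consensus(opinions, k=3, threshold=0.9, step=0.3, max_rounds=100):
    """Consensus-reaching process: iteratively move cluster opinions
    toward the collective opinion until the consensus threshold is met."""
    _, centers = kmeans(opinions, k)
    group = centers.mean(axis=0)  # equal cluster weights (assumption)
    for _ in range(max_rounds):
        group = centers.mean(axis=0)
        if consensus_degree(centers, group) >= threshold:
            break
        centers = centers + step * (group - centers)  # feedback adjustment
    return group

# Selection process: pick the alternative with the highest collective score.
opinions = np.random.default_rng(1).random((50, 4))  # 50 DMs scoring 4 alternatives
group = reach_consensus(opinions)
best = int(np.argmax(group))
```

Because each feedback round moves every cluster a fixed fraction of the way toward the (unchanged) collective opinion, the consensus degree rises monotonically and the loop is guaranteed to terminate; the book's models replace these placeholder rules with trust-constrained clustering and trust-loss-aware consensus mechanisms.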
To enhance readability and develop a coherent system of decision-making logic, the authors have further refined and deepened the research content and conclusions. The purpose of this book is to explore the impact of social networks on clustering and consensus building. Specifically, two decision attributes can be separated from social networks: trust constraints for managing the clustering process and trust loss for regulating the consensus-reaching process. We hope that any researcher interested in SNLSDM


in the fields of management science, computer science, information management, systems engineering, and related areas will find this book useful. The decision methods proposed here can be widely applied in economic and management decision scenarios such as risk assessment, consumer behavior prediction, recommendation algorithms, and selection decisions. We believe that graduate and undergraduate students engaged in SNLSDM research can also find useful inspiration in this book.

This book consists of nine chapters: Chap. 1 introduces the motivation and structure; Chap. 2 provides the necessary preliminary knowledge; Chap. 3 proposes a trust and behavior analysis-based fusion method to deal with structure-heterogeneous decision information; Chaps. 4 and 5 put forward two clustering methods; Chaps. 6 and 7 design two consensus models considering trust loss; Chap. 8 applies the proposed methods and models in practice; and Chap. 9 concludes the book with findings and future research directions.

The first author, Zhijiao Du, and the corresponding author, Sumin Yu, jointly determined the structure of the book. Dr. Zhijiao Du wrote Chaps. 1–5, and Assoc. Prof. Sumin Yu wrote Chaps. 6–9 and checked the logic and presentation of the book. We would like to express our sincere thanks to the colleagues in our research team, especially Prof. Jianqiang Wang, Prof. Xudong Lin, and Prof. Hanyang Luo, for their contributions and consultation on this book. We also thank Prof. Xuanhua Xu for his academic guidance, and we are deeply grateful to the production team of Springer Nature, especially Senior Editor Emily Zhang, for their support in publishing this book.

This book is supported by the National Natural Science Foundation of China (nos. 72301300, 71901151, 71971217), the Major Project for the National Natural Science Foundation of China (no. 72293574), and a project funded by the China Postdoctoral Science Foundation (no. 2023M733996). Due to the limitations of the authors' knowledge and time, omissions or deficiencies in the book are inevitable, and readers are welcome to point them out.

Shenzhen, China
September 2023

Zhijiao Du Sumin Yu

About the Authors

Zhijiao Du is a Faculty-Appointed Assistant Professor and Postdoctoral Fellow at the Business School, Sun Yat-Sen University, Shenzhen, China. Dr. Du received his B.S. degree in Management from Hainan University in 2012, his M.S. degree in Management from Central South University in 2016, and his Ph.D. degree in Management from Sun Yat-Sen University in 2022. He has chaired a project of the National Natural Science Foundation of China, a project funded by the China Postdoctoral Science Foundation, and a project of the Guangdong Provincial Philosophy and Social Science Planning program. His research interests include social network big data analysis, intelligent group decision-making, large-scale decision-making and consensus, digital supply chain finance, and corporate venture capital. He has published more than 20 academic articles in top journals and conference proceedings, including Decision Support Systems, IEEE Transactions on Fuzzy Systems, IEEE Transactions on Computational Social Systems, Information Fusion, Information Sciences, Knowledge-Based Systems, Computers and Industrial Engineering, and Group Decision and Negotiation, among others. One of his articles was selected for the ESI Global Database of Highly Cited Papers in Computer Science. He co-authored a book published by Springer. His h-index is 11, with more than 860 citations on Google Scholar. He serves as a reviewer for many top-tier international journals in areas related to decision analysis and supply chain management.


Sumin Yu is an Associate Professor, Distinguished Associate Research Fellow, and Master Supervisor at the College of Management, Shenzhen University, Shenzhen, China. Yu received her Ph.D. degree in Management from Central South University in 2018. Her research areas include electronic commerce, information management, decision theory and methods, large-scale decision-making and consensus, big data decision-making, social network analysis, and tourism management. She has chaired a project of the National Natural Science Foundation of China. She has published more than 30 international journal papers in top journals and conference proceedings, including IEEE Transactions on Fuzzy Systems, IEEE Transactions on Computational Social Systems, Information Fusion, Information Sciences, Knowledge-Based Systems, Applied Soft Computing, Computers and Industrial Engineering, Group Decision and Negotiation, and International Transactions in Operational Research, among others. She co-authored a book published by Springer. Her h-index is 15, with more than 1100 citations on Google Scholar. Four of her articles were selected for the ESI Global Highly Cited Papers Database, two of which were selected as hot papers. She serves as a reviewer for many top-tier international journals in areas related to fuzzy decision-making, soft computing, and large-scale decision-making.

Acronyms

CDOOC: Compatibility distance oriented off-center
CL: Cannot-link
CRP: Consensus-reaching process
DM(s): Decision-maker(s)
GDM: Group decision-making
IMCC: Improved minimum-cost consensus
LSDM: Large-scale decision-making
MCC: Minimum-cost consensus
ML: Must-link
OWA: Ordered weighted averaging
PDCRM: Punishment-driven consensus-reaching model
PLTS: Probabilistic linguistic term set
SH-MAGDM: Structure-heterogeneous multi-attribute group decision-making
SNA: Social network analysis
SNGDM: Social network group decision-making
SNLSDM: Social network large-scale decision-making
TCop-Kmeans: Trust Cop-Kmeans


Important Symbols

E = {e1, e2, …, eq}: A set of DMs
X = {x1, x2, …, xm}: A set of alternatives or an alternative pond
A = {a1, a2, …, an}: A set of attributes or an attribute pond
Vl = (vlij)m×n: Individual opinion provided by DM el
Pl = (plij)m×m: Individual preference provided by DM el
ol: Individual opinion without involving alternatives provided by DM el
ōl: Adjusted opinion of DM el
costl: Unit cost of adjusting the opinion of DM el
consensus(·): Measure of group consensus
Xl = {xl1, xl2, …, xlm}: Individual set of alternatives associated with DM el
Al = {al1, al2, …, aln}: Individual set of attributes associated with DM el
DCIl: Decision clarity index of DM el
sdlh: Similarity degree between individual opinions ol and oh
atsolh: Average trust degree of DMs el and eh
Con = (Conlh)q×q: Constraint matrix including CLs and MLs
Con= = (Con=lh)q×q: Constraint matrix including CLs only
cohk: Cohesion of cluster Ck
otsok: Overall trust degree of cluster Ck
CCk: Consensus cost of moving the opinion of Ck from hk to h̄k
TLCk: Trust loss cost
GCLt: Group consensus degree in the t-th iteration
GCL: Group consensus threshold
α: Type-α consensus constraint
TC, T̄C, β1, β2, β3, ρ: Parameters


List of Figures

Fig. 1.1 The structure of this book
Fig. 2.1 The evolutionary paths of SNLSDM
Fig. 3.1 Relationship between three cases of selection behaviors of attributes
Fig. 3.2 Rules for determining to which case a matrix element is judged to belong
Fig. 3.3 Logical diagram of selecting references to assist in the distance measures of SH evaluation information
Fig. 3.4 Weight distributions by using the proposed weight determination method under different decision scenarios
Fig. 3.5 Decision clarity indexes of DMs
Fig. 4.1 Visual social networks of four DMs in Example 4.1
Fig. 4.2 An example of cannot-link graph in a social network
Fig. 4.3 Distribution of function f(x, y)
Fig. 4.4 Weights of clusters as βi (i = 1, 2, 3) vary
Fig. 4.5 The relationships of logical statements described in Sect. 4.4.2
Fig. 4.6 Simulation results as the trust constraint threshold increases from 0 to 1
Fig. 5.1 Visualization of forming the first cluster
Fig. 5.2 Number of clusters when q varies
Fig. 6.1 Simulation results of total consensus costs obtained using different types of MCC models when setting different values of  and α
Fig. 6.2 Result of which set a cluster belongs to when ξ varies
Fig. 6.3 Results of some parameters related to the CRP when ξ varies
Fig. 7.1 The logical diagram of the moderating effect of trust loss on the CRP
Fig. 7.2 Graphic display of four consensus scenarios
Fig. 7.3 Graphic display of proposed adjustment strategies
Fig. 7.4 Visualization of the impact of trust loss on consensus cost, trust degree, and consensus degree
Fig. 8.1 Distribution map of coal mine accidents in Hunan, China (January 1–August 31, 2017)
Fig. 8.2 Visualization of an undirected social network
Fig. 8.3 Visualization of cannot-links with the condition that T̄C = 0.25

List of Tables

Table 3.1 Classification of the research on heterogeneous evaluation information
Table 3.2 Classification of DMs' selection behaviors regarding an attribute
Table 3.3 Father's selection behaviors regarding attributes and alternatives
Table 3.4 Summary of reference selection
Table 3.5 Weights of DMs obtained by using Example 3.3
Table 3.6 Weights of DMs obtained by using Example 3.4
Table 3.7 Initial rankings of individual alternatives
Table 3.8 The collective set of alternatives
Table 3.9 The collective set of attributes
Table 3.10 The R(c,0)(xi(c,0)) (i = 1, …, 10)
Table 3.11 Importance degrees of attributes A(c,0)
Table 3.12 Importance degrees of alternatives X(c,0)
Table 4.1 Clustering results using the TCop-Kmeans algorithm and traditional K-means algorithms
Table 4.2 Decision results using the TCop-Kmeans algorithm and traditional K-means algorithms
Table 4.3 Clustering structures under different values of K using the TCop-Kmeans algorithm
Table 4.4 Decision results under different values of K using the TCop-Kmeans algorithm
Table 5.1 Factors related to calculating the weights of the clusters
Table 5.2 Comparison results by using different clustering algorithms
Table 5.3 Comparison of the main advantages and disadvantages of four clustering algorithms
Table 6.1 Consensus result
Table 6.2 Comparison of optimal solutions using different types of MCC models
Table 6.3 Comparison of feedback coefficient using different types of MCC models
Table 7.1 Decision results using different LSDM consensus models
Table 8.1 Statistics of coal mine accidents in Hunan, China (January 1–August 31, 2017)
Table 8.2 Predetermined attribute pond
Table 8.3 Selection results of attributes and alternatives for e1
Table 8.4 The incomplete opinion of e1
Table 8.5 Selection results of attributes and alternatives for e2
Table 8.6 The incomplete opinion of e2
Table 8.7 Selection results of attributes and alternatives for e3
Table 8.8 The incomplete opinion of e3
Table 8.9 Selection results of attributes and alternatives for e4
Table 8.10 The incomplete opinion of e4
Table 8.11 Selection results of attributes and alternatives for e5
Table 8.12 The incomplete opinion of e5
Table 8.13 Selection results of attributes and alternatives for e6
Table 8.14 The incomplete opinion of e6
Table 8.15 Clustering results
Table 8.16 Clustering results when setting q = 2, q = 5 and CT = 0.77

Chapter 1

Introduction

Abstract This chapter provides a concise introduction to the book by explaining its motivation and structural outline. The motivation section describes the origins, scope, focus, and expectations of the book. The section “Who Should Read This Book and Why?” specifies potential readers, as well as the background knowledge and suggestions needed to read the book.

1.1 Motivation

Social network large-scale decision-making (SNLSDM) is a product of the development of the times. It is the latest decision-making scenario arising from the continuous evolution of software and hardware and the superposition of various subjective and objective factors. In recent years, the rapid development of communication technologies, represented by fifth-generation (5G) mobile communication technology, has enabled timely, cross-carrier information transmission and provided technical support for interaction among large-scale decision-makers (DMs). On the other hand, the explosive growth of social network services, represented by Twitter, Facebook, WeChat, etc., means that human beings no longer exist in isolation; most people are embedded, to a greater or lesser extent, in social networks and interact with each other.

In general, SNLSDM is considered a fusion of large-scale decision-making (LSDM) and social network group decision-making (SNGDM), so it combines the features of both. One of the most significant features of SNLSDM is the involvement of large-scale DMs, which is also a remarkable characteristic of LSDM. The number of DMs may range from dozens to hundreds to accommodate different decision-making situations. As a result, clustering is regarded as one of the key processes in solving an SNLSDM problem, through which the dimension of large-scale DMs is reduced. Clustering facilitates the interaction between individuals and the formation of unified opinions/preferences. This book will discuss clustering analysis involving various measurement attributes and explore the influence of different measurement attributes on clustering.

The second significant characteristic of SNLSDM, which is also the main feature of SNGDM, is that individual decisions are vulnerable to the influence of other

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023. Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_1


acquaintances/friends who are connected to the same network. In our opinion, the following important issues concerning SNLSDM need further exploration:
(1) Definition and complete problem configuration of SNLSDM. As the integration of LSDM and SNGDM, the definition, characteristics, problem configuration, and solution framework of SNLSDM need to be clarified and specified. These are the starting point and foundation of solving SNLSDM problems.
(2) Clustering analysis based on multiple measurement attributes. Traditional clustering methods typically rely on similarity measures between individual opinions/preferences. We argue that the social relationships among DMs are an important clustering measurement attribute in SNLSDM. Further, we consider that opinion similarity and social relationship are both important measurement attributes and play different roles in clustering. Therefore, it is necessary to propose a clustering method that includes multiple measurement attributes.
(3) The consensus-reaching process (CRP) involving multiple consensus costs, especially the trust loss cost. The influence of social relationships on the CRP should be further explored. We hold that a DM/cluster with high trust but low consensus can reduce the cost of opinion adjustment through active trust loss.
The authors of this book have long worked in the field of large-scale decision-making, particularly on theoretical and applied research in clustering analysis and consensus building. Recently, we turned our attention to LSDM in the social network environment, and a series of valuable achievements have been made. Therefore, this book brings together the most advanced research published by the authors on SNLSDM. To help different readers understand the research results and findings, we rearrange the existing results according to a coherent logic and add the necessary explanations. A separate chapter demonstrates how to apply the proposed methods and models to practical decision-making problems.
We strongly recommend that potential readers have a good foundation in fuzzy soft computing, traditional clustering algorithms, basic mathematics, and other related preliminaries. The case studies and numerical experiments in this book are mainly processed with MATLAB and Microsoft Excel.

1.2 Who Should Read This Book and Why?

We encourage researchers, students, and enterprises engaged in social network analysis, group decision-making, multi-agent collaborative decision-making, and large-scale data processing to pay attention to the proposals presented in this book. The research object of this book is the SNLSDM problem, which involves two important topics: clustering analysis and consensus building. SNLSDM can model many practical decision-making problems, such as democratic elections, public participation decision-making, and emergency decision-making. We hope that after reading this book, readers will gain the following insights:


(1) Mastering some novel clustering algorithms that consider multiple measurement attributes, and trying to use the proposed clustering methods to deal with real decision data, so as to obtain management insights and economic value analysis results.
(2) Understanding the internal mechanism of consensus building in the SNLSDM process, and paying attention to the moderating effect of social relationships on the consensus-reaching process.
(3) Applying the proposed clustering methods and consensus models to real-world cases.
For students, especially postgraduates, reading this book can help them master some commonly used statistics, simulation, and graphics software, such as MATLAB, Microsoft Excel, CorelDRAW Graphics Suite, and so forth. This book constructs a new decision-making scenario and proposes some novel solutions, which lay a theoretical foundation for subsequent related research. For enterprises, the proposals in this book provide a new perspective on exploring and exploiting the economic value of business data. Clustering algorithms based on multiple measurement attributes enable enterprises to parse data from various perspectives according to different objectives.

1.3 Chapter Overview

In general, this book selects three hot topics in SNLSDM: the fusion of structure-heterogeneous evaluation information, clustering analysis, and consensus building. Chapter 3 proposes a trust and behavior analysis-based information fusion method to deal with structure-heterogeneous individual opinions. Chapters 4 and 5 develop different clustering methods, all of which consider the influence of social relationships on clustering. Chapters 6 and 7 focus on the moderating effect of social relationships on consensus building. More importantly, Chap. 8 presents the application of the proposed methods and models in practical cases. The information fusion method proposed in Chap. 3 is the basis for Chaps. 4–7. The structure of this book is presented in Fig. 1.1. The detailed chapter overview is as follows.
• Chapter 2: Preliminary Knowledge. This chapter introduces the basic knowledge of large-scale decision-making, social network group decision-making, and the consensus-reaching process. It also expounds the problem configuration of social network large-scale decision-making.
• Chapter 3: Trust and Behavior Analysis-Based Structure-Heterogeneous Information Fusion. This chapter proposes a novel structure-heterogeneous information fusion method by analyzing the decision-makers’ selection behaviors regarding alternatives and attributes, and using social relationships as a reference for distance measurements.


Fig. 1.1 The structure of this book

• Chapter 4: Trust Cop-Kmeans Clustering Method. This chapter puts forward a semi-supervised clustering algorithm, which takes the similarity between individual opinions as the core element of clustering; however, whether any two decision-makers can be finally grouped together depends on trust constraints based on social networks.
• Chapter 5: Compatibility Distance Oriented Off-Center Clustering Method. In this chapter, the compatibility distances among decision-makers are obtained by integrating social networks and opinion distance measurements, and are used as the sole basis for clustering. The most important characteristic of the method is that it integrates distance measures and trust measures, and specifies the upper and lower limits on the number of decision-makers in a cluster.
• Chapter 6: Minimum-Cost Consensus Model Considering Trust Loss. This chapter focuses on the impact of trust loss on the operation of classical minimum-cost consensus and proposes an improved minimum-cost consensus model. We hold that a decision-maker/cluster with high trust and low consensus can reduce the consensus cost by voluntarily losing some trust.
• Chapter 7: Punishment-Driven Consensus-Reaching Model Considering Trust Loss. A punishment-driven consensus-reaching process is designed, which identifies four categories of consensus scenarios (high-high, high-low, low-high, low-low) through consensus measures and trust measures. Different adjustment strategies are established, and the moderating effect of trust loss on consensus reaching and consensus cost is investigated.
• Chapter 8: Practical Applications. In this chapter, the proposed methods and models are applied to practical decision-making scenarios, such as social capital selection in project construction, coal mine safety assessment, and car-sharing service provider selection.
• Chapter 9: Conclusions and Future Research Directions. Some important conclusions and future research directions regarding SNLSDM are presented.

Chapter 2

Preliminary Knowledge

Abstract This chapter introduces the basic knowledge regarding large-scale decision-making, social network group decision-making, and consensus building, which is fundamental to understanding the clustering methods and consensus models proposed in this book. We also describe the problem configuration of social network large-scale decision-making.

Keywords Large-scale decision-making (LSDM) · Social network group decision-making (SNGDM) · Social network large-scale decision-making (SNLSDM) · Clustering · Consensus-reaching process (CRP) · Minimum-cost consensus (MCC)

2.1 Large-Scale Decision-Making (LSDM)

Due to the rapid growth of data volume and the increasing complexity of decision-making processes, it is difficult for a single decision-maker (DM) to be competent in modern decision-making problems. As a consequence, the model of group decision-making (GDM) has arisen and is widely used in the selection and evaluation of alternatives in all walks of life. GDM aims to achieve a common solution to a decision problem involving no fewer than two DMs, and has become a basic research topic in the field of decision science. The concept of GDM extends to a series of different decision scenarios, such as GDM with complex information expressions [1–5], large-scale decision-making [6–11], GDM under social networks [12–15], multi-attribute/multi-criteria GDM [16–19], multi-objective GDM [20, 21], fuzzy/stochastic GDM [22–24], and mixtures of two or more of the above scenarios [25–29].

Traditional GDM is considered a group discussion and decision-making process involving a small number of DMs (e.g., no more than 5 people), and its complexity is usually not very high (e.g., a family’s house-purchasing decision). With the rapid development of information technology, people are faced with the explosive growth of information. A complex decision problem often involves many areas of expertise and objectively drives a large number of DMs to participate. On the other hand, the development of timely communication technology provides technical support and a physical carrier for the interactions between large-scale DMs. Large-scale decision-making (LSDM) has emerged under these subjective and objective, hardware and software conditions.

An LSDM problem is defined as a decision situation in which no fewer than twenty DMs participate to conclude a common solution [10, 30, 31]. So far, there are no strict and definitive requirements for the number of DMs. For specific types of decision-making problems, some studies consider that decision-making with more than eleven DMs can be regarded as LSDM [32, 33]. However, some cases in the literature involve more than 50 DMs (e.g., [34, 35]). Therefore, Wu and Xu [36] suggest that the number of DMs can vary from dozens to thousands according to the context, situation, and difficulty of the actual LSDM scenario.

At present, research on LSDM mainly focuses on the following topics:
• The fusion of complex evaluation information, including complex information expression forms such as linguistic variables and their extended forms, complex fuzzy numbers, stochastic variables, and so forth;
• Clustering analysis, which reduces the dimensionality of DMs by dividing the original large group into several smaller subgroups/clusters/communities;
• Consensus building, which aims to reduce differences of opinion, promote the convergence of individual opinions, and obtain a highly consensual decision result;
• Exploring the influence of social relationships on the decision-making process;
• Determining the weights of clusters and DMs;
• Considering the behaviors/attitudes of DMs towards the decision-making process;
• Developing decision support systems;
• Estimating and determining some important parameters, including the clustering threshold, consensus threshold, and unit cost in minimum-cost consensus models.
It is important to emphasize that all of the above topics share the same goals: facilitating interaction among DMs, increasing decision satisfaction, and ensuring the smooth implementation of the decision-making process. Generally, solving an LSDM problem requires the following processes [37–39]:
(1) Gathering and pre-processing decision information. Individual evaluation information (and possibly trust relationships) provided by the DMs is gathered and normalized as valid input for the following steps.
(2) Clustering process and weight generation. The original large group is divided into several smaller clusters according to single or multiple measurement attributes. Then, the weights of the clusters and their member DMs are calculated.
(3) Consensus-reaching process. A consensus model including a feedback mechanism is implemented to guide the convergence of individual opinions and improve the level of group consensus.
(4) Selection process. This process consists of two steps: aggregation and exploitation. The former obtains the group opinion by gathering the clusters’ opinions, while the latter calculates the optimal solution according to the obtained group opinion.
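As a concrete illustration of step (4), the sketch below aggregates cluster opinion matrices into a group opinion and ranks the alternatives. The function name and the simple additive weighting scheme are our own illustrative assumptions; the book's later chapters develop more elaborate aggregation operators.

```python
import numpy as np

def aggregate_and_select(cluster_ops, cluster_weights, attr_weights):
    """Selection process sketch: aggregation, then exploitation."""
    # Aggregation: weighted average of cluster opinion matrices
    # (each matrix is alternatives x attributes).
    G = sum(w * O for w, O in zip(cluster_weights, cluster_ops))
    # Exploitation: score each alternative by simple additive weighting
    # over the attributes (an assumed exploitation scheme).
    scores = G @ np.asarray(attr_weights)
    ranking = np.argsort(-scores)  # best alternative first
    return G, scores, ranking

# Two clusters evaluating 2 alternatives on 2 attributes.
O1 = np.array([[1.0, 0.0], [0.0, 1.0]])
O2 = np.array([[0.0, 1.0], [1.0, 0.0]])
G, scores, ranking = aggregate_and_select([O1, O2], [0.75, 0.25], [0.8, 0.2])
```

Here the first cluster carries three times the weight of the second, so the group matrix G leans towards O1 and the first alternative obtains the highest score.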

2.2 Social Network Group Decision-Making (SNGDM)

In recent years, with the rapid development of social networks, typical social applications such as Twitter, Facebook, and WeChat have become deeply embedded in people’s daily lives. Individual decisions tend to be susceptible to the influence of others connected with them in the same network. Several studies have shown that social networks play an important role in the decision-making process, such as providing comments and suggestions [40–42], sharing recommendations [43, 44], influencing interactions [45], and moderating the consensus-reaching process [26, 30].

Social network group decision-making (SNGDM) is defined as a decision situation in which a group of DMs connected to each other through a social network participate and choose the best solution from a given set of alternatives [46–48]. The most important difference between SNGDM and traditional GDM is that the DMs in SNGDM provide two types of decision information: the evaluation information about the alternatives (as in traditional GDM), and the social relationships between DMs. Therefore, the influence of social networks on the decision-making process has become one of the main research topics in SNGDM.

Social network analysis (SNA) is a useful methodology for studying the relationships between social entities, such as members of organizations, corporations, or nations [49, 50]. It can examine the structural and location properties of a social group, and guide community identification or network partitioning. A typical SNA model contains three main elements: the set of actors, the relationships themselves, and the attributes of the actors [51]. According to information transitivity, social relationships are classified into three types: direct, indirect, and irrelevant relationships [52]. Social relationships can be expressed in many forms; this book selects one of the most commonly used forms, trust, as the embodiment of social relationships. The different representation schemes in SNA are presented as follows:
• Sociomatrix:

      S = ( −  1  1  0 )
          ( 0  −  0  1 )
          ( 1  0  −  0 )
          ( 1  0  1  − )

• Directed graph: a node for each DM and a directed edge from el to eh for each trust statement (shown as a figure in the book).


• Algebraic: e1Re2, e1Re3, e2Re4, e3Re1, e4Re1, e4Re3, where elReh represents the trust el assigns to eh.

Note that the above sociomatrix is a binary relationship, which makes it difficult to model the uncertainty of relationship representation in a social network. Thus, this book follows one type of social network, called the trust social network, in which users explicitly express their statements of trust, such as trusting someone “very much” or “more or less”, rather than simply “trusting” or “not trusting”. The original definition of trust in the trust social network is presented below.

Definition 2.1 ([50] Trust function) A tuple λ = (t, d), with the first component t being a trust degree and the second component d being a distrust degree such that t, d ∈ [0, 1], is referred to as a trust function. Let KD = 1 − t − d be the knowledge degree representing the certainty about a DM’s statements of trust and distrust. Clearly, 0 ≤ KD ≤ 1. A smaller value of KD indicates more confidence in the corresponding statements of trust and distrust.

Wu et al. [50] defined the trust score as TS(λ) = (t − d + 1)/2. Clearly, 0 ≤ TS(λ) ≤ 1. A trust score represents the normalized dominance of the trust degree over the corresponding distrust degree of a trust function. Meanwhile, the order relation of trust functions was defined in [50], which can be used as a measurement attribute to calculate the weights of DMs. If the DM is assumed to have complete confidence in the statements of trust and distrust, that is, KD = 0, then a more concise definition of the trust sociomatrix follows.

Definition 2.2 ([53]) A fuzzy trust sociomatrix TSoM = (tso_lh)q×q on E is a relation in E × E with membership function μ_TSoM : E × E → [0, 1] and tso_lh = μ_TSoM(el, eh), where tso_lh denotes the trust degree that DM el assigns to DM eh. For simplicity, the fuzzy trust sociomatrix is referred to as the trust sociomatrix in this book.

The definition of trust degree in Definition 2.2 will be used throughout this book. This means that we suppose DMs have complete knowledge of their statements of trust degrees.

Example 2.1 The trust relationships among four DMs are gathered in the trust sociomatrix TSoM = (tso_lh)4×4, where

    TSoM = ( −    0.8  0.4  0   )
           ( 0.3  −    0.7  0.3 )
           ( 0.2  0.7  −    0.5 )
           ( 1    0.4  0.9  −   )
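The quantities of Definition 2.1 are easy to compute directly; the sketch below also derives a possible DM weighting from Example 2.1's sociomatrix. Treating a DM's average received trust as a weight basis is our illustrative assumption, not a method prescribed in this chapter.

```python
import numpy as np

def trust_score(t, d):
    """Trust score TS(lambda) = (t - d + 1)/2 from Wu et al. [50]."""
    return (t - d + 1) / 2

def knowledge_degree(t, d):
    """KD = 1 - t - d; a smaller KD means more confident statements."""
    return 1 - t - d

# Example 2.1's trust sociomatrix (undefined diagonal stored as NaN).
TSoM = np.array([[np.nan, 0.8, 0.4, 0.0],
                 [0.3, np.nan, 0.7, 0.3],
                 [0.2, 0.7, np.nan, 0.5],
                 [1.0, 0.4, 0.9, np.nan]])

# Average trust each DM receives (column means, ignoring the diagonal):
# one possible measurement attribute for weighting DMs (our assumption).
received = np.nanmean(TSoM, axis=0)
weights = received / received.sum()
```

On this example, e3 receives the highest average trust (0.4, 0.7, and 0.9 from the others) and would obtain the largest weight under this heuristic.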

Example 2.1 shows a complete trust sociomatrix, defined as a sociomatrix in which every DM in the group provides a trust degree for every other DM. In some real-world decisions, a DM may not be able to provide an explicit trust degree for another DM. In this case, trust propagation can be achieved through an indirect chain of trusted third parties. Trust relationships can be divided into three categories, each contributing differently to obtaining a complete trust sociomatrix [49, 52]:


• Direct relationship. There is a directed link from el to eh in the social network. This indicates that el provides an explicit trust degree to eh.
• Indirect relationship. There is no directed link from el to eh in the social network, but el can build a new link to eh through a third party ek. In this case, el typically does not know eh, but reliable relationship information from el to eh can be obtained from the direct relationships each has with the third party.
• Irrelevant relationship. There is neither a direct nor an indirect relationship between el and eh. In this case, el is considered to have an irrelevant relationship with eh. Du et al. [49] set both the trust and distrust degrees between them to 0.5, where the value 0.5 represents a neutral attitude towards trust.
It is generally accepted that any trust sociomatrix contains at least one of the three types of relationships mentioned above.
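The three cases above can be sketched as a completion routine for a partially filled sociomatrix. The min–max propagation operator through a single intermediary and the function name are our illustrative assumptions (the literature uses several propagation operators); irrelevant pairs default to the neutral degree 0.5 as in [49].

```python
import numpy as np

def complete_trust_sociomatrix(T):
    """Fill missing (NaN) off-diagonal trust degrees.

    Direct degrees are kept; indirect ones are propagated through the
    best single intermediary with an assumed min-max operator; pairs
    with no chain are irrelevant and receive the neutral degree 0.5.
    """
    T = np.asarray(T, dtype=float).copy()
    q = T.shape[0]
    for l in range(q):
        for h in range(q):
            if l != h and np.isnan(T[l, h]):
                chains = [min(T[l, k], T[k, h]) for k in range(q)
                          if k not in (l, h)
                          and not np.isnan(T[l, k]) and not np.isnan(T[k, h])]
                T[l, h] = max(chains) if chains else 0.5
    return T

nan = float("nan")
T = complete_trust_sociomatrix([[nan, 0.9, nan],
                                [nan, nan, 0.6],
                                [nan, nan, nan]])
```

In this toy network, e1 trusts e2 (0.9) and e2 trusts e3 (0.6), so the indirect degree from e1 to e3 becomes min(0.9, 0.6) = 0.6, while pairs with no chain fall back to 0.5.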

2.3 Consensus-Reaching Process and Minimum-Cost Consensus

The consensus-reaching process (CRP) is a stage that must be implemented in almost all GDM problems, through which high-consensus decision results recognized by the majority can be achieved. The CRP is a group discussion and iteration process designed to converge individual opinions into a group opinion. Implementing a CRP is even more important and urgent in LSDM, where the opinions provided by large-scale DMs are prone to wide variation and conflict.

Consensus was originally defined as a state of complete agreement among DMs. However, achieving unanimous agreement is unnecessary and costly in practical decisions. As a consequence, the concept of ‘soft consensus’ was proposed, which requires most (but perhaps not all) of the DMs to agree on the most important alternatives [54–56]. The general scheme of a CRP is illustrated below [37, 38]:
(1) Gathering opinions. The individual opinions and the group opinion are gathered.
(2) Consensus measure. Individual consensus levels and the group consensus level are calculated according to distance or similarity measures. Consensus measures fall into two categories: the distance to the group opinion, and the distance among DMs [10, 34, 57].
(3) Consensus control. If the obtained group consensus is greater than a preset consensus threshold, the selection process proceeds; otherwise, a consensus iteration is implemented.
(4) Consensus iteration. Usually a feedback mechanism is activated to determine the direction and degree of adjustment of the identified individual opinions. Afterwards, another round of iteration starts by gathering opinions again.
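The four steps can be sketched as a loop over numerical opinions in [0, 1]. The consensus measure 1 − |o_l − ō| and the feedback rule that moves the least-consensual DM a fraction θ towards the group opinion are common simple choices, used here only as assumptions for illustration.

```python
import numpy as np

def crp(opinions, weights, alpha, theta=0.3, max_rounds=100):
    """Minimal CRP sketch: measure consensus and iterate until the
    threshold alpha is met or the round budget is exhausted."""
    o = np.asarray(opinions, dtype=float)
    w = np.asarray(weights, dtype=float)
    for rounds in range(max_rounds):
        g = w @ o                    # (1) gather the group opinion
        levels = 1 - np.abs(o - g)   # (2) consensus measure in [0, 1]
        if levels.min() >= alpha:    # (3) consensus control
            return o, g, rounds
        k = int(np.argmin(levels))   # (4) feedback: move the least-
        o[k] += theta * (g - o[k])   #     consensual DM towards g
    return o, w @ o, max_rounds

o, g, rounds = crp([0.1, 0.9, 0.5], [1/3, 1/3, 1/3], alpha=0.8)
```

With α = 0.8, the loop repeatedly pulls the outlying opinions (0.1 and 0.9) towards the mean until every opinion is within 0.2 of the group opinion.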


consensus meets the requirements. It can be classified into two categories: the distance to the collective opinion, and the distance among DMs [10, 60]. Consensus iteration usually introduces a feedback mechanism through which individual opinions converge to the group opinion. Soft consensus has been applied in a variety of CRPs, which can be roughly classified as follows:
• CRPs with complex evaluation information [61–64],
• CRPs based on consistency and consensus measures [60, 65–68],
• CRPs in LSDM problems [8, 69–73],
• CRPs in SNGDM problems [74–77],
• CRPs considering the behaviors/attitudes of DMs [26, 58, 78–81],
• CRPs with minimum cost or adjustments [82–86],
• Consensus support systems [87–89],
• CRPs with other decision technologies [90–92],
• Practical applications of CRPs [32, 93–98].

In many decision situations, opinion adjustments imply cost and resource consumption; however, resources for consensus building are generally limited, especially in resource-constrained GDM problems. Consequently, minimum-adjustment and minimum-cost consensus models have been developed and applied in various GDM scenarios to improve consensus efficiency. Ben-Arieh and Easton [99] proposed the concept of consensus cost and developed a linear-time algorithm to obtain the minimum total cost, but did not present a specific optimization model or consider the aggregation function. Dong et al. [100] presented a consensus model with minimum adjustments in a linguistic environment, which can be extended by using different aggregation functions. Zhang et al. [101] first presented a specific optimization model for the minimum-cost consensus (MCC) in [99], and then proposed a more comprehensive MCC model by introducing aggregation functions.

Definition 2.3 ([101]) Let (o1, . . . , oq) be the original opinions given by a set of DMs E = {e1, . . . , eq}. Suppose that after the CRP, the DMs’ opinions are adjusted to (ō1, . . . , ōq), and the group opinion ō is obtained based on the adjusted opinions. Let (cost1, . . . , costq) be the cost of moving each DM’s opinion one unit. The parameter ε is the maximum acceptable distance of each DM to the group opinion. The MCC model based on a linear cost function is given as follows:

q ∑

costl · |ol − ol |

l=1

{

.

s.t.

o = F(o1 , . . . , oq ) |ol − o| ≤ ∈, l = 1, 2, . . . , q

(2.1)

where F is an aggregation function.

Minimum-adjustment consensus and minimum-cost consensus are two very similar concepts if the unit cost of opinion adjustment is not considered. In this book, we use the term minimum-cost consensus to highlight the cost changes caused by the adjustment of opinions. Note, however, that this book does not examine how the unit cost is determined; for issues related to the unit cost, please refer to [80].

It can be found that Eq. (2.1) only involves type-ε consensus constraints, i.e., |ō_l − ō| ≤ ε for any el ∈ E. Type-ε constraints are the most common and basic consensus constraints, and they have attracted great attention in the field of decision science. Gong et al. [102] proposed two types of MCC models concerning all the individuals and one particular individual, respectively. Wu et al. [48] developed a minimum-adjustment-cost feedback mechanism to improve consensus in SNGDM problems. Zhang et al. [103] designed a consensus mechanism with maximum-return modifications and minimum-cost feedback from the perspective of game theory. Nevertheless, these models include only one consensus measure, namely the distance of each DM to the collective opinion, and ignore the minimum agreement among all DMs. Labella et al. [104] proposed a comprehensive MCC model considering the group consensus:

q ∑

costl · |ol − ol |

l=1

.

⎧ ⎨ o = F(o1 , . . . , oq ) |ol − o| ≤ ∈, l = 1, 2, . . . , q s.t. ⎩ consensus(o1 , . . . , oq ) ≥ α

(2.2)

where consensus(·) is a function that obtains the group consensus, and α ∈ [0, 1] is a consensus threshold fixed a priori.

We conclude that most traditional MCC models fail to account for situations in which DMs can voluntarily sacrifice some trust to reduce their own adjustment costs in the context of social networks. To this end, this book provides several consensus models that consider the moderating effect of trust loss on the CRP.
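Model (2.1) becomes a linear program once the absolute values are linearized with auxiliary variables δ_l ≥ |ō_l − o_l| and F is instantiated; below we take F to be a weighted average. The solver choice (scipy) and the function name are our assumptions, so this is a sketch rather than the implementation used in the book.

```python
import numpy as np
from scipy.optimize import linprog

def mcc_weighted_average(o, cost, w, eps):
    """Solve MCC model (2.1) with F = weighted average as an LP.

    Variables: adjusted opinions obar (q) and deviations delta (q);
    minimize sum_l cost_l * delta_l subject to delta_l >= |obar_l - o_l|
    and |obar_l - w.obar| <= eps.
    """
    o, cost, w = map(np.asarray, (o, cost, w))
    q = len(o)
    I = np.eye(q)
    Z = np.zeros((q, q))
    M = I - np.tile(w, (q, 1))             # rows give obar_l - w.obar
    A_ub = np.vstack([np.hstack([I, -I]),   #  obar_l - delta_l <= o_l
                      np.hstack([-I, -I]),  # -obar_l - delta_l <= -o_l
                      np.hstack([M, Z]),    #  obar_l - w.obar  <= eps
                      np.hstack([-M, Z])])  # -(obar_l - w.obar) <= eps
    b_ub = np.concatenate([o, -o, eps * np.ones(q), eps * np.ones(q)])
    c = np.concatenate([np.zeros(q), cost])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * q + [(0, None)] * q)
    obar = res.x[:q]
    return obar, float(w @ obar), float(res.fun)

obar, group, total_cost = mcc_weighted_average(
    o=[0.0, 10.0], cost=[1.0, 1.0], w=[0.5, 0.5], eps=1.0)
```

For two DMs at 0 and 10 with equal weights and ε = 1, the adjusted opinions must end up within 2 units of each other, so the minimum total movement (and cost, at unit prices) is 8.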

2.4 Proposed Social Network Large-Scale Decision-Making Scenarios

Recently, social network large-scale decision-making (SNLSDM) has been attracting increasing attention in the field of decision science. Figure 2.1 shows two evolutionary paths of SNLSDM: it is the expansion of LSDM into the social network environment, or the expansion of SNGDM in the number of DMs. An SNLSDM problem is defined as a decision situation in which large-scale DMs participate and interact in a social network to achieve a common solution. Naturally, SNLSDM inherits the following characteristics of LSDM and SNGDM:
• The decision organization consists of a large number of DMs (usually more than 20).


Fig. 2.1 The evolutionary paths of SNLSDM

• Individual opinions tend to diverge and conflict. Therefore, it is necessary to implement a CRP, through which individual opinions continuously move towards the group opinion.
• The DMs are linked together by a social network through which individual opinions influence each other.
Each of these characteristics has promoted the study of SNLSDM. For example, the first feature makes clustering a key step in solving SNLSDM, by which a large group is divided into several smaller groups; doing so achieves dimensionality reduction for large-scale DMs and addresses the scalability problem of LSDM. The second feature highlights the importance of the CRP in dealing with SNLSDM, while the third feature calls for exploring the role of social relationships in solving SNLSDM.
A typical SNLSDM configuration contains two aspects of decision information:
(1) Evaluation information. DMs are required to provide evaluations of a set of alternatives with respect to a set of attributes.
(2) Social relationships. DMs provide their trust degrees in each other. By gathering individual trust degrees, a trust sociomatrix is obtained.
Some significant achievements have been made in SNLSDM research. Lu et al. [105] applied a minimum-cost model based on robust optimization to address the robust consensus problem. Gai et al. [27] proposed a joint feedback strategy based on harmony degree to help multiple non-consensus DMs adjust their preferences. Du et al. [49] developed a trust-similarity analysis-based clustering method that uses opinion similarity and social relationships as the measurement attributes for clustering.
This book mainly focuses on the clustering analysis and consensus building of SNLSDM, and proposes the corresponding models and methods. In terms of clustering analysis, we will design several clustering algorithms involving multiple measurement attributes.
In terms of consensus building, we will propose consensus-reaching models that analyze the conditions under which trust loss is used and explore the moderating effect of trust loss on the CRP.

References


1. Du, Z., Yu, S., & Chen, Z. (2022). Enhanced minimum-cost conflict risk mitigation-based FMEA for risk assessment in a probabilistic linguistic context. Computers & Industrial Engineering, 174, 108789.
2. Hu, Z., & Lin, J. (2022). An integrated multicriteria group decision making methodology for property concealment risk assessment under Z-number environment. Expert Systems with Applications, 205, 117369.
3. Tao, Z., Liu, X., Chen, H., Liu, J., & Guan, F. (2020). Linguistic Z-number fuzzy soft sets and its application on multiple attribute group decision making problems. International Journal of Intelligent Systems, 35(1), 105–124.
4. Xiao, J., Wang, X., & Zhang, H. (2020). Managing personalized individual semantics and consensus in linguistic distribution large-scale group decision making. Information Fusion, 53, 20–34.
5. Zhang, Z., Guo, C., & Martínez, L. (2016). Managing multigranular linguistic distribution assessments in large-scale multiattribute group decision making. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(11), 3063–3076.
6. Chao, X., Kou, G., Peng, Y., & Viedma, E. H. (2021). Large-scale group decision-making with non-cooperative behaviors and heterogeneous preferences: An application in financial inclusion. European Journal of Operational Research, 288(1), 271–293.
7. Chen, X., Zhang, W., Xu, X., & Cao, W. (2022). A public and large-scale expert information fusion method and its application: Mining public opinion via sentiment analysis and measuring public dynamic reliability. Information Fusion, 78, 71–85.
8. Labella, Á., Liu, Y., Rodríguez, R. M., & Martínez, L. (2018). Analyzing the performance of classical consensus models in large scale group decision making: A comparative study. Applied Soft Computing, 67, 677–690.
9. Tang, M., Liao, H., Xu, J., Streimikiene, D., & Zheng, X. (2020). Adaptive consensus reaching process with hybrid strategies for large-scale group decision making. European Journal of Operational Research, 282(3), 957–971.
10. Xu, X. H., Du, Z. J., Chen, X. H., & Cai, C. G. (2019). Confidence consensus-based model for large-scale group decision making: A novel approach to managing non-cooperative behaviors. Information Sciences, 477, 410–427.
11. Yu, S. M., Du, Z. J., & Zhang, X. Y. (2022). Clustering analysis and punishment-driven consensus-reaching process for probabilistic linguistic large-group decision-making with application to car-sharing platform selection. International Transactions in Operational Research, 29(3), 2002–2029.
12. Dong, Y., Zha, Q., Zhang, H., & Herrera, F. (2020). Consensus reaching and strategic manipulation in group decision making with trust relationships. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(10), 6304–6318.
13. Liu, X., Xu, Y., Montes, R., & Herrera, F. (2019). Social network group decision making: Managing self-confidence-based consensus model with the dynamic importance degree of experts and trust-based feedback mechanism. Information Sciences, 505, 215–232.
14. Ureña, R., Kou, G., Dong, Y., Chiclana, F., & Herrera-Viedma, E. (2019). A review on trust propagation and opinion dynamics in social networks and group decision making frameworks. Information Sciences, 478, 461–475.
15. Zhang, Z., Gao, Y., & Li, Z. (2020). Consensus reaching for social network group decision making by considering leadership and bounded confidence. Knowledge-Based Systems, 204, 106240.
16. Biswas, P., Pramanik, S., & Giri, B. C. (2016). TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment. Neural Computing and Applications, 27(3), 727–737.
17. Feng, B., & Lai, F. (2014). Multi-attribute group decision making with aspirations: A case study. Omega, 44, 136–147.

16

2 Preliminary Knowledge

18. Jiang, H., & Hu, B. Q. (2021). A novel three-way group investment decision model under intuitionistic fuzzy multi-attribute group decision-making environment. Information Sciences, 569, 557–581. 19. Lin, M., Xu, Z., Zhai, Y., & Yao, Z. (2018). Multi-attribute group decision-making under probabilistic uncertain linguistic environment. Journal of the Operational Research Society, 69, 157–170. 20. Hadas, Y., & Nahum, O. E. (2016). Urban bus network of priority lanes: A combined multiobjective, multi-criteria and group decision-making approach. Transport Policy, 52, 186–196. 21. Singh, R. K., Choudhury, A. K., Tiwari, M. K., & Shankar, R. (2007). Improved Decision Neural Network (IDNN) based consensus method to solve a multi-objective group decision making problem. Advanced Engineering Informatics, 21(3), 335–348. 22. Mousavi, S. M., Jolai, F., & Tavakkoli-Moghaddam, R. (2013). A fuzzy stochastic multiattribute group decision-making approach for selection problems. Group Decision and Negotiation, 22(2), 207–233. 23. Wang, Y. M., & Elhag, T. M. (2007). A fuzzy group decision making approach for bridge risk assessment. Computers & Industrial Engineering, 53(1), 137–148. 24. Wang, Z., & Wang, Y. (2020). Prospect theory-based group decision-making with stochastic uncertainty and 2-tuple aspirations under linguistic assessments. Information Fusion, 56, 81–92. 25. Chen, Z., & Yang, W. (2011). A new multiple attribute group decision making method in intuitionistic fuzzy setting. Applied Mathematical Modelling, 35(9), 4424–4437. 26. Du, Z. J., Yu, S. M., Luo, H. Y., & Lin, X. D. (2021). Consensus convergence in large-group social network environment: Coordination between trust relationship and opinion similarity. Knowledge-Based Systems, 217, 106828. 27. Gai, T., Cao, M., Cao, Q., Wu, J., Yu, G., & Zhou, M. (2020). A joint feedback strategy for consensus in large-scale group decision making under social network. Computers & Industrial Engineering, 147, 106626. 28. Tian, Z. 
P., Nie, R. X., & Wang, J. Q. (2019). Social network analysis-based consensussupporting framework for large-scale group decision-making with incomplete interval type-2 fuzzy information. Information Sciences, 502, 446–471. 29. Zhang, Z., Yu, W., Martínez, L., & Gao, Y. (2019). Managing multigranular unbalanced hesitant fuzzy linguistic information in multiattribute large-scale group decision making: A linguistic distribution-based approach. IEEE Transactions on Fuzzy Systems, 28(11), 2875– 2889. 30. Yu, S. M., Du, Z. J., Zhang, X., Luo, H., & Lin, X. (2022). Trust Cop-Kmeans clustering analysis and minimum-cost consensus model considering voluntary trust loss in social network large-scale decision-making. IEEE Transactions on Fuzzy Systems, 30(7), 2634–2648. 31. Zhong, X., & Xu, X. (2020). Clustering-based method for large group decision making with hesitant fuzzy linguistic information: Integrating correlation and consensus. Applied Soft Computing, 87, 105973. 32. Xu, X. H., Du, Z. J., & Chen, X. H. (2015). Consensus model for multi-criteria large-group emergency decision making considering non-cooperative behaviors and minority opinions. Decision Support Systems, 79, 150–160. 33. Xu, X. H., Du, Z. J., Chen, X. H., & Zhou, Y. J. (2017). Conflict large-group emergency decision-making method while protecting minority opinions. Journal of Management Sciences in China, 20(11), 10–23. 34. Palomares, I., Martínez, L., & Herrera, F. (2014). A consensus model to detect and manage noncooperative behaviors in large-scale group decision making. IEEE Transactions on Fuzzy Systems, 22(3), 516–530. 35. Wu, T., Liu, X., & Liu, F. (2018). An interval type-2 fuzzy TOPSIS model for large scale group decision making problems with social network information. Information Sciences, 432, 392–410. 36. Wu, Z., & Xu, J. (2018). A consensus model for large-scale group decision making with hesitant fuzzy information and changeable clusters. Information Fusion, 41, 217–231.

References

17

37. Rodríguez, R. M., Labella, Á., De Tré, G., & Martínez, L. (2018). A large scale consensus reaching process managing group hesitation. Knowledge-Based Systems, 159, 86–97. 38. Yu, S. M., Du, Z. J., Zhang, X. Y., Luo, H. Y., & Lin, X. D. (2021). Punishment-driven consensus reaching model in social network large-scale decision-making with application to social capital selection. Applied Soft Computing, 113, 107912. 39. Zha, Q., Liang, H., Kou, G., Dong, Y., & Yu, S. (2019). A feedback mechanism with bounded confidence-based optimization approach for consensus reaching in multiple attribute largescale group decision-making. IEEE Transactions on Computational Social Systems, 6(5), 994–1006. 40. Banerjee, S., Bhattacharyya, S., & Bose, I. (2017). Whose online reviews to trust? Understanding reviewer trustworthiness and its impact on business. Decision Support Systems, 96, 17–26. 41. Yu, S. M., Wang, J., & Wang, J. Q. (2017). An interval type-2 fuzzy likelihood-based MABAC approach and its application in selecting hotels on a tourism website. International Journal of Fuzzy Systems, 19(1), 47–61. 42. Yu, S. M., Wang, J., Wang, J. Q., & Li, L. (2018). A multi-criteria decision-making model for hotel selection with linguistic distribution assessments. Applied Soft Computing, 67, 741–755. 43. Deng, S., Huang, L., Xu, G., Wu, X., & Wu, Z. (2016). On deep learning for trust-aware recommendations in social networks. IEEE Transactions on Neural Networks and Learning Systems, 28(5), 1164–1177. 44. Eirinaki, M., Louta, M. D., & Varlamis, I. (2013). A trust-aware system for personalized user recommendations in social networks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(4), 409–421. 45. Dong, Y., Ding, Z., Martínez, L., & Herrera, F. (2017). Managing consensus based on leadership in opinion dynamics. Information Sciences, 397, 187–205. 46. Dong, Y., Zha, Q., Zhang, H., Kou, G., Fujita, H., Chiclana, F., & Herrera-Viedma, E. (2018). 
Consensus reaching in social network group decision making: Research paradigms and challenges. Knowledge-Based Systems, 162, 3–13. 47. Jin, F., Cao, M., Liu, J., Martínez, L., & Chen, H. (2021). Consistency and trust relationshipdriven social network group decision-making method with probabilistic linguistic information. Applied Soft Computing, 103, 107170. 48. Wu, J., Dai, L. F., Chiclana, F., Fujita, H., & Herrera-Viedma, E. (2018). A minimum adjustment cost feedback mechanism based consensus model for group decision making under social network with distributed linguistic trust. Information Fusion, 41, 232–242. 49. Du, Z. J., Luo, H. Y., Lin, X. D., & Yu, S. M. (2020). A trust-similarity analysis-based clustering method for large-scale group decision-making under a social network. Information Fusion, 63, 13–29. 50. Wu, J., Chiclana, F., & Herrera-Viedma, E. (2015). Trust based consensus model for social network in an incomplete linguistic information context. Applied Soft Computing, 35, 827– 839. 51. Wu, J., Chiclana, F., Fujita, H., & Herrera-Viedma, E. (2017). A visual interaction consensus model for social network group decision making with trust propagation. Knowledge-Based Systems, 122, 39–50. 52. Liu, B., Zhou, Q., Ding, R. X., Palomares, I., & Herrera, F. (2019). Large-scale group decision making model based on social network analysis: Trust relationship-based conflict detection and elimination. European Journal of Operational Research, 275(2), 737–754. 53. Zhang, H., Palomares, I., Dong, Y., & Wang, W. (2018). Managing non-cooperative behaviors in consensus-based multiple attribute group decision making: An approach based on social network analysis. Knowledge-Based Systems, 162, 29–45. 54. Cabrerizo, F. J., Pérez, I. J., Chiclana, F., Herrera-Viedma, E. (2017). Group decision making: Consensus approaches based on soft consensus measures. In Fuzzy Sets, Rough Sets, Multisets and Clustering (pp. 307–321). Springer, Cham. 55. 
Herrera-Viedma, E., Cabrerizo, F. J., Kacprzyk, J., & Pedrycz, W. (2014). A review of soft consensus models in a fuzzy environment. Information Fusion, 17, 4–13.

18

2 Preliminary Knowledge

56. Kacprzyk, J., & Fedrizzi, M. (1988). A “soft” measure of consensus in the setting of partial (fuzzy) preferences. European Journal of Operational Research, 34(3), 316–325. 57. Zhang, X., Wang, X., Yu, S., Wang, J., & Wang, T. (2018). Location selection of offshore wind power station by consensus decision framework using picture fuzzy modelling. Journal of Cleaner Production, 202, 980–992. 58. Du, Z. J., Yu, S. M., & Xu, X. H. (2020). Managing noncooperative behaviors in largescale group decision-making: Integration of independent and supervised consensus-reaching models. Information Sciences, 531, 119–138. 59. Wu, Q., Liu, X., Qin, J., & Zhou, L. (2021). Multi-criteria group decision-making for portfolio allocation with consensus reaching process under interval type-2 fuzzy environment. Information Sciences, 570, 668–688. 60. Wu, Z., & Xu, J. (2016). Managing consistency and consensus in group decision making with hesitant fuzzy linguistic preference relations. Omega, 65, 28–40. 61. Du, Z. J., Chen, Z. X., & Yu, S. M. (2021). Improved failure mode and effect analysis: Implementing risk assessment and conflict risk mitigation with probabilistic linguistic information. Mathematics, 9(11), 1266. 62. Gou, X., Xu, Z., Liao, H., & Herrera, F. (2020). Consensus model handling minority opinions and noncooperative behaviors in large-scale group decision-making under double hierarchy linguistic preference relations. IEEE Transactions on Cybernetics, 51(1), 283–296. 63. Wang, P., Xu, X., Huang, S., & Cai, C. (2018). A linguistic large group decision making method based on the cloud model. IEEE Transactions on Fuzzy Systems, 26(6), 3314–3326. 64. Xie, W., Ren, Z., Xu, Z., & Wang, H. (2018). The consensus of probabilistic uncertain linguistic preference relations and the application on the virtual reality industry. Knowledge-Based Systems, 162, 14–28. 65. Gong, Z., Guo, W., Herrera-Viedma, E., Gong, Z., & Wei, G. (2020). 
Consistency and consensus modeling of linear uncertain preference relations. European Journal of Operational Research, 283(1), 290–307. 66. Liu, N., He, Y., & Xu, Z. (2019). A new approach to deal with consistency and consensus issues for hesitant fuzzy linguistic preference relations. Applied Soft Computing, 76, 400–415. 67. Xu, Y., Wen, X., Sun, H., & Wang, H. (2018). Consistency and consensus models with local adjustment strategy for hesitant fuzzy linguistic preference relations. International Journal of Fuzzy Systems, 20(7), 2216–2233. 68. Zhang, G., Dong, Y., & Xu, Y. (2014). Consistency and consensus measures for linguistic preference relations based on distribution assessments. Information Fusion, 17, 46–55. 69. Du, Z. J., Yu, S. M., Cai, C. G. (2023). Constrained community detection and multi-stage multi-cost consensus in social network large-scale decision-making. IEEE Transactions on Computational Social Systems. https://doi.org/10.1109/TCSS.2023.3265701 70. Li, C. C., Dong, Y., & Herrera, F. (2018). A consensus model for large-scale linguistic group decision making with a feedback recommendation based on clustered personalized individual semantics and opposing consensus groups. IEEE Transactions on Fuzzy Systems, 27(2), 221– 233. 71. Liu, F., Zhang, J., & Liu, T. (2020). A PSO-algorithm-based consensus model with the application to large-scale group decision-making. Complex & Intelligent Systems, 6(2), 287–298. 72. Liu, P., Zhang, K., Wang, P., & Wang, F. (2022). A clustering-and maximum consensusbased model for social network large-scale group decision making with linguistic distribution. Information Sciences, 602, 269–297. 73. Yu, S. M., Du, Z. J. (2022). Large-scale group decision-making: State-to-the-art clustering and consensus paths. Singapore: Springer. https://doi.org/10.1007/978-981-16-7889-9 74. Cheng, D., Cheng, F., Zhou, Z., & Wu, Y. (2020). Reaching a minimum adjustment consensus in social network group decision-making. 
Information Fusion, 59, 30–43. 75. Kamis, N. H., Chiclana, F., & Levesley, J. (2019). An influence-driven feedback system for preference similarity network clustering based consensus group decision making model. Information Fusion, 52, 257–267.

References

19

76. Wu, J., Cao, M., Chiclana, F., Dong, Y., & Herrera-Viedma, E. (2020). An optimal feedback model to prevent manipulation behavior in consensus under social network group decision making. IEEE Transactions on Fuzzy Systems, 29(7), 1750–1763. 77. Wu, J., Chang, J., Cao, Q., & Liang, C. (2019). A trust propagation and collaborative filtering based method for incomplete information in social network group decision making with type-2 linguistic trust. Computers & Industrial Engineering, 127, 853–864. 78. Dong, Y., Zhang, H., & Herrera-Viedma, E. (2016). Integrating experts’ weights generated dynamically into the consensus reaching process and its applications in managing noncooperative behaviors. Decision Support Systems, 84, 1–15. 79. Wu, J., Sun, Q., Fujita, H., & Chiclana, F. (2019). An attitudinal consensus degree to control the feedback mechanism in group decision making with different adjustment cost. KnowledgeBased Systems, 164, 265–273. 80. Xu, W., Chen, X., Dong, Y., & Chiclana, F. (2021). Impact of decision rules and noncooperative behaviors on minimum consensus cost in group decision making. Group Decision and Negotiation, 30(6), 1239–1260. 81. Zhang, C., Zhao, M., Zhao, L., & Yuan, Q. (2021). A consensus model for large-scale group decision-making based on the trust relationship considering leadership behaviors and noncooperative behaviors. Group Decision and Negotiation, 30(3), 553–586. 82. Yu, S. M., Zhang, X. T., & Du, Z. J. (2023). Enhanced minimum-cost consensus: Focusing on over adjustment and flexible consensus cost. Information Fusion, 89, 336–354. 83. Yuan, Y., Cheng, D., & Zhou, Z. (2021). A minimum adjustment consensus framework with compromise limits for social network group decision making under incomplete information. Information Sciences, 549, 249–268. 84. Zhang, H., Zhao, S., Kou, G., Li, C. C., Dong, Y., & Herrera, F. (2020). 
An overview on feedback mechanisms with minimum adjustment or cost in consensus reaching in group decision making: Research paradigms and challenges. Information Fusion, 60, 65–79. 85. Zhang, H., Kou, G., & Peng, Y. (2019). Soft consensus cost models for group decision making and economic interpretations. European Journal of Operational Research, 277(3), 964–980. 86. Zhong, X., Xu, X., & Pan, B. (2022). A non-threshold consensus model based on the minimum cost and maximum consensus-increasing for multi-attribute large group decision-making. Information Fusion, 77, 90–106. 87. Alonso, S., Herrera-Viedma, E., Chiclana, F., & Herrera, F. (2010). A web based consensus support system for group decision making problems and incomplete preferences. Information Sciences, 180(23), 4477–4495. 88. Giordano, R., Passarella, G., Uricchio, V. F., & Vurro, M. (2007). Integrating conflict analysis and consensus reaching in a decision support system for water resource management. Journal of Environmental Management, 84(2), 213–228. 89. Herrera-Viedma, E., Martínez, L., Mata, F., & Chiclana, F. (2005). A consensus support system model for group decision-making problems with multigranular linguistic preference relations. IEEE Transactions on fuzzy Systems, 13(5), 644–658. 90. Altuzarra, A., Moreno-Jiménez, J. M., & Salvador, M. (2010). Consensus building in AHPgroup decision making: A Bayesian approach. Operations Research, 58(6), 1755–1773. 91. Chen, X., Zhang, W., Xu, X., & Cao, W. (2022). Managing group confidence and consensus in intuitionistic fuzzy large group decision-making based on social media data mining. Group Decision and Negotiation, 31, 995–1023. 92. Yang, C., Gu, W., Ito, T., & Yang, X. (2021). Machine learning-based consensus decisionmaking support for crowd-scale deliberation. Applied Intelligence, 51(7), 4762–4773. 93. Li, X., Liao, H., & Wen, Z. (2021). 
A consensus model to manage the non-cooperative behaviors of individuals in uncertain group decision making problems during the COVID-19 outbreak. Applied Soft Computing, 99, 106879. 94. Wan, S. P., Yan, J., & Dong, J. Y. (2022). Personalized individual semantics based consensus reaching process for large-scale group decision making with probabilistic linguistic preference relations and application to COVID-19 surveillance. Expert Systems with Applications, 191, 116328.

20

2 Preliminary Knowledge

95. Xu, Y., Wen, X., & Zhang, W. (2018). A two-stage consensus method for large-scale multiattribute group decision making with an application to earthquake shelter selection. Computers & Industrial Engineering, 116, 113–129. 96. Yu, S., Du, Z., & Xu, X. (2021). Hierarchical punishment-driven consensus model for probabilistic linguistic large-group decision making with application to global supplier selection. Group Decision and Negotiation, 30(6), 1343–1372. 97. Zhang, Z. X., Hao, W. N., Yu, X. H., Chen, J. Y., & Xu, Y. W. (2019). A Bayesian approach to incomplete fuzzy reciprocal preference relations in consensus reaching process and its application in project performance evaluations. Journal of Intelligent & Fuzzy Systems, 37(1), 1415–1434. 98. Zhu, Y., Fan, C., Xiao, J., & Liu, S. (2021). Integrating a prospect theory-based consensusreaching process into large-scale quality function deployment and its application in the evaluation of contingency plan. Journal of Intelligent & Fuzzy Systems, 41(1), 575–594. 99. Ben-Arieh, D., & Easton, T. (2007). Multi-criteria group consensus under linear cost opinion elasticity. Decision Support Systems, 43(3), 713–721. 100. Dong, Y., Xu, Y., Li, H., & Feng, B. (2010). The OWA-based consensus operator under linguistic representation models using position indexes. European Journal of Operational Research, 203(2), 455–463. 101. Zhang, G., Dong, Y., Xu, Y., & Li, H. (2011). Minimum-cost consensus models under aggregation operators. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 41(6), 1253–1261. 102. Gong, Z., Zhang, H., Forrest, J., Li, L., & Xu, X. (2015). Two consensus models based on the minimum cost and maximum return regarding either all individuals or one individual. European Journal of Operational Research, 240(1), 183–192. 103. Zhang, B., Dong, Y., Zhang, H., & Pedrycz, W. (2020). Consensus mechanism with maximumreturn modifications and minimum-cost feedback: A perspective of game theory. 
European Journal of Operational Research, 287(2), 546–559. 104. Labella, Á., Liu, H., Rodríguez, R. M., & Martínez, L. (2020). A cost consensus metric for consensus reaching processes based on a comprehensive minimum cost model. European Journal of Operational Research, 281(2), 316–331. 105. Lu, Y., Xu, Y., Herrera-Viedma, E., & Han, Y. (2021). Consensus of large-scale group decision making in social network: the minimum cost model based on robust optimization. Information Sciences, 547, 910–930.

Chapter 3

Trust and Behavior Analysis-Based Structure-Heterogeneous Information Fusion

Abstract Structure-heterogeneous evaluation information refers to a decision situation in which decision-makers (DMs) use their individual sets of attributes to evaluate their individual sets of alternatives. This information situation commonly arises in social network large-scale decision-making because of differences in the knowledge and experience of DMs and the complexity of the decision-making problem itself. To address this type of heterogeneous evaluation information, this chapter proposes a trust and behavior analysis-based fusion method, which makes full use of the trust relationships between DMs. First, we analyze the behaviors of DMs in choosing alternatives and attributes, and classify them into three categories: Empty, Positive, and Negative. Then, distance measures among structure-heterogeneous evaluation information belonging to different categories of selection behaviors are defined, guided by trust values derived from the trust relationships among DMs. Building on these efforts, a complement method is developed to populate positions that have not been assigned values. To ensure the integrity of the fusion method, some extreme evaluation information scenarios are discussed.

Keywords Structure-heterogeneous evaluation information · Trust and behavior analysis-based fusion method (TBA-based fusion method) · Selection behavior · Trust relationship · Individual attribute · Individual alternative

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_3

3.1 Research Background and Problem Configuration

Classical group decision-making (GDM) generally involves a situation in which a group of decision-makers (DMs) provide their own opinions/preferences on a given set of alternatives according to a fixed set of attributes, and then select the optimal alternative(s) based on the aggregation of individual opinions/preferences [1–6]. A GDM problem thus usually contains three types of core elements: DMs, alternatives, and attributes [7–9]. Different treatments of any of these elements may lead to heterogeneity in the evaluation information obtained. To date, the research on heterogeneous GDM has fallen into two main areas: structural heterogeneity and expression heterogeneity. Structure-heterogeneous evaluation information is generated when DMs select different alternative sets or/and attribute sets, while expression-heterogeneous evaluation information arises because DMs provide different representations of attribute values. Table 3.1 presents the classification of heterogeneous evaluation information.

Table 3.1 Classification of the research on heterogeneous evaluation information

Expression-heterogeneous:
• Palomares et al. [10] (2013): Numerical domain, interval-valued domain, and linguistic domain
• Chen et al. [11] (2015): Utility values, preference orderings, numerical preference relations, linguistic preference relations
• Zhang et al. [12] (2015): Real numbers, interval numbers, linguistic variables, intuitionistic fuzzy numbers, hesitant fuzzy elements, and hesitant fuzzy linguistic term sets
• Zhang et al. [13] (2015): Ordinal, interval, fuzzy number, linguistic, intuitionistic fuzzy set, and real number
• Li et al. [14] (2016): Real numbers, interval numbers, triangular fuzzy numbers, trapezoidal fuzzy numbers
• Zhang et al. [15] (2019): Utility values, preference orderings, multiplicative preference relations and additive preference relations
• Liang et al. [16] (2020): Real numbers, interval numbers, trapezoidal fuzzy numbers, intuitionistic fuzzy numbers, and hesitant fuzzy numbers
• Zhang et al. [17] (2021): Interval numbers, triangular fuzzy numbers, and trapezoidal fuzzy numbers
• Kou et al. [18] (2021): Utility values, preference orderings, and (incomplete) multiplicative preference relations and (incomplete) fuzzy preference relations

Structure-heterogeneous:
• Dong et al. [19] (2016): Individual sets of attributes and alternatives
• Lourenzutti and Krohling [20] (2016): Individual sets of attributes

Due to the differences in knowledge and experience among DMs or the nature of the decision problem itself, DMs tend to use different sets of attributes, which we call individual sets of attributes in this chapter, to evaluate what they choose from a set of predefined alternatives [19]. Structure-heterogeneous MAGDM (SH-MAGDM) is defined as a decision situation where DMs evaluate their individual sets of alternatives with respect to their individual sets of attributes. In this setting, both the individual sets of attributes and the individual sets of alternatives may be heterogeneous. Lourenzutti and


Krohling [20] illustrated the usage scenarios of SH evaluation information through a case study on supplier evaluation. A brief description of the case is as follows. A company plans to re-evaluate the existing suppliers with which it works closely. First, the company's board seeks support from the managers of three core departments. Since the managers work in different buildings, getting them together to produce a unified set of evaluation attributes is a challenge. As a result, the board allows each manager to form an independent opinion based on the attributes they deem appropriate. In this case, the managers are likely to have their own sets of attributes because their priorities differ.

To address such an SH-MAGDM problem, Lourenzutti and Krohling [20] proposed a generalized Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, which allows DMs to independently define their respective attribute sets. Dong et al. [19] designed a consensus framework by which DMs could not only adjust their opinions but also update their individual sets of attributes and alternatives. To the best of our knowledge, social networks, as a valuable resource for decision-making, have not been introduced into the processing of structure-heterogeneous evaluation information.

With the development of communication technology and social media, social network relationships provide a new solution for handling SH evaluation information. Humans are social animals and, as such, individuals' decisions are usually influenced by those with whom they are close or have social network connections [21–23]. A positive social relationship can be described as a trust relationship between two DMs, which makes one DM susceptible to the influence of another DM he/she trusts. Trust is regarded as a reliable and direct social relationship. Some research has analyzed its role in the decision-making process, such as assigning importance weights to DMs [24, 25], promoting interactions and consensus [26–30], providing comments [31–33], and guiding the classification of DMs [21, 34–37]. In addition to these influences of trust on decision-making, we argue that trust can be a reliable resource for dealing with distance measurements and the fusion of SH evaluation information.

Previous studies have made important contributions to heterogeneous evaluation information and have reported meaningful achievements. Nevertheless, the following issues require further attention:

(1) Existing studies mainly use methods based on distances to the ideal solution, or implement a consensus-reaching process, to deal with SH evaluation information. We consider it a priority to investigate why heterogeneity exists in the selection of attributes and alternatives from the perspective of selection behavior.

(2) Current studies on SH evaluation information fail to consider the influence of social networks on the decision-making process. Some studies show that DMs make decisions by relying on the opinions/suggestions of acquaintances/friends around them or of others with similar interests in the same social network [38–41]. As a type of direct and positive social relationship, trust can be considered an important means of dealing with heterogeneous evaluation information.
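As a concrete illustration of how trust relationships can be organized for computation, the pairwise trust values are commonly collected in a trust sociomatrix. The sketch below is illustrative only: the values are made up, and the row/column weight rule shown is one common convention from the trust-based GDM literature, not necessarily the chapter's own weighting method.

```python
import numpy as np

# Illustrative trust sociomatrix for q = 4 DMs (values are invented):
# T[k, l] is the degree to which DM e_k trusts DM e_l, in [0, 1].
# The matrix need not be symmetric; the diagonal (self-trust) is
# conventionally excluded from weight computations.
T = np.array([
    [1.0, 0.8, 0.3, 0.5],
    [0.6, 1.0, 0.4, 0.7],
    [0.2, 0.5, 1.0, 0.9],
    [0.4, 0.6, 0.8, 1.0],
])

# One common (illustrative) use: derive an importance weight for each DM
# from the total trust he/she receives from the other DMs.
received = T.sum(axis=0) - np.diag(T)   # column sums without self-trust
weights = received / received.sum()     # normalize so weights sum to 1
```

A DM who is trusted more by the others thus receives a larger weight when individual opinions are fused.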


For SH evaluation information, this chapter proposes a trust and behavior analysis-based fusion method, which consists of four stages. First of all, we analyze why an attribute (or an alternative) is not selected for evaluation and derive three categories of selection behaviors, i.e., Empty, Negative, and Positive.

• In Stage 1, DMs are required to provide the trust values associated with the different types of selection behaviors taken by others, which are then organized as a trust sociomatrix.
• In Stage 2, distance measures for heterogeneous evaluation information are defined. Among these, trust is used as an important auxiliary measurement tool.
• In Stage 3, a novel weight determination method for DMs is developed.
• Stage 4 describes the algorithm for fusing SH evaluation information.

An SH-MAGDM problem emphasizes that DMs use their individual attributes to evaluate their individual alternatives. This is formally stated as follows. Let $X = \{x_1, x_2, \ldots, x_m\}$ $(m \ge 2)$ and $A = \{a_1, a_2, \ldots, a_n\}$ $(n \ge 2)$ be an alternative pond and an attribute pond, respectively, from which the DMs $E = \{e_1, e_2, \ldots, e_q\}$ independently select their individual alternatives and attributes for evaluation according to their own perceptions. Let $X^l = \{x_1^l, x_2^l, \ldots, x_m^l\}$ and $A^l = \{a_1^l, a_2^l, \ldots, a_n^l\}$ be the individual sets of alternatives and attributes associated with DM $e_l$. Each DM provides his/her individual evaluation information as $V^l = (v_{ij}^l)_{m \times n}$, where $v_{ij}^l$ is a crisp number representing the evaluation value assigned to alternative $x_i \in X$ with respect to attribute $a_j \in A$. For simplicity, the individual evaluation information is also referred to as the individual opinion in the following.

Example 3.1 A family of three from Shenzhen, China, plans to take a trip during China's National Day Golden Week. They have initially chosen five destinations in China: Shanghai, Changsha, Xiamen, Chengdu, and Urumqi (labeled as $X = \{x_1, x_2, x_3, x_4, x_5\}$). The family considers four evaluation attributes: price, distance, flow of people, and city awareness (labeled as $A = \{a_1, a_2, a_3, a_4\}$). Father focuses on the two attributes of distance and city awareness and is unwilling to choose a destination that is too far away; as a result, he excludes Chengdu and Urumqi. Mother is concerned about price and flow of people and hesitates between Changsha and Chengdu. This selection scenario can be described as the following decision-making problem. Given the alternative pond $X$ and attribute pond $A$, let $X^{\text{Father}} = \{x_1^{\text{Father}}, x_2^{\text{Father}}, x_3^{\text{Father}}\}$ and $A^{\text{Father}} = \{a_1^{\text{Father}}, a_2^{\text{Father}}\}$ be the individual sets of alternatives and attributes provided by the father. He provides his individual opinion as
$$V^{\text{Father}} = \begin{pmatrix} 0.3 & 0.5 \\ 0.6 & 0.4 \\ 0.3 & 0.2 \end{pmatrix}.$$
Similarly, the mother's opinion is obtained as
$$V^{\text{Mother}} = \begin{pmatrix} 0.4 & 0.6 \\ 0.2 & 0.7 \end{pmatrix}.$$
Since the child is so young, she is completely at the mercy of her parents.
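Example 3.1 can be encoded by embedding each partial opinion into the full $5 \times 4$ grid, with unselected cells left empty. The sketch below is only illustrative: the row/column ordering of the father's and mother's sets is an assumption, the helper name is invented, and the per-cell averaging at the end is a naive placeholder, not the chapter's trust-weighted TBA-based fusion method.

```python
import numpy as np

ALTS = ["Shanghai", "Changsha", "Xiamen", "Chengdu", "Urumqi"]   # x1..x5
ATTRS = ["price", "distance", "flow", "awareness"]               # a1..a4

def sh_matrix(rows, cols, values):
    """Embed an individual (partial) evaluation matrix into the full grid;
    unevaluated cells stay NaN."""
    m = np.full((len(ALTS), len(ATTRS)), np.nan)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            m[ALTS.index(r), ATTRS.index(c)] = values[i][j]
    return m

# Father: alternatives {Shanghai, Changsha, Xiamen}, attributes {distance, awareness}
father = sh_matrix(["Shanghai", "Changsha", "Xiamen"],
                   ["distance", "awareness"],
                   [[0.3, 0.5], [0.6, 0.4], [0.3, 0.2]])
# Mother: alternatives {Changsha, Chengdu}, attributes {price, flow}
mother = sh_matrix(["Changsha", "Chengdu"],
                   ["price", "flow"],
                   [[0.4, 0.6], [0.2, 0.7]])

# Naive fusion: per-cell average over the DMs who evaluated that cell.
# Cells no one evaluated remain NaN (the complement method in Sect. 3.3
# is precisely about filling such positions).
fused = np.nanmean(np.stack([father, mother]), axis=0)
```

Representing unselected positions explicitly (here as NaN) is what makes the structural heterogeneity visible to downstream distance and fusion computations.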


3.2 Analysis of the Selection Behaviors of Attributes and Alternatives

Consider a decision situation in which the DMs can choose attributes and alternatives freely and autonomously. This means that no DM is forced to choose all attributes. Some DMs select only those attributes in the attribute pond that they consider appropriate for evaluation, rather than all attributes, resulting in the heterogeneity of attributes. The same can happen in the selection of alternatives. Therefore, the following analysis regarding the selection of attributes can also be applied to the alternatives.

There are usually two ways for a DM to choose attributes: 'yes' and 'no'. The selection 'yes' means that the DM selects the attribute, while 'no' represents that the attribute is not selected. In this chapter, we stipulate that if a DM selects an attribute, it means that he/she thinks the attribute is relevant to the decision-making problem and, more importantly, that he/she can make a judgment about that attribute. On the other hand, a DM not choosing an attribute can be divided into two cases: (i) the DM knows too little about the attribute to evaluate it, and (ii) the DM considers that the attribute is irrelevant to the decision-making event and should be removed. Therefore, we classify the selection behaviors of attributes into three cases: Empty, Negative, and Positive (see Table 3.2).

Figure 3.1 presents the relationships among the three cases of selection behaviors regarding attributes, which consists of a gray board, three containers of different colors, and some black spheres. The three containers are used to store the attributes that belong to different cases. Specifically, the attributes in the blue cylinder container belong to Case Empty, the attributes in the red cuboid container belong to Case Negative, and the attributes in the green cuboid container belong to Case Positive. Initially, all the attributes (namely the black spheres in Fig. 3.1) are randomly placed on the gray board. Then, they are divided into different containers according to DMs' selection behaviors. Two observations can be made: (i) Cases Negative and Positive

Table 3.2 Classification of DMs' selection behaviors regarding an attribute

• Empty (selection 'no'). The selection behavior: the attribute is not selected for evaluation by a DM. The reason: the DM knows too little about the attribute and therefore cannot evaluate it with respect to the alternatives.
• Negative (selection 'no'). The selection behavior: the attribute is removed. The reason: the DM thinks the attribute is an unimportant aspect for the decision.
• Positive (selection 'yes'). The selection behavior: the DM argues that the attribute is relevant to the decision-making problem and should be selected for evaluation; the DM can evaluate it.


Fig. 3.1 Relationship between three cases of selection behaviors of attributes

represent two completely different views of the selection of attributes; therefore, the red and green containers are located at opposite ends of the gray board. (ii) Case Empty represents hesitation in selecting an attribute, so the blue container is a cylinder that rolls between Cases Negative and Positive.

Definition 3.1 Consider an SH-MAGDM problem with individual sets of attributes and alternatives. The case determination of attribute a_j can be characterized by the mapping function γ_l : a_j → Case, where Case is the case that a_j belongs to in the view of e_l, such that Case ∈ {Empty, Negative, Positive}. Similarly, the mapping function γ_l : x_i → Case is used to determine which case x_i belongs to.

Definition 3.2 A case determination of matrix element v_ij^l in V^l is characterized by the mapping function Γ_l : v_ij^l → Case, where Case is the case that v_ij^l belongs to in the view of e_l. Which case v_ij^l belongs to depends entirely on which cases its corresponding attribute a_j and alternative x_i belong to. Thus, we obtain

Γ_l(v_ij^l) = Γ_l(γ_l(a_j), γ_l(x_i)) = Γ_l(γ_l(x_i), γ_l(a_j)).     (3.1)
We can conclude that in determining which case a matrix element belongs to, its associated attributes and alternatives have equal status. In this book, we assume that the selection behavior of any attribute (or alternative) belongs to one of the three types of selection behaviors mentioned above. The following propositions for determining which case matrix element v_ij^l belongs to are provided.

Proposition 3.1 If the attribute and alternative associated with a matrix element belong to the same case, then the matrix element also belongs to that case.


Proof The proposition can be divided into the following three statements:

• If γ_l(a_j) = Empty and γ_l(x_i) = Empty, then Γ_l(v_ij^l) = Empty.
• If γ_l(a_j) = Negative and γ_l(x_i) = Negative, then Γ_l(v_ij^l) = Negative.
• If γ_l(a_j) = Positive and γ_l(x_i) = Positive, then Γ_l(v_ij^l) = Positive.

If the associated attribute a_j and alternative x_i belong to Case Empty, it indicates that DM e_l lacks knowledge of a_j and x_i, and thus, he/she cannot assign a value to matrix element v_ij^l. According to the statement of selection behaviors in Table 3.2, we have that v_ij^l belongs to Case Empty. The other two statements can be proved in a similar way. This completes the proof of Proposition 3.1. ∎

Proposition 3.2 If the attribute or alternative associated with a matrix element belongs to Case Negative, then the matrix element belongs to Case Negative.

Proof Without loss of generality, we set γ_l(a_j) = Negative and γ_l(x_i) ≠ Negative. Then, we need to prove that the following two statements are true:

• If γ_l(a_j) = Negative and γ_l(x_i) = Empty, then Γ_l(v_ij^l) = Negative.
• If γ_l(a_j) = Negative and γ_l(x_i) = Positive, then Γ_l(v_ij^l) = Negative.

First of all, since a_j belongs to Case Negative, v_ij^l cannot be assigned a value. This means that v_ij^l cannot be judged as belonging to Case Positive. Case Empty stems from a lack of understanding of an attribute, and thus, the attribute cannot be selected, while Case Negative indicates that the DM clearly recognizes that the attribute should be removed. In terms of information cognition, we set Γ_l(v_ij^l) = Negative. This completes the proof of Proposition 3.2. ∎

Proposition 3.2 states that Case Negative has the greatest priority in determining to which case a matrix element belongs.

Proposition 3.3 If the associated attribute and alternative belong to Case Empty and Case Positive, respectively, then the matrix element belongs to Case Empty.
Proof The proposition is equivalent to the statement:

• If γ_l(a_j) = Empty and γ_l(x_i) = Positive, then Γ_l(v_ij^l) = Empty.

Since a_j belongs to Case Empty, v_ij^l cannot be assigned a value. This means that v_ij^l cannot be judged as belonging to Case Positive. Meanwhile, v_ij^l cannot be considered to belong to Case Negative because e_l simply lacks knowledge of the attribute and cannot assert that it should be removed. Similarly, we have the statement: If γ_l(x_i) = Empty and γ_l(a_j) = Positive, then Γ_l(v_ij^l) = Empty. This completes the proof of Proposition 3.3. ∎

Fig. 3.2 Rules for determining to which case a matrix element is judged to belong

Proposition 3.3 illustrates that Case Empty has greater priority than Case Positive in determining to which case a matrix element belongs. Based on the above analysis, the following proposition can be concluded.

Proposition 3.4 A matrix element can be judged to belong to Case Positive if and only if both its associated attribute and alternative belong to Case Positive.

Proposition 3.4 is equivalent to the statement:

• Only if γ_l(a_j) = Positive and γ_l(x_i) = Positive, then Γ_l(v_ij^l) = Positive.

Clearly, Proposition 3.4 is true. It shows that Case Positive has the lowest priority in determining which case a matrix element belongs to. Figure 3.2 summarizes the rules for determining which case a matrix element belongs to.

It can be concluded that in an SH-MAGDM problem, DMs are required to provide two aspects of decision information:

• Individual selection behaviors with respect to attributes and alternatives, which present the distributions of all attributes and alternatives in the three cases, namely Empty, Negative, and Positive.
• Incomplete individual opinions V^l = (v_ij^l)_{m×n}, l = 1, 2, ..., q. If m(l) = m and n(l) = n, V^l is regarded as a complete individual opinion.

A comprehensive individual opinion is obtained, denoted as CV^l = (v_ij^{l,Case})_{m×n}, where v_ij^{l,Case} is the value assigned to the position (x_i, a_j) and Case is the case that the related matrix element belongs to. We use v_ij^{l,E}, v_ij^{l,N}, v_ij^{l,P} to represent the values


Table 3.3 Father's selection behaviors regarding attributes and alternatives

        x1        x2        x3        x4        x5        a1        a2        a3        a4
Case    Positive  Positive  Positive  Negative  Negative  Negative  Positive  Negative  Positive

of the positions and the associated cases they belong to (i.e., Empty, Negative, and Positive, respectively). Compared with v_ij^l, the symbol v_ij^{l,Case} not only contains the value of a matrix element but also indicates which selection behavior the matrix element belongs to. We set v_ij^{l,E} = ∅, v_ij^{l,N} = ∅. For simplicity, the comprehensive opinion is referred to as the individual opinion in the following part of this chapter.

Example 3.2 (Continuing Example 3.1) We obtain the father's selection behavior regarding the attributes and alternatives (see Table 3.3). His opinion is described as

CV^Father = [ ∅^N  0.3^P  ∅^N  0.5^P
              ∅^N  0.6^P  ∅^N  0.4^P
              ∅^N  0.3^P  ∅^N  0.2^P
              ∅^N  ∅^N    ∅^N  ∅^N
              ∅^N  ∅^N    ∅^N  ∅^N  ].
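The determination rules of Propositions 3.1 to 3.4 reduce to a simple priority ordering, Negative over Empty over Positive. The following Python sketch makes this concrete; the function name and string labels are illustrative choices of ours, not the book's:

```python
def element_case(attr_case: str, alt_case: str) -> str:
    """Case of matrix element v_ij given the cases of its attribute a_j
    and alternative x_i; symmetric in its arguments, as in Eq. (3.1)."""
    cases = {attr_case, alt_case}
    if "Negative" in cases:   # Proposition 3.2: Negative has the greatest priority
        return "Negative"
    if "Empty" in cases:      # Proposition 3.3: Empty dominates Positive
        return "Empty"
    return "Positive"         # Proposition 3.4: both must be Positive

# Father's view of position (x1, a1): a1 is Negative, x1 is Positive
print(element_case("Negative", "Positive"))  # Negative
```

This reproduces every arrow of Fig. 3.2 with a two-branch priority test, which is why the figure needs only seven panels rather than nine: symmetry in the two arguments collapses the mirrored combinations.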

3.3 Procedure of Trust and Behavior Analysis-Based Fusion Method

Since DMs have their own selection behaviors for attributes and alternatives, it is inconvenient to directly fuse the evaluation information. Several studies point out that trust relationships between DMs play an important role in the decision-making process, such as assigning the weights of DMs [23, 25], guiding the classification of DMs [21, 35, 37], influencing the consensus-reaching process and opinion dynamics [26, 28, 29, 36], providing recommendations and reviews [32, 40–43], etc. In this section, we propose an SH information fusion method based on trust and behavior analysis. The method consists of four stages.

• Stage 1. Constructing the trust sociomatrix,
• Stage 2. Calculating the distance between individual opinions,
• Stage 3. Determining the weights of the DMs,
• Stage 4. Fusing structure-heterogeneous individual opinions.


3.3.1 Constructing the Trust Sociomatrix

This chapter adopts one type of social network, namely the trust social network, in which DMs explicitly express their opinions as trust and distrust statements. Recall that a complete trust sociomatrix on E is obtained as TSoM = (tso_lh)_{q×q}, in which tso_lh represents the trust degree and 1 − tso_lh the distrust degree that DM e_l assigns to DM e_h.

We use (Empty, Positive) to represent the situation where one DM thinks that the matrix element at a certain position belongs to Case Empty, while the other DM considers the matrix element at the same position to belong to Case Positive. Similar explanations apply to the other forms of (Case, Case). It is often inconvenient to directly calculate the distance between two matrix elements that belong to different selection behaviors. For example, it is difficult to calculate the distance between two matrix elements belonging to (Empty, Positive) using traditional distance formulas, such as the Manhattan distance or the Euclidean distance, because the former DM failed to assign a value to the position. This section considers that DMs can use the trust relationships among them to select appropriate references to assist in distance measurements. According to the selection behaviors and cognitive levels of DMs, the basic principles for selecting references are specified as follows.

(1) No reference is needed to measure the distance between two DMs' opinions on a position for which they have the same selection behavior. This means that the distance measure of two matrix elements belonging to (Empty, Empty), (Negative, Negative), or (Positive, Positive) can be implemented directly. In particular, we stipulate that d(v_ij^{l,E}, v_ij^{h,E}) = 0 and d(v_ij^{l,N}, v_ij^{h,N}) = 0.
(2) When calculating the distance of two DMs' opinions belonging to (Empty, Positive) or (Negative, Positive), the DM judged to hold a selection behavior of Empty or Negative needs to take the reference selection.
(3) When calculating the distance of (Empty, Negative) or (Empty, Positive), the DM judged to hold a selection behavior of Empty needs to take the reference selection. A DM is unable to assign a value to the position due to the lack of knowledge about the associated attributes or alternatives. This situation drives the DM to select an appropriate reference to help calculate the distance to others.
(4) From the perspective of emphasizing the cognition of the decision-making problem, only a selection behavior with a clear understanding of a certain position can be used as a reference. In other words, the selection behavior of Negative or Positive can be used as a reference, but that of Empty cannot.
(5) Trust is used to manage the relative importance between the selected references.

Based on the above principles, Table 3.4 shows all the cases for reference selection. When calculating the distance of (Case, Case), if a reference is needed, then the item "Needs references" is marked as "Yes". The following describes the role of trust in measuring the distances among individual opinions and assigning weights to the selected references, which can be divided into three situations:


Table 3.4 Summary of reference selection

(Case, Case)          Needs references   The one that needs to seek references
(Empty, Empty)        No                 –
(Empty, Negative)     Yes                Empty
(Empty, Positive)     Yes                Empty
(Negative, Empty)     Yes                Empty
(Negative, Negative)  No                 –
(Negative, Positive)  Yes                Negative
(Positive, Empty)     Yes                Empty
(Positive, Negative)  Yes                Negative
(Positive, Positive)  No                 –

• (Empty, Negative). Suppose there are two individual opinions v_ij^{l,E} and v_ij^{h,N} with respect to the position (x_i, a_j). According to the third principle, when calculating the distance between v_ij^{l,E} and v_ij^{h,N}, DM e_l may seek computational references. e_l has little knowledge of the related attribute or alternative, whereas e_h provides a clear judgment. In this situation, the distance measure can be divided into two parts according to e_l's trust in e_h. One part is that e_l recognizes v_ij^{h,N} at the trust degree of tso_lh; the other part is that e_l distrusts e_h, and thus, e_l needs to find other references to measure the distance from e_h.
• (Empty, Positive). Let v_ij^{l,E} and v_ij^{h,P} be two individual opinions with respect to the position (x_i, a_j). As e_l holds the selection behavior of Empty for the matrix element at the position (x_i, a_j), e_l can choose to trust e_h's opinion on the position at the level of tso_lh. For the distrust degree 1 − tso_lh, e_l can seek other references to measure the distance from e_h.
• (Negative, Positive). Let v_ij^{l,N} and v_ij^{h,P} be two individual opinions with respect to the position (x_i, a_j). They have opposite selection behaviors for the same position. However, setting the distance between them to 1 is inappropriate because Case Negative represents only one result, while Case Positive admits multiple assignments to the position. Therefore, when measuring the distance between v_ij^{l,N} and v_ij^{h,P}, if e_l distrusts e_h, then the distance is set to 1. If e_l trusts e_h's opinion at the level of tso_lh, e_l can choose other references to measure the distance from e_h.
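The reference-seeking rules summarized in Table 3.4 can be captured in a few lines. This Python sketch (the function name is ours) returns which selection behavior must seek references for a given pair of cases:

```python
def reference_seeker(case_l: str, case_h: str):
    """Which side of a (Case, Case) pair must seek references, per Table 3.4.
    Returns 'Empty' or 'Negative', or None when no reference is needed."""
    if case_l == case_h:              # (E,E), (N,N), (P,P): distance is direct
        return None
    if "Empty" in (case_l, case_h):   # Empty lacks a value, so it always seeks
        return "Empty"
    return "Negative"                 # the remaining pair is (Negative, Positive)

print(reference_seeker("Empty", "Positive"))    # Empty
print(reference_seeker("Negative", "Positive")) # Negative
```

Note the asymmetry with the case-priority rule of Sect. 3.2: there, Negative dominates Empty; here, Empty is the side that seeks references even against Negative, because Empty reflects missing knowledge rather than a firm judgment.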

Figure 3.3 illustrates the logic of selecting references to assist in calculating the distance measures of heterogeneous evaluation information. Of particular concern is the calculation of the distance of (Negative, Positive). Even if the DM with the selection behavior of Negative trusts the DM with the selection behavior of Positive, the former DM should seek other computational references. This is because Case Negative and Case Positive represent two completely different selection behaviors. Trust relationships come from the accumulation of past interactions among DMs, as well as from authority, expertise, reputation, and familiarity, among other



Fig. 3.3 Logical diagram of selecting references to assist in the distance measures of SH evaluation information

factors [21, 25, 44]. We consider that the trust degree assigned to the same DM may differ according to the selection behaviors. The following three types of trust sociomatrices are presented: TSoM^{E→N} = (tso_lh^{E→N})_{q×q}, TSoM^{E→P} = (tso_lh^{E→P})_{q×q}, and TSoM^{N→P} = (tso_lh^{N→P})_{q×q}, where tso_lh^{E→N}, tso_lh^{E→P}, tso_lh^{N→P} ∈ [0, 1], l, h = 1, 2, ..., q. tso_lh^{E→N} represents the trust degree from e_l to e_h under the condition that Γ_l(v_ij^{l,Case}) = Empty and Γ_h(v_ij^{h,Case}) = Negative. The other two types of trust sociomatrices can be similarly interpreted.

3.3.2 Calculating the Distance Between SH Evaluation Information

Since the dimensions and measures of attributes are often different, attribute values should be normalized. Attributes are usually given as benefit or cost attributes [45–47]. By using the method in [8], the individual opinions CV^l = (v_ij^{l,Case})_{m×n} are transformed into normalized opinions CR^l = (r_ij^{l,Case})_{m×n}, l = 1, 2, ..., q. It is worth noting that when a matrix element belongs to Case Empty or Case Negative, it does not need to be normalized. Therefore, we let r_ij^{l,E} = v_ij^{l,E}, r_ij^{l,N} = v_ij^{l,N}. We specify the following calculation rules:


(1) d(r_ij^{l,E}, r_ij^{h,E}) = 0 and d(r_ij^{l,N}, r_ij^{h,N}) = 0. This indicates that the distance of (Empty, Empty) or (Negative, Negative) is defined as 0 because the DMs hold the same selection behavior and none of them assign a value to the position (x_i, a_j).
(2) The distance of (Positive, Positive) can be calculated by classical measure formulas, such as the Manhattan distance d(r_ij^{l,P}, r_ij^{h,P}) = |r_ij^{l,P} − r_ij^{h,P}| [48].
(3) As shown in Fig. 3.3, Cases Empty and Negative mean that a certain position is not assigned a value. In this case, it is necessary to select references to assist the distance calculation.

Based on the above statement, we focus on the distance measures regarding the situations presented in Fig. 3.3 in the following.

Definition 3.3 Given matrix elements r_ij^{l,N}, r_ij^{h,P}, and trust sociomatrix TSoM^{N→P}, the distance between r_ij^{l,N} and r_ij^{h,P} can be defined as

d(r_ij^{l,N}, r_ij^{h,P}) = { tso_lh^{N→P} · f^{N→P}(r_ij^{l→h,N→P}) + (1 − tso_lh^{N→P}) · 1,   q(r_ij^{l→h,N→P}) > 0
                            { 1,                                                               q(r_ij^{l→h,N→P}) = 0     (3.2)

where r_ij^{l→h,N→P} is the selected reference used to assist in calculating the value of d(r_ij^{l,N}, r_ij^{h,P}), and f^{N→P}(·) is a function of r_ij^{l→h,N→P} meeting the condition 0 ≤ f^{N→P}(·) ≤ 1.

Since DMs e_l and e_h have completely different selection behaviors for the position (x_i, a_j), if there is no computational reference, we can set that distance to 1. However, e_l trusts e_h to a certain extent (i.e., tso_lh^{N→P}), and thus, e_l can choose some references to assist the distance measure. tso_lh^{N→P} suggests that e_l agrees at a score of tso_lh^{N→P} that the position should be assigned. Thus, e_l can select the DMs that consider the position (x_i, a_j) to belong to Case Positive as the computational references, such that

r_ij^{l→h,N→P} = { d(r_ij^{h,P}, r_ij^{h',P}) | 0 ≤ q(r_ij^{h',P}) ≤ q − 2 }     (3.3)

where e_h' ∈ E, h' ≠ h, r_ij^{h',P} represents the evaluation value provided by e_h', and the related position belongs to Case Positive. q − 2 indicates that e_l and e_h are excluded from the reference selection. q(r_ij^{h',P}) is the number of r_ij^{h',P}. If q(r_ij^{h',P}) = 0, it means that no DM considers the position to belong to Case Positive except e_h. In this case, the distance d(r_ij^{l,N}, r_ij^{h,P}) can be set to 1 because of the completely different selection behaviors between e_l and e_h with respect to that position. To show the selected references more clearly, we change r_ij^{l→h,N→P} to r_ij^{l→h,N→P→P}, meeting that r_ij^{l→h,N→P→P} = r_ij^{l→h,N→P}.

Suppose there are multiple computational references. We use the function f^{N→P} to process them. Without loss of generality, by using the arithmetic mean operator, we have that

f^{N→P}(r_ij^{l→h,N→P→P}) = (1 / q(r_ij^{h',P})) Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,P}, r_ij^{h',P})     (3.4)

Clearly, 0 ≤ f^{N→P}(r_ij^{l→h,N→P→P}) ≤ 1.

Property 3.1 Let r_ij^{l,N}, r_ij^{h,P} be the same as described in Definition 3.3. We can obtain the following statements:
(1) d(r_ij^{l,N}, r_ij^{h,P}) = d(r_ij^{h,P}, r_ij^{l,N}),
(2) 0 ≤ d(r_ij^{l,N}, r_ij^{h,P}) ≤ 1.

Property 3.1 can be easily proved.

Definition 3.4 Given matrix elements r_ij^{l,E}, r_ij^{h,N}, and trust sociomatrix TSoM^{E→N}, the distance between r_ij^{l,E} and r_ij^{h,N} can be defined as

d(r_ij^{l,E}, r_ij^{h,N}) = { tso_lh^{E→N} · 0 + (1 − tso_lh^{E→N}) · f^{E→N}(r_ij^{l→h,E→N}),   q(r_ij^{l→h,E→N}) > 0
                            { 0,                                                               q(r_ij^{l→h,E→N}) = 0     (3.5)

where r_ij^{l→h,E→N} is the selected reference used to assist in calculating the distance d(r_ij^{l,E}, r_ij^{h,N}), and f^{E→N} is a function of r_ij^{l→h,E→N}.

According to the third and fourth principles, e_l should select the DMs that consider the position to belong to Case Negative or Positive as the computational references. Therefore, we consider that e_l can select two types of references: (i) r_ij^{h',N}, which shows that e_l chooses e_h''s opinion about the position as the reference, and e_h' considers the position to belong to Case Negative; (ii) r_ij^{h',P}, which indicates that the selected reference, i.e., e_h''s opinion about the position, holds that the position belongs to Case Positive. Specifically, we have

r_ij^{l→h,E→N→N} = { d(r_ij^{h,N}, r_ij^{h',N}) | 0 ≤ q(r_ij^{h',N}) ≤ q − 2 }     (3.6)

r_ij^{l→h,E→P→N} = { d(r_ij^{h,N}, r_ij^{h',P}) | 0 ≤ q(r_ij^{h',P}) ≤ q − 2 }     (3.7)

where e_h' ∈ E, h' ≠ h, and r_ij^{l→h,E→N→N} and r_ij^{l→h,E→P→N} are the references that consider the position (x_i, a_j) to belong to Cases Negative and Positive, respectively. q − 2 indicates that e_l and e_h are excluded from the reference selection. q(r_ij^{h',N}) is the number of r_ij^{h',N}, and q(r_ij^{h',P}) is the number of r_ij^{h',P}. We see that r_ij^{l→h,E→N} can be divided into two categories, namely r_ij^{l→h,E→N→N} and r_ij^{l→h,E→P→N}. If q(r_ij^{h',N}) = 0 and q(r_ij^{h',P}) = 0, it means that no DM has a clear judgment on the assignment of position (x_i, a_j) except e_h. In this case, the distance d(r_ij^{l,E}, r_ij^{h,N}) can be set to 0, as there are no references. Equation (3.5) can be improved to the following:

d(r_ij^{l,E}, r_ij^{h,N}) =
  { tso_lh^{E→N} · 0 + (1 − tso_lh^{E→N}) · f_1^{E→N}(r_ij^{l→h,E→N→N}, r_ij^{l→h,E→P→N}),   q(r_ij^{h',N}) > 0 and q(r_ij^{h',P}) > 0
  { tso_lh^{E→N} · 0 + (1 − tso_lh^{E→N}) · f_2^{E→N}(r_ij^{l→h,E→P→N}),                     q(r_ij^{h',N}) = 0 and q(r_ij^{h',P}) > 0
  { tso_lh^{E→N} · 0 + (1 − tso_lh^{E→N}) · f_3^{E→N}(r_ij^{l→h,E→N→N}),                     q(r_ij^{h',N}) > 0 and q(r_ij^{h',P}) = 0
  { 0,                                                                                       q(r_ij^{h',N}) = 0 and q(r_ij^{h',P}) = 0     (3.8)

Different functions are used to deal with multiple computational references. By using the arithmetic mean operator, we have that

f_1^{E→N}(r_ij^{l→h,E→N→N}, r_ij^{l→h,E→P→N}) = [Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,N}, r_ij^{h',N}) + Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,N}, r_ij^{h',P})] / [q(r_ij^{h',N}) + q(r_ij^{h',P})]     (3.9)

f_2^{E→N}(r_ij^{l→h,E→P→N}) = Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,N}, r_ij^{h',P}) / q(r_ij^{h',P})     (3.10)

f_3^{E→N}(r_ij^{l→h,E→N→N}) = Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,N}, r_ij^{h',N}) / q(r_ij^{h',N})     (3.11)

Clearly, 0 ≤ f_1^{E→N}(·) ≤ 1, 0 ≤ f_2^{E→N}(·) ≤ 1, and 0 ≤ f_3^{E→N}(·) ≤ 1.

Property 3.2 Let r_ij^{l,E}, r_ij^{h,N} be the same as in Definition 3.4. The following statements hold:
(1) d(r_ij^{l,E}, r_ij^{h,N}) = d(r_ij^{h,N}, r_ij^{l,E}),
(2) 0 ≤ d(r_ij^{l,E}, r_ij^{h,N}) ≤ 1.

Property 3.2 can be easily proved.

Definition 3.5 Given matrix elements r_ij^{l,E}, r_ij^{h,P}, and trust sociomatrix TSoM^{E→P}, the distance between r_ij^{l,E} and r_ij^{h,P} can be defined as

d(r_ij^{l,E}, r_ij^{h,P}) = { tso_lh^{E→P} · 0 + (1 − tso_lh^{E→P}) · f^{E→P}(r_ij^{l→h,E→P}),   q(r_ij^{l→h,E→P}) > 0
                            { 0,                                                               q(r_ij^{l→h,E→P}) = 0     (3.12)

where r_ij^{l→h,E→P} is the selected reference used to assist in calculating the value of d(r_ij^{l,E}, r_ij^{h,P}), and f^{E→P}(·) is a function of r_ij^{l→h,E→P}.

As e_l has little knowledge for assigning a value to the position (x_i, a_j), he/she should select the other DMs that consider the position to belong to Case Negative or Positive as the references. We consider that e_l can select two types of references: (i) r_ij^{h',P}, which shows that e_l chooses e_h''s opinion about the position as the reference, and e_h' considers the position to belong to Case Positive; (ii) r_ij^{h',N}, which indicates that the selected reference, i.e., e_h''s opinion about the position, holds that the position belongs to Case Negative. Specifically, we have

r_ij^{l→h,E→P→P} = { d(r_ij^{h,P}, r_ij^{h',P}) | 0 ≤ q(r_ij^{h',P}) ≤ q − 2 }     (3.13)

r_ij^{l→h,E→N→P} = { d(r_ij^{h,P}, r_ij^{h',N}) | 0 ≤ q(r_ij^{h',N}) ≤ q − 2 }     (3.14)

where e_h' ∈ E, h' ≠ h, and r_ij^{l→h,E→P→P} and r_ij^{l→h,E→N→P} are the references that consider the position (x_i, a_j) to belong to Cases Positive and Negative, respectively. q(r_ij^{h',P}) is the number of r_ij^{h',P}, and q(r_ij^{h',N}) is the number of r_ij^{h',N}. q − 2 indicates that e_l and e_h are excluded from the reference selection. We see that r_ij^{l→h,E→P} can be divided into two types, namely r_ij^{l→h,E→P→P} and r_ij^{l→h,E→N→P}. If q(r_ij^{h',P}) = 0 and q(r_ij^{h',N}) = 0, it means that no DM considers the position to belong to Case Positive or Negative except e_h. In this case, the distance d(r_ij^{l,E}, r_ij^{h,P}) can be set to 0 because e_l can only rely on e_h's opinion on the position and has no other reference. Equation (3.12) can be improved to the following:

d(r_ij^{l,E}, r_ij^{h,P}) =
  { tso_lh^{E→P} · 0 + (1 − tso_lh^{E→P}) · f_1^{E→P}(r_ij^{l→h,E→P→P}, r_ij^{l→h,E→N→P}),   q(r_ij^{h',N}) > 0 and q(r_ij^{h',P}) > 0
  { tso_lh^{E→P} · 0 + (1 − tso_lh^{E→P}) · f_2^{E→P}(r_ij^{l→h,E→N→P}),                     q(r_ij^{h',P}) = 0 and q(r_ij^{h',N}) > 0
  { tso_lh^{E→P} · 0 + (1 − tso_lh^{E→P}) · f_3^{E→P}(r_ij^{l→h,E→P→P}),                     q(r_ij^{h',P}) > 0 and q(r_ij^{h',N}) = 0
  { 0,                                                                                       q(r_ij^{h',P}) = 0 and q(r_ij^{h',N}) = 0     (3.15)

We use different functions to address multiple references. By using the arithmetic mean operator, we have that

f_1^{E→P}(r_ij^{l→h,E→P→P}, r_ij^{l→h,E→N→P}) = [Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,P}, r_ij^{h',P}) + Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,P}, r_ij^{h',N})] / [q(r_ij^{h',P}) + q(r_ij^{h',N})]     (3.16)

f_2^{E→P}(r_ij^{l→h,E→N→P}) = Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,P}, r_ij^{h',N}) / q(r_ij^{h',N})     (3.17)

f_3^{E→P}(r_ij^{l→h,E→P→P}) = Σ_{e_h' ∈ E, h' ≠ h} d(r_ij^{h,P}, r_ij^{h',P}) / q(r_ij^{h',P})     (3.18)

Clearly, 0 ≤ f_1^{E→P}(·) ≤ 1, 0 ≤ f_2^{E→P}(·) ≤ 1, and 0 ≤ f_3^{E→P}(·) ≤ 1.

Property 3.3 Let r_ij^{l,E}, r_ij^{h,P} be the same as in Definition 3.5. The following statements hold:
(1) d(r_ij^{l,E}, r_ij^{h,P}) = d(r_ij^{h,P}, r_ij^{l,E}),
(2) 0 ≤ d(r_ij^{l,E}, r_ij^{h,P}) ≤ 1.

Property 3.3 can be easily proved.

Definition 3.6 The distance between two individual opinions is defined as

d^{Dis}(CR^l, CR^h) = (1 / (m × n)) Σ_{i=1}^m Σ_{j=1}^n d^{Dis}(r_ij^{l,Case}, r_ij^{h,Case})     (3.19)

where d^{Dis}(r_ij^{l,Case}, r_ij^{h,Case}) is the distance between r_ij^{l,Case} and r_ij^{h,Case}, which can be calculated by Definitions 3.3–3.5. Clearly, 0 ≤ d^{Dis}(CR^l, CR^h) ≤ 1.
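As a concrete illustration, the following Python sketch computes the (Negative, Positive) element distance of Eq. (3.2), with the arithmetic-mean reference function of Eq. (3.4), and then averages element distances as in Eq. (3.19). Function names and the list-based data layout are illustrative assumptions of ours, not the book's:

```python
def dist_neg_pos(r_h, refs, tso_lh):
    """Eq. (3.2): e_l holds Negative, e_h holds Positive with value r_h.
    refs holds the positive values r^{h',P} of DMs h' != l, h (the references);
    tso_lh is e_l's trust in e_h under (Negative, Positive)."""
    if not refs:                      # q(...) = 0: no reference, distance is 1
        return 1.0
    # Eq. (3.4): arithmetic mean of Manhattan distances from r_h to the references
    f = sum(abs(r_h - r) for r in refs) / len(refs)
    return tso_lh * f + (1 - tso_lh) * 1.0

def opinion_distance(elem_dists):
    """Eq. (3.19): average of the m x n element-wise distances,
    passed here as a 2-D list."""
    m, n = len(elem_dists), len(elem_dists[0])
    return sum(sum(row) for row in elem_dists) / (m * n)

d = dist_neg_pos(0.6, [0.5, 0.7], 0.8)  # f = 0.1, so d = 0.8*0.1 + 0.2*1 = 0.28
```

The worked value shows the intended behavior: when e_l fully distrusts e_h (tso_lh = 0) the distance collapses to 1, while higher trust shifts it toward the reference-based estimate f.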

3.3.3 Generating the Weights of DMs

Before aggregating individual opinions, the weights of DMs should be determined. According to the majority principle, the greater the difference between the opinion of a DM and those of the others, the less weight the DM should be given [49, 50]. Individual selection behaviors regarding attributes and alternatives can also be regarded as an important factor in determining the weights of DMs in SH-MAGDM problems.


Definition 3.7 Given any two individual opinions CR^l = (r_ij^{l,Case})_{m×n} and CR^h = (r_ij^{h,Case})_{m×n}, the difference between them regarding the selection of the position (x_i, a_j) is defined as

d^{Sel}(r_ij^{l,Case}, r_ij^{h,Case}) = { 0, if Γ(r_ij^{l,Case}) = Γ(r_ij^{h,Case})
                                        { 1, if Γ(r_ij^{l,Case}) ≠ Γ(r_ij^{h,Case})     (3.20)

Therefore, the overall difference between them regarding the selection of all matrix elements is computed by

d^{Sel}(CR^l, CR^h) = (1 / (m × n)) Σ_{i=1}^m Σ_{j=1}^n d^{Sel}(r_ij^{l,Case}, r_ij^{h,Case})     (3.21)

Clearly, 0 ≤ d^{Sel}(CR^l, CR^h) ≤ 1.

We know that d^{Dis}(CR^l, CR^h) is used to measure the distance between two individual opinions, and d^{Sel}(CR^l, CR^h) calculates the difference in individual selection behaviors. Both of them are related to the calculation of the weights of DMs. We aggregate these two indicators to obtain the fusion value associated with the weight of DM e_l as

φ(e_l) = [Σ_{h=1,h≠l}^q d^{Dis}(CR^l, CR^h) / Σ_{l=1}^q Σ_{h=1,h≠l}^q d^{Dis}(CR^l, CR^h)] · [Σ_{h=1,h≠l}^q d^{Sel}(CR^l, CR^h) / Σ_{l=1}^q Σ_{h=1,h≠l}^q d^{Sel}(CR^l, CR^h)]     (3.22)

When Σ_{l=1}^q Σ_{h=1,h≠l}^q d^{Sel}(CR^l, CR^h) = 0, we consider that the value of φ(e_l) depends entirely on the distance measures among individual opinions, and vice versa. In this way, the weight of DM e_l can be calculated by

λ_l = (1 − φ(e_l)) / Σ_{l=1}^q (1 − φ(e_l)), ∀ e_l ∈ E.     (3.23)
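The weighting scheme of Eqs. (3.22) and (3.23) can be sketched as follows in Python. The function name and the handling of the degenerate all-zero cases (reading "vice versa" as falling back to the remaining indicator) are our assumptions, not the book's:

```python
def dm_weights(d_dis, d_sel):
    """Weights lambda_l of Eqs. (3.22)-(3.23) from two symmetric q x q
    matrices: opinion distances d_dis and selection differences d_sel."""
    q = len(d_dis)

    def ratios(d):
        # each DM's share of the grand total deviation (None if the total is 0)
        tot = sum(d[l][h] for l in range(q) for h in range(q) if h != l)
        if tot == 0:
            return None
        return [sum(d[l][h] for h in range(q) if h != l) / tot for l in range(q)]

    r_dis, r_sel = ratios(d_dis), ratios(d_sel)
    if r_dis is None and r_sel is None:
        phi = [0.0] * q              # all opinions identical: equal weights
    elif r_sel is None:              # phi depends on the distances only
        phi = r_dis
    elif r_dis is None:              # ... and vice versa
        phi = r_sel
    else:
        phi = [a * b for a, b in zip(r_dis, r_sel)]   # Eq. (3.22)
    denom = sum(1 - p for p in phi)
    return [(1 - p) / denom for p in phi]             # Eq. (3.23)
```

Because φ(e_l) grows with a DM's deviation from the group, Eq. (3.23) inverts it: the DM farthest from the others receives the smallest weight, matching the majority principle.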

Clearly, $0 \le \lambda_l \le 1$, $l = 1, 2, \ldots, q$, and $\sum_{l=1}^{q} \lambda_l = 1$. Equally importantly, Cases Negative and Positive indicate that a certain DM has a clear understanding of the related position, while Case Empty represents that the DM fails to assign a value to the position. In terms of the clarity of decision recognition, a DM with Case Negative or Positive should be given greater weight than one with Case Empty. Therefore, we define the decision clarity index as follows.

Definition 3.8 The decision clarity index of DM $e_l$ can be defined as

$$
DCI_l = 1 - \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} num\left(\Gamma_l(r_{ij}^{Case}) == Empty\right)}{m \times n}
\qquad (3.24)
$$

3.3 Procedure of Trust and Behavior Analysis-Based Fusion Method


where $num(\Gamma_l(r_{ij}^{Case}) == Empty)$ is the number of matrix elements in $CR^l$ that belong to Case Empty. We can use the decision clarity index to adjust the weight as:

$$
\lambda_l^a = \frac{DCI_l \cdot \lambda_l}{\sum_{l=1}^{q} DCI_l \cdot \lambda_l}
\qquad (3.25)
$$

Clearly, $0 \le \lambda_l^a \le 1$, $l = 1, 2, \ldots, q$, and $\sum_{l=1}^{q} \lambda_l^a = 1$.
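The weight derivation of Eqs. (3.22)–(3.25) can be sketched as follows. This is a simplified illustration with hypothetical input data; it assumes the pairwise distance matrix $d^{Dis}$ and selection-difference matrix $d^{Sel}$ have already been computed, and handles an all-zero indicator by dropping that factor, as described above:

```python
import numpy as np

def dm_weights(d_dis, d_sel, dci):
    """Sketch of Eqs. (3.22)-(3.25): adjusted DM weights from the pairwise
    distance matrix d_dis, the pairwise selection-difference matrix d_sel,
    and the decision clarity indexes dci."""
    d_dis, d_sel, dci = map(np.asarray, (d_dis, d_sel, dci))
    q = len(dci)

    def share(d):
        # Row sums over h != l, normalized by the total over all pairs;
        # if every entry is 0, this indicator is dropped (factor of 1).
        np.fill_diagonal(d, 0.0)
        total = d.sum()
        return d.sum(axis=1) / total if total > 0 else np.ones(q)

    phi = share(d_dis.copy()) * share(d_sel.copy())   # Eq. (3.22)
    lam = (1 - phi) / (1 - phi).sum()                 # Eq. (3.23)
    lam_a = dci * lam / (dci * lam).sum()             # Eq. (3.25)
    return lam_a

# Hypothetical symmetric indicator matrices for three DMs
d_dis = np.array([[0.0, 0.2, 0.4], [0.2, 0.0, 0.3], [0.4, 0.3, 0.0]])
d_sel = np.array([[0.0, 0.5, 0.0], [0.5, 0.0, 0.5], [0.0, 0.5, 0.0]])
dci = np.array([1.0, 0.5, 1.0])
w = dm_weights(d_dis, d_sel, dci)
print(w, w.sum())  # nonnegative weights that sum to 1
```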

3.3.4 Fusing SH Individual Evaluation Information

The reason we cannot directly fuse matrix elements belonging to different cases is that Case Empty or Case Negative would cause the relevant positions to be unassigned. In this section, we propose a complementary method that aims to fill in the unassigned positions. As described in Table 3.2, Case Negative means that a certain DM considers that a position should be removed, because the associated attribute or alternative should be removed. From another perspective, this is equivalent to requiring that the relevant alternative is not selected as the best and that the relevant attribute has no effect on the evaluation. Therefore, for the alternative that belongs to Case Negative, we can assign a minimum value of 0 to the corresponding position according to each attribute. For the attribute that belongs to Case Negative, we assign the same value to the corresponding position. Suppose alternative $x_i$ in the opinion $r_{ij}^{l,N}$ belongs to Case Negative. We can obtain the following equation

$$
rc_{ij}^{l,N} = 0, \quad j = 1, 2, \ldots, n, \quad \text{for } \gamma_l(x_i) = Negative,
\qquad (3.26)
$$

where $rc_{ij}^{l,N}$ is the updated value of $r_{ij}^{l,N}$. After exploring the complementary method for alternatives, we proceed to attributes. If attribute $a_j$ in the opinion $r_{ij}^{l,N}$ belongs to Case Negative, then we have

$$
rc_{ij}^{l,N} = 0, \quad i = 1, 2, \ldots, m, \quad \text{for } \gamma_l(a_j) = Negative.
\qquad (3.27)
$$

By using Eqs. (3.26) and (3.27), the position $(x_i, a_j)$ that belongs to Case Negative is assigned a value. Case Empty shows that a DM has little understanding of the attribute or the alternative and therefore cannot assign a value to the relevant position. In this case, the complement for the position depends on the DMs who consider the position to belong to the other two cases. Then, we have


$$
rc_{ij}^{l,E} = \sum_{h=1}^{q} \bar{\lambda}_h^a \cdot rc_{ij}^{h,N} + \sum_{h=1}^{q} \bar{\lambda}_h^a \cdot rc_{ij}^{h,P}
\qquad (3.28)
$$

where $rc_{ij}^{l,E}$ is the updated value of $r_{ij}^{l,E}$, and $\bar{\lambda}_h^a$ is the adjusted weight of $e_h$ such that

$$
\bar{\lambda}_h^a = \frac{\lambda_h^a}{\sum_{h=1}^{q} \lambda_h^a}, \quad \text{for } \Gamma_h(r_{ij}^{h,Case}) \neq Empty.
\qquad (3.29)
$$

For the element in the position $(x_i, a_j)$ that belongs to Case Positive, we set $rc_{ij}^{l,P} = r_{ij}^{l,P}$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$. By using the above complementary method, all the positions in each DM's opinion are assigned values. Let the fused individual opinions be $RC^l = (rc_{ij}^{l,Case})_{m \times n}$, $l = 1, 2, \ldots, q$, in which $rc_{ij}^{l,Case}$ has been discussed above. Algorithm 1 presents the procedure for fusing SH evaluation information by using the proposed trust and behavior analysis-based (TBA) method.

Algorithm 1 Procedure of trust and behavior analysis-based fusion method

Require: Comprehensive individual opinions $CV^l$, $l = 1, 2, \ldots, q$; trust sociomatrices $TSoM^{E \to N}$, $TSoM^{E \to P}$, $TSoM^{N \to P}$; the weight vector of attributes $\omega$.
Ensure: The group opinion and the ranking of alternatives.
1: Use the method in [8] to normalize the individual opinions into $CR^l$, $l = 1, 2, \ldots, q$.
2: Apply Eq. (3.19) to calculate the distances among individual opinions.
3: Use Eq. (3.25) to compute the weights of DMs, denoted as $\lambda_l^a$, $l = 1, 2, \ldots, q$.
4: Use the complementary method presented in Sect. 3.3.4 to obtain the fused individual opinions, denoted as $RC^l$, $l = 1, 2, \ldots, q$.
5: Calculate the group opinion as $R^G = (r_{ij}^G)_{m \times n}$, where $r_{ij}^G = \sum_{l=1}^{q} \lambda_l^a \cdot rc_{ij}^{l,Case}$.
6: Obtain the evaluation values of the alternatives as $E(x_i) = \sum_{j=1}^{n} \omega_j \cdot r_{ij}^G$. Rank the alternatives based on the evaluation values and determine the best alternative.
7: End.
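Steps 4–6 of Algorithm 1 can be sketched as below. This is a simplified, hypothetical illustration, not the book's implementation: Negative positions are zeroed per Eqs. (3.26)–(3.27), Empty positions are filled from the non-Empty DMs with renormalized weights (our reading of Eqs. (3.28)–(3.29)), and the group opinion and attribute-weighted scores follow steps 5–6:

```python
import numpy as np

def tba_fuse_and_rank(values, cases, lam_a, omega):
    """values: list of q (m x n) opinion matrices (NaN where unassigned);
    cases: list of q (m x n) case labels ('P', 'N', 'E');
    lam_a: adjusted DM weights; omega: attribute weights."""
    values = [np.array(v, dtype=float) for v in values]
    cases = [np.array(c) for c in cases]
    lam_a = np.asarray(lam_a)
    q, (m, n) = len(values), values[0].shape

    # Eqs. (3.26)-(3.27): Negative positions get the minimum value 0
    for v, c in zip(values, cases):
        v[c == "N"] = 0.0

    # Eqs. (3.28)-(3.29): fill each Empty position from the non-Empty DMs,
    # renormalizing their adjusted weights over the non-Empty set
    for i in range(m):
        for j in range(n):
            known = [h for h in range(q) if cases[h][i, j] != "E"]
            if known and len(known) < q:
                w = lam_a[known] / lam_a[known].sum()
                fill = sum(wk * values[h][i, j] for wk, h in zip(w, known))
                for h in range(q):
                    if cases[h][i, j] == "E":
                        values[h][i, j] = fill

    # Steps 5-6: weighted group opinion and attribute-weighted scores
    group = sum(l * v for l, v in zip(lam_a, values))
    scores = group @ np.asarray(omega)
    return group, scores

# Hypothetical toy instance: 2 alternatives, 2 attributes, 3 DMs
values = [[[0.8, 0.6], [0.4, 0.7]],
          [[np.nan, 0.5], [np.nan, 0.9]],
          [[0.7, np.nan], [0.3, 0.6]]]
cases = [[["P", "P"], ["P", "P"]],
         [["E", "P"], ["N", "P"]],
         [["P", "E"], ["P", "P"]]]
group, scores = tba_fuse_and_rank(values, cases, [0.4, 0.3, 0.3], [0.5, 0.5])
print(scores.argmax())  # index of the best alternative
```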

3.4 Discussion and Comparative Analysis

This section first analyzes the proposed weight determination method for DMs. Then, a comparative analysis is presented. The data used in the numerical experiments are from Sect. 8.1.

Fig. 3.4 Weight distributions by using the proposed weight determination method under different decision scenarios

3.4.1 Further Analysis on the Calculation of the Weights of DMs

This chapter proposes a novel weight determination method for DMs that takes into account three parameters, namely the distances between individual opinions, the overall difference regarding the selection of all matrix elements, and the decision clarity index. We extend the method to the following categories for different decision scenarios:

(1) The weight calculation depends only on the distance measures between individual opinions.
(2) The weight calculation depends only on the overall difference regarding the selection of all matrix elements.
(3) Both of the above are involved, but the decision clarity index is not considered.
(4) Both of the above are involved, and the decision clarity index is then used to modify the obtained weights.

Figure 3.4 shows the weight distributions under different decision scenarios. Note that the symbols (1), (2), (3), and (4) in Fig. 3.4 represent the four categories of decision scenarios presented above. The following observations are made.

(1) Under different decision scenarios, the proposed weight determination method may result in different weight distributions. As shown in Fig. 3.4, if only the distances between individual opinions are involved, the maximum weight is that of DM $e_1$ (i.e., $\lambda_1 = 0.2692$). When the second type of decision scenario is used, $e_1$ still has the largest weight, but the value is reduced to 0.1848. The weight distribution is closely related to the final alternative ranking. Therefore, it is important to select the most suitable decision scenario to calculate the weights. In SH-MAGDM, DMs may have their individual sets of attributes and alternatives. Such heterogeneous selection behaviors should be considered as an important factor in the calculation of DMs' weights.

(2) The decision clarity index plays an important role in the weight calculation. As shown in Fig. 3.4, when using the third type of decision scenario, the ranking of DMs' weights is $\lambda_5 > \lambda_4 > \lambda_3 > \lambda_6 > \lambda_2 > \lambda_1$. However, when the decision clarity index is considered, the ranking changes to $\lambda_3^a > \lambda_1^a > \lambda_5^a > \lambda_4^a > \lambda_2^a > \lambda_6^a$. We find that the weight of $e_3$ increases significantly. This is because the decision clarity index of DM $e_3$ peaks at 1, as shown in Fig. 3.5, which indicates that he/she has a clear understanding of the decision problem. The weight of DM $e_3$ should be assigned a larger value in terms of the decision clarity.

Fig. 3.5 Decision clarity indexes of DMs

In the following, we examine the effect of trust on the distance measures among individual opinions and on the determination of the weight distribution. We choose the weight of DM $e_6$ as the observation sample because $e_6$ has the smallest decision clarity index (see Fig. 3.5). We set two decision situations: in the first, all the values in the three types of trust sociomatrices are set to 0; in the second, all the trust values are equal to 1. According to Eq. (3.5) (or Eq. (3.12)), the closer the trust value $tso_{lh}^{E \to N}$ (or $tso_{lh}^{E \to P}$) is to 1, the closer the distance $d(r_{ij}^{l,E}, r_{ij}^{h,N})$ (or $d(r_{ij}^{l,E}, r_{ij}^{h,P})$) is to 0. In this way, the distances between $e_6$ and the other DMs decrease, and as a consequence, the weight of $e_6$ increases. When all the trust values are set to 0, the opposite result occurs. When adopting the first decision situation, the weight of $e_6$ is 0.1184; if the second situation is used, the weight increases to 0.1191. This indicates that trust is an important factor affecting the distance measure of heterogeneous evaluation information and the weight assignment.


3.4.2 Dealing with Extreme Decision Situations

When using the proposed method to fuse heterogeneous information, some extreme decision situations should be addressed. Here, we list two extreme situations.

Example 3.3 Given an SH-MAGDM problem that consists of three DMs (labeled as $e_1, e_2, e_3$), two benefit attributes (labeled as $a_1, a_2$), and three alternatives (labeled as $x_1, x_2, x_3$), the normalized individual opinions are shown as follows:

$$
R^1 = \begin{pmatrix} 1^P & 1^P \\ 0^P & 1^P \\ 1^P & 0^P \end{pmatrix}, \quad
R^2 = \begin{pmatrix} \varnothing^E & \varnothing^E \\ \varnothing^E & \varnothing^E \\ \varnothing^E & \varnothing^E \end{pmatrix}, \quad
R^3 = \begin{pmatrix} 0^P & 0^P \\ 1^P & 0^P \\ 0^P & 1^P \end{pmatrix}.
$$

Example 3.4 (Continuing Example 3.3). Let the normalized opinions of $e_1$ and $e_3$ be the same as in Example 3.3. The normalized opinion of $e_2$ is changed to

$$
R^2 = \begin{pmatrix} \varnothing^N & \varnothing^N \\ \varnothing^N & \varnothing^N \\ \varnothing^N & \varnothing^N \end{pmatrix}.
$$

Tables 3.5 and 3.6 show the distributions of the DMs' weights obtained by using Eq. (3.25) for the above two examples. In Example 3.3, we find that whether all trust values are set to 0 or 1, the weight of $e_2$ is always equal to 0. In general, it only makes sense for a DM to have a weight greater than 0. In Example 3.4, although no DM has a weight of 0, DM $e_2$ is completely different from the other two DMs in terms of selecting attributes and alternatives, which will also adversely affect the decision. We consider the following three aspects to solve the contradictions in Examples 3.3 and 3.4.

(1) If possible, the DM in the above two examples can withdraw from the decision. In particular, as shown in Example 3.4, $e_2$ has little knowledge of the decision problem and therefore cannot assign any values to the alternatives with respect to the attributes.

Table 3.5 Weights of DMs obtained by using Example 3.3

                                 λ_1^a    λ_2^a    λ_3^a
All trust values are set as 0    0.5      0        0.5
All trust values are set as 1    0.5      0        0.5

Table 3.6 Weights of DMs obtained by using Example 3.4

                                 λ_1^a    λ_2^a    λ_3^a
All trust values are set as 0    0.3077   0.3846   0.3077
All trust values are set as 1    0.3077   0.3846   0.3077


(2) The original attributes and alternatives in the examples should be reconstructed. DMs reevaluate the attributes and alternatives that are judged to belong to Cases Empty and Negative. The purpose is to effectively increase the number of matrix elements in the updated opinions that belong to Case Positive.
(3) More in-depth discussions can be held among DMs to maximize their knowledge of most or even all attributes and alternatives and to avoid the situation where some attributes and alternatives cannot be selected for evaluation.

To reduce the probability of the above extremes, a solution should be designed around the three elements of the decision problem (i.e., alternatives, attributes, and DMs). The decision organization first pre-selects possible alternatives and removes those that are irrelevant or poorly performing. Then, DMs familiar with the decision field are invited, and the trust relationships between them are accounted for. After discussion and negotiation, the DMs come up with a set of attributes that they agree on. After the sets of attributes and DMs are determined, the DMs can further screen the alternatives to improve decision-making efficiency.

3.4.3 Comparative Analysis

To handle the SH-MAGDM problem with individual sets of attributes and alternatives, Dong et al. [19] proposed a resolution framework that allows DMs to dynamically adjust their individual attributes and alternatives. The framework is divided into two stages: the selection process and the consensus process. Its main feature is the position that, in the decision process, it is not necessary to reach a consensus on the use of the set of attributes as long as the majority accept the best alternative. To ensure comparability, we set the following requirements.

(i) DMs can independently choose their individual sets of attributes and alternatives.
(ii) The weights of the attributes are given in advance. All DMs share the same weight distribution of the attributes.
(iii) For an ordered attribute set (or alternative set), we determine the two parameters by the one-quarter point and the three-quarter point, which are used to divide the importance of the attributes or alternatives.
(iv) As Dong et al. [19] did not consider selection behaviors, we set the values of the matrix elements that belong to Case Empty or Case Negative to 0.

The following two stages are involved.

Stage 1: Selection process. First of all, we obtain the initial rankings of individual alternatives, as shown in Table 3.7. The initial collective set of alternatives is obtained by $X^{(c,0)} = X^{(1,0)} \cup \cdots \cup X^{(6,0)}$, which is presented in Table 3.8, where $X^{(l,0)}$ represents the individual set of alternatives for DM $e_l$ in the initial consensus iteration. Similarly, the collective set of attributes is obtained by $A^{(c,0)} = A^{(1,0)} \cup \cdots \cup A^{(6,0)}$ (see Table 3.9), where $A^{(l,0)}$ is the individual set of attributes for $e_l$. $R^{(c,0)}(x_i^{(c,0)})$ denotes the number of DMs who select alternative $x_i^{(c,0)} \in X^{(c,0)}$ as the best alternative. According to the result in Table 3.10, we obtain the ranking $o^{(c,0)} = (2, 6, 6, 6, 2, 2, 2, 6, 6, 1)^T$. Therefore, the best alternative for the group is $x_{10}^{(c,0)} = x_{10}$.


Table 3.7 Initial rankings of individual alternatives

o^(1,0): (7, 8, 9, 6, 2, 4, 10, 3, 1)
o^(2,0): (6, 3, 10, 4, 9, 1, 5)
o^(3,0): (10, 6, 1, 4, 9, 3, 5, 8, 7)
o^(4,0): (1, 5, 10, 6, 4, 3, 9)
o^(5,0): (5, 6, 10, 1, 5, 9, 3)
o^(6,0): (10, 8, 5, 6, 3, 1, 9, 2, 7, 4)

Table 3.8 The collective set of alternatives

X^(c,0) = {x_1^(c,0), x_2^(c,0), ..., x_10^(c,0)}, where x_i^(c,0) = x_i for i = 1, 2, ..., 10.

Table 3.9 The collective set of attributes

A^(c,0) = {a_1^(c,0), a_2^(c,0), ..., a_8^(c,0)}, where a_j^(c,0) = a_j for j = 1, 2, ..., 8.

(c,0)

Table 3.10 The . R (c,0) (xi .R

(c,0) (x (c,0) ) i

)(i = 1, . . . , 10)

(c,0) .x1

.x2

(c,0)

.x3

(c,0)

.x4

(c,0)

.x5

(c,0)

.x6

(c,0)

.x7

(c,0)

.x8

(c,0)

.x9

(c,0)

. x 10

(c,0)

1

0

0

0

1

1

1

0

0

2
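The counts in Table 3.10 and the resulting ranking $o^{(c,0)}$ can be reproduced by tallying each DM's best alternative (the first entry of its initial ranking in Table 3.7) and applying competition-style ranking, as sketched below:

```python
# Each DM's best alternative: the first entry of o^(l,0) in Table 3.7
best_choices = [7, 6, 10, 1, 5, 10]

# R^(c,0)(x_i): number of DMs selecting x_i as the best alternative
counts = [best_choices.count(i) for i in range(1, 11)]

# Competition ranking: 1 + number of alternatives with a strictly larger count
ranking = [1 + sum(c > counts[i] for c in counts) for i in range(10)]
print(counts)   # [1, 0, 0, 0, 1, 1, 1, 0, 0, 2]
print(ranking)  # [2, 6, 6, 6, 2, 2, 2, 6, 6, 1]
```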

Stage 2: Consensus-reaching process. By using the definition in [19], the current consensus level is calculated as $cl^0 = 1/3$. Then, a feedback procedure is applied to improve the consensus level. We first calculate the importance degrees of attributes and alternatives, as shown in Tables 3.11 and 3.12, where the attributes and alternatives are classified into three types and those with a low degree of importance are identified. DMs who have these attributes or alternatives are advised to remove them. As a consequence, alternatives $x_2, x_7, x_8$ and attributes $a_1, a_7$ are removed from the decision. Then, DMs are suggested to increase the evaluation of alternative $x_{10}$ so as to achieve a higher level of consensus on $x_{10}$.

Table 3.11 Importance degrees of attributes A^(c,0)

a_j^(c,0)            a1      a2      a3      a4      a5      a6      a7      a8
Id^(c,0)(a_j^(c,0))  0.1916  0.5697  0.6884  0.5507  0.3949  0.7051  0.1916  0.5697

Table 3.12 Importance degrees of alternatives X^(c,0)

x_i^(c,0)            x1   x2      x3   x4   x5      x6   x7      x8      x9   x10
Id^(c,0)(x_i^(c,0))  1    0.1391  1    1    0.7422  1    0.2578  0.1391  1    1

We see that although the optimal alternative obtained by the two methods is the same, there are significant differences in the processing of heterogeneous evaluation information:

(1) Dong et al. [19] chose the optimal collective solution based on the number of DMs who agreed that it was the best one, not based on the weights of DMs. In practical decision-making, DMs usually have different interests and levels of understanding of the decision problem, so they may be given different weights. In view of this, this chapter proposes a complement method that uses the weighted average operator to collect the individual opinions.

(2) In Ref. [19], attributes determined to have a low degree of importance are removed in the next round of consensus iteration. Although this does not affect the choice of the optimal solution, the removal of a component of a decision problem may reduce decision satisfaction. In this study, Case Negative means that a certain DM considers that a certain position should be removed. Instead of performing the deletion, we assign a minimum value of 0 to the corresponding position according to each attribute. This ensures that the relevant alternative is not selected as the best one yet is not removed.

(3) The essential difference between the two methods is that they deal with heterogeneous evaluation information from different perspectives. Dong et al. [19] argued that DMs need not reach a consensus on the use of the set of attributes, and therefore proposed a consensus-reaching model in which the individual alternatives and attributes can be dynamically adjusted. Our method develops a trust and behavior analysis-based fusion method, which considers the trust relationships and selection behaviors among DMs as the key to dealing with heterogeneous evaluation information.

3.5 Conclusions

We propose a trust and behavior analysis-based fusion method for SH-MAGDM scenarios with individual sets of attributes and alternatives. The main contributions of this chapter are summarized below.

(1) The DMs' selection behaviors for attributes and alternatives in SH-MAGDM problems are analyzed and classified into three categories: Empty, Negative,


and Positive. Several propositions for determining which case a matrix element belongs to are provided.
(2) This chapter uses trust relationships and reference selection to assist the distance measures of heterogeneous evaluation information. Three types of trust sociomatrices are presented according to different cases of selection behaviors. Three situations in which references are needed to calculate the distance are elaborated.
(3) A novel weight-determination method for DMs is proposed, which combines three parameters: the distances between DMs, the overall difference between DMs regarding the selection of all matrix elements, and the decision clarity index. The decision clarity index is defined to adjust the obtained weights from the perspective of the clarity of evaluation information.
(4) A trust and behavior analysis-based fusion method for SH-MAGDM problems is developed, in which a complement method is designed to populate the positions that are not assigned values.

The following extensions can be considered in the future:

(1) It is worth extending the TBA fusion method to large-scale decision-making (LSDM) scenarios. Due to the complexity of real-world decisions, LSDM has attracted extensive attention in the field of decision science [51–54]. When addressing SH-MAGDM problems involving large-scale DMs, the TBA fusion method may need to be improved, for example, to handle the clustering of SH evaluation information.
(2) The consensus building of SH evaluation information is another issue worth discussing. Obtaining a decision result with a sufficiently high group consensus can ensure the decision quality.

References

1. Kim, S. H., Choi, S. H., & Kim, J. K. (1999). An interactive procedure for multiple attribute group decision making with incomplete information: Range-based approach. European Journal of Operational Research, 118(1), 139–152.
2. Liu, P., Chen, S. M., & Wang, P. (2018). Multiple-attribute group decision-making based on q-Rung orthopair fuzzy power maclaurin symmetric mean operators. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50(10), 3741–3756.
3. Pang, J., & Liang, J. (2012). Evaluation of the results of multi-attribute group decision-making with linguistic information. Omega, 40(3), 294–301.
4. Wang, J. Q., Yu, S. M., Wang, J., Chen, Q. H., Zhang, H. Y., & Chen, X. H. (2015). An interval type-2 fuzzy number based approach for multi-criteria group decision-making problems. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 23(4), 565–588.
5. Xu, Z. (2007). Multiple-attribute group decision making with different formats of preference information on attributes. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(6), 1500–1511.
6. Yu, G. F., Li, D. F., & Fei, W. (2018). A novel method for heterogeneous multi-attribute group decision making with preference deviation. Computers & Industrial Engineering, 124, 58–64.


7. Gao, Y., & Li, D. S. (2019). A consensus model for heterogeneous multi-attribute group decision making with several attribute sets. Expert Systems with Applications, 125, 69–80. 8. Xu, Z. S. (2009). An automatic approach to reaching consensus in multiple attribute group decision making. Computer & Industrial Engineering, 56(4), 1369–1374. 9. Yu, S., Du, Z., Wang, J., Luo, H., & Lin, X. (2021). Trust and behavior analysis-based fusion method for heterogeneous multiple attribute group decision-making. Computers & Industrial Engineering, 152, 106992. 10. Palomares, I., Rodríguez, R. M., & Martínez, L. (2013). An attitude-driven web consensus support system for heterogeneous group decision making. Expert Systems with Applications, 40(1), 139–149. 11. Chen, X., Zhang, H., & Dong, Y. (2015). The fusion process with heterogeneous preference structures in group decision making: A survey. Information Fusion, 24, 72–83. 12. Zhang, X., Xu, Z., & Wang, H. (2015). Heterogeneous multiple criteria group decision making with incomplete weight information: A deviation modeling approach. Information Fusion, 25, 49–62. 13. Zhang, F., Ignatius, J., Zhao, Y., Lim, C. P., Ghasemi, M., & Ng, P. S. (2015). An improved consensus-based group decision making model with heterogeneous information. Applied Soft Computing, 35, 850–863. 14. Li, G., Kou, G., & Peng, Y. (2016). A group decision making model for integrating heterogeneous information. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 48(6), 982–992. 15. Zhang, B., Dong, Y., & Herrera-Viedma, E. (2019). Group decision making with heterogeneous preference structures: An automatic mechanism to support consensus reaching. Group Decision and Negotiation, 28(3), 585–617. 16. Liang, Y., Qin, J., Martínez, L., & Liu, J. (2020). A heterogeneous QUALIFLEX method with criteria interaction for multi-criteria group decision making. Information Sciences, 512, 1481–1502. 17. Zhang, F., Ju, Y., Gonzalez, E. D. 
S., Wang, A., Dong, P., & Giannakis, M. (2021). Evaluation of construction and demolition waste utilization schemes under uncertain environment: A fuzzy heterogeneous multi-criteria decision-making approach. Journal of Cleaner Production, 313, 127907. 18. Kou, G., Peng, Y., Chao, X., Herrera-Viedma, E., & Alsaadi, F. E. (2021). A geometrical method for consensus building in GDM with incomplete heterogeneous preference information. Applied Soft Computing, 105, 107224. 19. Dong, Y., Zhang, H., & Herrera-Viedma, E. (2016). Consensus reaching model in the complex and dynamic MAGDM problem. Knowledge-Based Systems, 106, 206–219. 20. Lourenzuttia, R., & Krohling, R. A. (2016). A generalized TOPSIS method for group decision making with heterogeneous information in a dynamic environment. Information Sciences, 330, 1–18. 21. Du, Z. J., Luo, H. Y., Lin, X. D., & Yu, S. M. (2020). A trust-similarity analysis-based clustering method for large-scale group decision-making under a social network. Information Fusion, 63, 13–29. 22. Luo, H., Cheng, S., Zhou, W., Yu, S., & Lin, X. (2021). A study on the impact of linguistic persuasive styles on the sales volume of live streaming products in social E-commerce environment. Mathematics, 9(13), 1576. 23. Wu, J., Chiclana, F., Fujita, H., & Herrera-Viedma, E. (2017). A visual interaction consensus model for social network group decision making with trust propagation. Knowledge-Based Systems, 122, 39–50. 24. Liu, Y., Liang, C., Chiclana, F., & Wu, J. (2017). A trust induced recommendation mechanism for reaching consensus in group decision making. Knowledge-Based Systems, 119, 221–231. 25. Wu, J., Chiclana, F., & Herrera-Viedma, E. (2015). Trust based consensus model for social network in an incomplete linguistic information context. Applied Soft Computing, 35, 827– 839.


26. Dong, Y., Ding, Z., Martínez, L., & Herrera, F. (2017). Managing consensus based on leadership in opinion dynamics. Information Sciences, 397, 187–205. 27. Dong, Y., Zha, Q., Zhang, H., Kou, G., Fujita, H., Chiclana, F., & Herrera-Viedma, E. (2018). Consensus reaching in social network group decision making: Research paradigms and challenges. Knowledge-Based Systems, 162, 3–13. 28. Du, Z. J., Yu, S. M., Luo, H. Y., & Lin, X. D. (2021). Consensus convergence in large-group social network environment: Coordination between trust relationship and opinion similarity. Knowledge-Based Systems, 217, 106828. 29. Wu, J., Dai, L. F., Chiclana, F., Fujita, H., & Herrera-Viedma, E. (2018). A minimum adjustment cost feedback mechanism based consensus model for group decision making under social network with distributed linguistic trust. Information Fusion, 41, 232–242. 30. Zhang, Z., Gao, Y., & Li, Z. (2020). Consensus reaching for social network group decision making by considering leadership and bounded confidence. Knowledge-Based Systems, 204, 106240. 31. Yu, S. M., Du, Z. J., Lin, X. D., Luo, H. Y., & Wang, J. Q. (2020). A stochastic dominancebased approach for hotel selection under probabilistic linguistic environment. Mathematics, 8(9), 1525. 32. Yu, S. M., Wang, J., & Wang, J. Q. (2017). An interval type-2 fuzzy likelihood-based MABAC approach and its application in selecting hotels on a tourism website. International Journal of Fuzzy Systems, 19(1), 47–61. 33. Zhang, X., Cui, L., & Wang, Y. (2013). Commtrust: Computing multi-dimensional trust by mining e-commerce feedback comments. IEEE Transactions on Knowledge and Data Engineering, 26(7), 1631–1643. 34. Chu, J., Wang, Y., Liu, X., & Liu, Y. (2020). Social network community analysis based largescale group decision making approach with incomplete fuzzy preference relations. Information Fusion, 60, 98–120. 35. Wu, T., Zhang, K., Liu, X., & Cao, C. (2019). 
A two-stage social trust network partition model for large-scale group decision-making problems. Knowledge-Based Systems, 163, 632–643. 36. Yu, S. M., Du, Z. J., Zhang, X., Luo, H., & Lin, X. (2022). Trust Cop-Kmeans clustering analysis and minimum-cost consensus model considering voluntary trust loss in social network large-scale decision-making. IEEE Transactions on Fuzzy Systems, 30(7), 2634–2648. 37. Yu, S. M., Du, Z. J., Zhang, X. Y., Luo, H. Y., & Lin, X. D. (2021). Punishment-driven consensus reaching model in social network large-scale decision-making with application to social capital selection. Applied Soft Computing, 113, 107912. 38. Luo, H., Cheng, S., Zhou, W., Song, W., Yu, S., & Lin, X. (2021). Research on the impact of online promotions on consumers’ impulsive online shopping intentions. Journal of Theoretical and Applied Electronic Commerce Research, 16(6), 2386–2404. 39. Nikbin, D., Aramo, T., Iranmanesh, M., & Ghobakhloo, M. (2022). Impact of brands ‘Facebook page characteristics and followers’ comments on trust building and purchase intention: Alternative attractiveness as moderator. Journal of Consumer Behaviour, 21(3), 494–508. 40. Sparks, B. A., & Browning, V. (2011). The impact of online reviews on hotel booking intentions and perception of trust. Tourism Management, 32(6), 1310–1323. 41. Yu, S. M., Wang, J., Wang, J. Q., & Li, L. (2018). A multi-criteria decision-making model for hotel selection with linguistic distribution assessments. Applied Soft Computing, 67, 741–755. 42. Chen, Y., Zhong, Y., Yu, S., Xiao, Y., & Chen, S. (2022). Exploring bidirectional performance of hotel attributes through online reviews based on sentiment analysis and Kano-IPA model. Applied Sciences, 12(2), 692. 43. Victor, P., Cornelis, C., De Cock, M., & Teredesai, A. M. (2011). Trust-and distrust-based recommendations for controversial reviews. IEEE Intelligent Systems, 26(1), 48–55. 44. Victor, P., Cornelis, C., De Cock, M., & da Silva, P. P. (2009). 
Gradual trust and distrust in recommender systems. Fuzzy Sets & Systems, 160(10), 1367–1382. 45. Yu, S., Zhang, X., Du, Z., & Chen, Y. (2023). A new multi-attribute decision making method for overvalued star ratings adjustment and its application in new energy vehicle selection. Mathematics, 11(9), 2037.


46. Yu, S. M., Zhang, H. Y., & Wang, J. Q. (2018). Hesitant fuzzy linguistic Maclaurin symmetric mean operators and their applications to multi-criteria decision-making problem. International Journal of Intelligent Systems, 33(5), 953–982. 47. Yu, S. M., Zhou, H., Chen, X. H., & Wang, J. Q. (2015). A multi-criteria decision-making method based on Heronian mean operators under a linguistic hesitant fuzzy environment. Asia-Pacific Journal of Operational Research, 32(5), 1550035. 48. Xu, X. H., Du, Z. J., & Chen, X. H. (2015). Consensus model for multi-criteria large-group emergency decision making considering non-cooperative behaviors and minority opinions. Decision Support Systems, 79, 150–160. 49. Du, Z. J., Yu, S. M., & Xu, X. H. (2020). Managing noncooperative behaviors in large-scale group decision-making: Integration of independent and supervised consensus-reaching models. Information Sciences, 531, 119–138. 50. Pang, Q., Wang, H., & Xu, Z. (2016). Probabilistic linguistic term sets in multi-attribute group decision making. Information Sciences, 369, 128–143. 51. Ding, R. X., Palomares, I., Wang, X., Yang, G. R., Liu, B., Dong, Y., & Herrera, F. (2020). Large-Scale decision-making: Characterization, taxonomy, challenges and future directions from an Artificial Intelligence and applications perspective. Information Fusion, 59, 84–102. 52. Zhong, X., Xu, X., & Yin, X. (2021). A multi-stage hybrid consensus reaching model for multiattribute large group decision-making: Integrating cardinal consensus and ordinal consensus. Computers & Industrial Engineering, 158, 107443. 53. Zhu, G. J., Cai, C. G., Pan, B., & Wang, P. (2021). A multi-agent linguistic-style large group decision-making method considering public expectations. International Journal of Computational Intelligence Systems, 14(1), 1–13. 54. Zuo, W. J., Li, D. F., Yu, G. F., & Zhang, L. P. (2019). 
A large group decision-making method and its application to the evaluation of property perceived service quality. Journal of Intelligent & Fuzzy Systems, 37(1), 1513–1527.

Chapter 4

Trust Cop-Kmeans Clustering Method

Abstract Social network large-scale decision-making (SNLSDM) has attracted widespread attention in the field of decision science. Clustering is one of the most important processes for solving SNLSDM problems. Traditional clustering methods are usually based solely on the similarity of opinions. When extending decision-making to social network contexts, it is believed that social relationships among decision-makers (DMs) should serve as a reliable resource for clustering. We hold that low-trust DMs are not suitable to be assigned to the same subgroup, even though their opinions/preferences are sufficiently similar. This chapter proposes a trust-constrained K-means clustering algorithm, which is a semi-supervised clustering technique. It can overcome the defect of grouping low-trust DMs caused by traditional clustering methods based on similarity measures. A comparative analysis is implemented to explore the influence of trust constraints on clustering.

Keywords Social network large-scale decision-making (SNLSDM) · Trust-constrained K-means clustering (TCop-Kmeans) · Must-link (ML) · Cannot-link (CL) · Trust-similarity analysis

4.1 Trust-Similarity Analysis-Based Decision Information Processing

Social network large-scale decision-making (SNLSDM) is defined as a decision situation in which large-scale decision-makers (DMs) connected via a social network seek to achieve a common solution from a set of alternatives. An SNLSDM problem is given as follows:

• A set of alternatives, denoted as X = {x_1, x_2, ..., x_m} (m ≥ 2), which represent possible solutions to the problem.
• A set of DMs, denoted as E = {e_1, e_2, ..., e_q} (q ≥ 20), who express their preferences on the given alternatives.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_4


• Let P^l = (p^l_ij)_{m×m} be a fuzzy preference relation (FPR) provided by DM e_l ∈ E, where p^l_ij ∈ [0, 1] indicates the preference degree of x_i over x_j, such that p^l_ij + p^l_ji = 1 and p^l_ii = 0.5, for all i, j = 1, 2, ..., m. If p^l_ij > 0.5, then x_i is preferred to x_j; the larger p^l_ij, the greater the preference degree of x_i over x_j; if p^l_ij = 0.5, then there is no difference between x_i and x_j. An FPR can be considered efficient if the number of alternatives is small [1]. In this chapter, we use five alternatives in the illustrative example.
• The trust statements between DMs are gathered in TSoM = (tso_lh)_{q×q}, where tso_lh ∈ [0, 1] is the trust degree e_l assigns to e_h. We assume that a complete trust sociomatrix (still denoted by TSoM) has been obtained.

Before implementing clustering, we use the trust-similarity analysis in Ref. [2] to process the original information, including the following steps:

• Step 1: Compute the similarity degree between any two individual preferences,
• Step 2: Calculate the average trust degree of any two DMs, and
• Step 3: Establish an undirected sociomatrix and a similarity matrix.

Most studies on reducing the dimension of large-scale DMs are based on the clustering of individual preferences/opinions [1, 3–5]. Therefore, we introduce the concept of the similarity degree between two FPRs by using the distance between them.

Definition 4.1 ([6]) Let P^l and P^h be the preferences provided by DMs e_l and e_h, respectively; then, the similarity degree between P^l and P^h is defined as

sd_lh = 1 − d(P^l, P^h) = 1 − (1 / (m(m − 1)/2)) Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} |p^l_ij − p^h_ij|    (4.1)

Clearly, 0 ≤ sd_lh ≤ 1. The closer sd_lh is to 1, the more similar P^l and P^h are. Then, a similarity matrix regarding the q DMs is established as SM = (sd_lh)_{q×q}. By fusing the associated trust degree and similarity degree, the trust-similarity function can be obtained.

Definition 4.2 ([7]) Let the tuple θ = (tso, sd) be a trust-similarity function (TSF), in which tso is a trust degree and sd is a similarity degree such that tso, sd ∈ [0, 1]. The set of TSFs is denoted by Θ = {θ = (tso, sd) | tso, sd ∈ [0, 1]}.

By applying the TSF to a GDM problem, a trust-similarity matrix is established, denoted by TSM = (TSF_lh)_{q×q}, where TSF_lh = (tso_lh, sd_lh) is the TSF from DM e_l to DM e_h. The first component tso_lh represents the trust degree that e_l assigns to e_h, and the second component sd_lh is the similarity degree between P^l and P^h.

Note 4.1 The initial trust statements of DMs are directed. In other words, tso_lh is not necessarily equal to tso_hl. If one DM does not trust the other, even if their preferences are sufficiently similar, it may adversely affect the formation of a unified opinion. For example, e_l trusts e_h greatly, but e_h does not trust e_l so much. As a result, the question is whether e_l and e_h can be assigned to the same cluster. From the


perspective of e_l, the answer is yes, but for e_h, the opposite holds. Because e_h distrusts e_l, e_h is unwilling to let e_l's preference be reflected in the collective preference.
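As a concrete illustration, Definition 4.1 can be transcribed into a short function. This is a minimal sketch (assuming NumPy and square FPR matrices given as nested lists or arrays; the function name is ours):

```python
import numpy as np

def similarity_degree(P_l, P_h):
    """Similarity degree between two fuzzy preference relations, per Eq. (4.1):
    one minus the mean absolute difference over the m(m-1)/2 strictly
    upper-triangular entries."""
    P_l, P_h = np.asarray(P_l, dtype=float), np.asarray(P_h, dtype=float)
    m = P_l.shape[0]
    iu = np.triu_indices(m, k=1)  # strictly upper triangle
    dist = np.abs(P_l[iu] - P_h[iu]).sum() / (m * (m - 1) / 2)
    return 1.0 - dist
```

The similarity matrix SM is then obtained by evaluating this function for every pair of DMs.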

In this chapter, we argue that the trust relationships between DMs can be used as a clustering constraint (what we call the trust constraint) to prevent two low-trust DMs from being assigned to the same cluster. Conversely, if the trust between two DMs is significantly high, they may be suggested to be classified into the same cluster. To measure the trust constraint between two DMs, we define the average trust degree by using trust aggregation. Without loss of generality, the weighted averaging (WA) operator is utilized to calculate the average trust degree of e_l and e_h, as follows:

atso_lh = WA(tso_lh, tso_hl) = w^T_{l,hl} · tso_lh + w^T_{h,lh} · tso_hl    (4.2)

where w^T_{l,hl} and w^T_{h,lh} represent the weights of e_l and e_h, respectively, depending on the trust degrees of the two DMs, such that 0 ≤ w^T_{l,hl}, w^T_{h,lh} ≤ 1 and w^T_{l,hl} + w^T_{h,lh} = 1. These weights can be calculated by using Yager's ordered weighted averaging (OWA)-based procedure guided by trust degree [7, 8]. We hold that the high-trust DM should be given a greater weight in the calculation of the average trust degree. The quantifier in this chapter is selected as 'the most'. Let Q be the basic unit-interval monotone (BUM) membership function of the fuzzy quantifier that implements the mapping Q : [0, 1] → [0, 1], such that Q(0) = 0, Q(1) = 1, and if x > y then Q(x) ≥ Q(y). The weight of e_l relative to the average trust degree is computed as

w^T_{ϑ(μ),l} = Q(tso(ϑ(μ)) / tso(ϑ(2))) − Q(tso(ϑ(μ−1)) / tso(ϑ(2)))    (4.3)

where tso(ϑ(μ)) = Σ_{x=1}^{μ} tso(ϑ(x)), and ϑ is a permutation such that tso(ϑ(x)) is the x-th largest value of the set {tso_lh, tso_hl}. Then, let w^T_{l,hl} = w^T_{ϑ(μ),l}. This chapter defines the function Q as Q(r) = √r. Clearly, the following properties are satisfied: (i) 0 ≤ atso_lh ≤ 1; (ii) atso_lh = atso_hl; and (iii) min{tso_lh, tso_hl} ≤ atso_lh ≤ max{tso_lh, tso_hl}. In this way, the directed trust degrees of pairwise DMs are converted to the corresponding undirected ones.

Definition 4.3 Let TSoM = (tso_lh)_{q×q} be a complete directed sociomatrix; the undirected sociomatrix is denoted as UTSoM = (atso_lh)_{q×q}, where

atso_lh = atso_hl = WA(tso_lh, tso_hl)    (4.4)
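Equations (4.2)–(4.4) can be sketched as follows. This is a minimal illustration assuming Q(r) = √r; the exact weights depend on the chosen BUM function Q, so only the three stated properties are guaranteed here:

```python
import math

def average_trust_degree(tso_lh, tso_hl, Q=math.sqrt):
    """OWA-based average trust degree of a pair of DMs (Eqs. 4.2-4.3).
    The two directed trust values are sorted decreasingly; the cumulative
    trust drives the OWA weights through the BUM function Q."""
    a, b = sorted((tso_lh, tso_hl), reverse=True)
    total = a + b
    if total == 0:
        return 0.0
    w_a = Q(a / total)  # weight attached to the larger trust value
    w_b = 1.0 - w_a
    return w_a * a + w_b * b
```

By construction the result is symmetric in its arguments (property (ii)) and lies between the two directed trust degrees (property (iii)), so it can populate the undirected sociomatrix UTSoM of Definition 4.3.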

Then, the undirected trust-similarity matrix is obtained as ATSM = (ATSF_lh)_{q×q}, where ATSF_lh = (atso_lh, sd_lh) is the average trust-similarity function.

Example 4.1 A group decision-making event consists of four DMs {e_1, e_2, e_3, e_4} and three alternatives {x_1, x_2, x_3}. The DMs provide their preferences on the alternatives as


P^1 = | 0.5 0.1 0.3 |     P^2 = | 0.5 0.6 0.1 |
      | 0.9 0.5 0.3 |           | 0.4 0.5 0.2 |
      | 0.7 0.7 0.5 |           | 0.9 0.8 0.5 |

P^3 = | 0.5 0.6 0.2 |     P^4 = | 0.5 0.2 0.9 |
      | 0.4 0.5 0.5 |           | 0.8 0.5 0.1 |
      | 0.8 0.5 0.5 |           | 0.1 0.9 0.5 |

A complete sociomatrix is gathered as follows:

TSoM = |  −  0.2 0.4 0.8 |
       | 0.3  −  0.5 0.1 |
       | 0.6 0.2  −  0.6 |
       | 0.2 0.1 0.6  −  |

By using Eq. (4.1), the similarity matrix is obtained as

SM = |   −    0.8297 0.8268 0.7975 |
     | 0.8297   −    0.9293 0.7102 |
     | 0.8268 0.9293   −    0.7154 |
     | 0.7975 0.7102 0.7154   −    |

Therefore, the trust-similarity matrix can be obtained as

TSM = |      −        (0.2, 0.8297)  (0.4, 0.8268)  (0.8, 0.7975) |
      | (0.3, 0.8297)      −         (0.5, 0.9293)  (0.1, 0.7102) |
      | (0.6, 0.8268) (0.2, 0.9293)       −         (0.6, 0.7154) |
      | (0.2, 0.7975) (0.1, 0.7102)  (0.6, 0.7154)       −        |

By Eq. (4.2), the undirected trust-similarity matrix is obtained as

ATSM = |       −         (0.24, 0.8297)   (0.48, 0.8268)   (0.32, 0.7975) |
       | (0.24, 0.8297)        −          (0.2857, 0.9293) (0.1, 0.7102)  |
       | (0.48, 0.8268)  (0.2857, 0.9293)       −          (0.6, 0.7154)  |
       | (0.32, 0.7975)  (0.1, 0.7102)    (0.6, 0.7154)         −         |

4.2 Trust Cop-Kmeans Clustering Algorithm

With the undirected trust-similarity matrix as input, the trust-constrained K-means (TCop-Kmeans) clustering algorithm can be implemented. Traditional clustering methods take similarity as the only measurement attribute. This chapter introduces the trust relationship as an additional clustering attribute, which is called the trust constraint. We hold that the trust relationship should also be considered in clustering. The


trust constraint is based on the average trust degree, and we classify it into two kinds of constraints, namely must-link (ML) and cannot-link (CL). The ML constraint refers to the situation in which two DMs must be assigned to the same cluster if the associated average trust degree is sufficiently high (i.e., above a fixed threshold). The CL constraint specifies that a pair of DMs with a low average trust degree cannot be assigned to the same cluster.

Let the lower and upper trust-constraint thresholds be denoted as TC_L and TC_U, such that 0 ≤ TC_L ≤ 1 and 0 ≤ TC_U ≤ 1. If the average trust degree of a pair of DMs is less than TC_L, i.e., atso_lh < TC_L, then they do not satisfy the trust constraint; in other words, e_l and e_h are cannot-linked. If atso_lh > TC_U, the DMs satisfy the trust constraint; in other words, e_l and e_h are must-linked. Let Con = (Con_lh)_{q×q} be the constraint matrix containing the set of trust constraints, such that

Con_lh = {  1,  if atso_lh > TC_U;
           −1,  if atso_lh < TC_L;
            0,  if TC_L ≤ atso_lh ≤ TC_U }    (4.5)

Since Con is a symmetric matrix, we only consider the trust constraints in the upper triangular region of the matrix. To visualize the trust constraints among DMs, the constraint matrix can be transformed into an undirected graph G(E, V), where E = {e_1, e_2, ..., e_q} is the set of nodes and V = {(e_1, e_2), (e_1, e_3), ..., (e_{q−1}, e_q)} is the set of undirected edges corresponding to trust constraints; (e_l, e_h) represents an undirected link between e_l and e_h, l, h = 1, 2, ..., q. Let ξ be the number of pairs with Con_lh = −1, satisfying 0 ≤ ξ ≤ q(q − 1)/2.

Example 4.2 (Continuing with Example 4.1) Let TC_L = 0.25 and TC_U = 0.7. Through examining all average trust degrees, we have that atso_12 < TC_L, atso_24 < TC_L, and atso_14 > TC_L. That is, the cannot-link edges are (e_1, e_2) and (e_2, e_4). The following results can be calculated: (i) Con_12 = Con_21 = −1 and Con_24 = Con_42 = −1, and (ii) all the other values of Con_lh (l ≠ h) are equal to 0. Therefore, the constraint matrix is obtained as

Con = |  −  −1   0   0 |
      | −1   −   0  −1 |
      |  0   0   −   0 |
      |  0  −1   0   − |

Figure 4.1 visualizes the directed and undirected social networks presented in Example 4.1 using UCINET software. It should be noted that when UCINET software is executed for visualizing social networks, the trust degrees are rounded to one decimal place. Although the trust degree from e_2 to e_1 is greater than TC_L (see Fig. 4.1a), i.e., tso_21 > TC_L, the two DMs cannot be assigned to the same subgroup because the corresponding average trust degree is lower than TC_L (i.e., atso_12 < TC_L). As shown in Fig. 4.1b, cannot-link edges are marked in red and will be the focus of subsequent clustering operations.


Fig. 4.1 Visual social networks of the four DMs in Example 4.1: (a) a directed social network; (b) an undirected social network

Proposition 4.1 If ξ = q(q − 1)/2, then there is no case in which any two nodes (or DMs) are divided into the same subgroup. If ξ < q(q − 1)/2, then there may be two nodes (or DMs) that can be classified into the same subgroup.

Proof The equation ξ = q(q − 1)/2 means that any two nodes (or DMs) are cannot-linked. In other words, the average trust degree of any two DMs does not satisfy the trust constraint. Therefore, there is no situation in which two DMs are assigned to the same subgroup. The inequality ξ < q(q − 1)/2 indicates that there exist two DMs whose average trust degree satisfies the trust constraint. As long as their preferences are similar enough, they may be divided into the same subgroup. This completes the proof of Proposition 4.1. ∎

Definition 4.4 Let the undirected graph G(E, V) be as before, and let G'(E', V') be a new undirected graph. If E' ⊆ E and V' ⊆ V, then G' is called a subgraph of G.


Fig. 4.2 An example of cannot-link graph in a social network

Proposition 4.2 Suppose subgraph G' includes q' nodes and ξ' cannot-link edges. If ξ' = q'(q' − 1)/2, then no two of these nodes (or DMs) are assigned to the same subgroup.

Proposition 4.2 can be easily verified according to Proposition 4.1. It can be used to guide the determination of the number of clusters, which will be analyzed later. To assist in understanding the above propositions, we present three typical cannot-link cases, as depicted in Fig. 4.2. Suppose that the preferences of any two DMs are sufficiently similar.

• As shown in Fig. 4.2a, we have ξ = 2 < q(q − 1)/2. Thus, if we set K = 2, then two subgroups can be obtained, i.e., {e_1, e_3}, {e_2, e_4} or {e_1, e_4}, {e_2, e_3}.
• In Fig. 4.2b, there is a subgraph that satisfies ξ' = q'(q' − 1)/2. Thus, according to Proposition 4.2, e_1, e_2, and e_3 cannot be classified into the same subgroup.
• Figure 4.2c displays a graph that includes q = 4 nodes and ξ = 6 cannot-link edges. As ξ = q(q − 1)/2, no two DMs can form a subgroup.

In this chapter, trust constraints are used to guide the traditional Cop-Kmeans algorithm, producing the TCop-Kmeans algorithm presented below (labeled as Algorithm 2). violate-constraints() and process-violation() are two functions, in which e, C, Con, and CC are formal parameters. The formal parameters take specific values only when the functions are called during the execution of the TCop-Kmeans algorithm. In the function process-violation(), the third formal parameter CC is set to C according to process-violation(e_S, Con, C). The symbol CC means that DM e_S cannot be assigned to cluster C.

Remark 4.1 process-violation is a recursive function. The last parameter is not involved when process-violation is called for the first time. It is implemented again only if the objects to be reassigned also experience an unsuccessful assignment.
Yang et al.'s experiments [9] show that when the number of cannot-link constraints is sufficiently large, the failure proportion of the Cop-Kmeans algorithm will increase. Thus, we should control the number of cannot-links. On the other hand, a large


number of cannot-links indicates that the overall trust level among DMs is not high, which is not conducive to decision-making. It can be concluded that the trust constraint plays two roles in the clustering process: (i) one is to assist in generating the initial cluster centers, and (ii) the other is to determine whether two DMs extracted based on the similarity measure can/must be assigned to a cluster.

Algorithm 2 Procedure of the TCop-Kmeans algorithm
Require: FPRs P^l (l = 1, 2, ..., q), designated number of clusters K ≤ q, similarity matrix SM, undirected sociomatrix UTSoM, lower and upper trust-constraint thresholds TC_L, TC_U.
Ensure: Clusters C^1, ..., C^K.
1: Calculate the constraint matrix Con. Follow these rules: (1) If atso_lh < TC_L, then Con_lh = −1; (2) if atso_lh > TC_U, then Con_lh = 1; and (3) if TC_L ≤ atso_lh ≤ TC_U, then Con_lh = 0.
2: Initialize K cluster centers C^{1,z}, ..., C^{K,z}, where z is the number of iterations and is initialized as z = 0. As the TCop-Kmeans algorithm is sensitive to initial cluster centers, we use CL constraints to guide the initialization. Suppose Con' is a submatrix of Con containing q' (2 ≤ q' ≤ q) DMs. If the sum of all the elements in Con' is −q'(q' − 1)/2, then any two of the q' DMs are cannot-linked; thus, they can serve as initial cluster centers. If K > q', the max-min method [10] is used to obtain the remaining cluster centers.
3: Calculate the similarity degree between P^l and the cluster center preference H_k^z, and assign e_l to the closest center C^{k,z} such that violate-constraints(e_l, C^{k,z}, Con) is false. If no such cluster is found, fail (return {}).
4: For each cluster C^{k,z}, update its center by averaging all of the FPRs that have been assigned to it. The new cluster center preference is denoted by H_k^{z+1} = (h^{k,z+1}_{ij})_{m×m}, such that

h^{k,z+1}_{ij} = (1 / #C^{k,z}) Σ_{e_l ∈ C^{k,z}} p^l_ij,  ∀ i, j = 1, 2, ..., m    (4.6)

where #C^{k,z} is the number of DMs in cluster C^{k,z}.
5: Stop when C^{k,z+1} = C^{k,z}, k = 1, ..., K; otherwise, increase z by 1 and return to Step 3.
6: Output clusters C^1, ..., C^K.
7: End.

violate-constraints(DM e, cluster C, constraint matrix Con):
(1) If ∃ e_h in cluster C such that e and e_h are cannot-linked, then return true.
(2) If ∃ e_h not in cluster C such that e and e_h are must-linked, then return true.
(3) Otherwise, return false.
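The violate-constraints test can be sketched directly from the three rules above. This is a minimal transcription; representing cluster membership as a set of DM indices is an assumption of this sketch:

```python
def violate_constraints(e, cluster, Con):
    """Return True if assigning DM e to `cluster` breaks a trust constraint:
    a cannot-linked DM is already inside, or a must-linked DM is outside."""
    for h, c in enumerate(Con[e]):
        if h == e:
            continue
        if c == -1 and h in cluster:      # rule (1): cannot-link violated
            return True
        if c == 1 and h not in cluster:   # rule (2): must-link violated
            return True
    return False
```

In Step 3 of Algorithm 2, e_l is assigned to the closest center whose cluster passes this test.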

4.3 Determining the Weights of Clusters and DMs

Before entering the CRP, the weights of clusters and DMs should be calculated to obtain the initial group preference. Several studies used the size (namely, the number of DMs within a cluster) as the criterion for weight assignment, and stipulated that


the weights of the DMs in the same cluster should be equal [1, 4, 11]. However, this ignores the internal characteristics of a cluster, such as cohesion and trust relationships. Rodríguez et al. [12] considered both the size and cohesion to reflect the weights of clusters. Moreover, it is common to use the trust relationship as a reliable source for weight allocation in the context of social networks [2, 7, 13]. Thus, we adopt the following criteria to measure the importance weights of clusters:

• Size: the number of DMs in a cluster,
• Cohesion: the level of cohesiveness among DMs' preferences in a cluster,
• Overall trust degree: the average trust degree a cluster receives from the whole group.

The importance of a cluster is in accordance with the following three statements:

• The larger the size of the cluster, the more important it is;
• The more cohesive the cluster, the more important it is;
• The larger the overall trust degree, the more important it is.

Hence, the weight of a cluster will be based on the size, cohesion, and overall trust degree. The value of size is directly obtained from the clustering process. According to the statement on cohesion, we can use the similarity degree between DMs to measure this attribute.

Definition 4.5 Given cluster C^k including #C^k DMs, the cohesion of C^k is defined as follows:

coh_k = { (1 / (#C^k(#C^k − 1))) Σ_{e_l, e_h ∈ C^k, l ≠ h} sd_lh,  if #C^k > 1;
          1,  if #C^k = 1 }    (4.7)

Clearly, 0 ≤ coh_k ≤ 1. The larger the value of coh_k, the greater the cohesion of cluster C^k. The overall trust degree of a cluster indicates the average degree to which the cluster is trusted by the whole group.

Definition 4.6 Given cluster C^k and a complete trust sociomatrix TSoM, the overall trust degree of C^k is computed as follows:

otso_k = (1 / (#C^k(q − 1))) Σ_{l=1}^{q} Σ_{e_h ∈ C^k, h ≠ l} tso_lh    (4.8)
Clearly, 0 ≤ otso_k ≤ 1. The larger the value of otso_k, the greater the trust degree gained by cluster C^k. As mentioned, all of the criteria, namely, size, cohesion, and overall trust degree, have a positive effect on the weight allocation regarding clusters. We aggregate these three criteria to calculate the fusion value of a cluster as follows:

φ(C^k) = (#C^k / Σ_{k=1}^{K} #C^k)^{β_1} · (coh_k / Σ_{k=1}^{K} coh_k)^{β_2} · (otso_k / Σ_{k=1}^{K} otso_k)^{β_3}    (4.9)


where β_i is a weight parameter satisfying 0 ≤ β_i ≤ 1, i = 1, 2, 3, and Σ_{i=1}^{3} β_i = 1.

Note 4.2 To ensure the validity of Eq. (4.9), we stipulate that the values of the three items (i.e., #C^k / Σ_{k=1}^{K} #C^k, coh_k / Σ_{k=1}^{K} coh_k, and otso_k / Σ_{k=1}^{K} otso_k) cannot be equal to 0, which is in line with actual decision-making situations. It is evident that #C^k cannot be 0 for cluster C^k; similarly, coh_k should not be 0 because the clustering is based mainly on opinion similarity. Furthermore, otso_k = 0 would indicate that the trust value obtained by cluster C^k is 0. In an SNLSDM problem, complete distrust among clusters can greatly hinder the decision-making process.

The more important a criterion is for weight allocation, the smaller the value that will be assigned to the associated β_i. We find that the larger the value of each item, the greater the fusion value. This is consistent with the previous statements about the roles of size, cohesion, and overall trust degree in the weight assignment of clusters. Clearly, 0 ≤ φ(C^k) ≤ 1, k = 1, 2, ..., K. The greater the value of φ(C^k), the greater the weight that should be assigned to cluster C^k. By normalizing the fusion value φ(C^k), the weight of cluster C^k is obtained as

σ_k = φ(C^k) / Σ_{k=1}^{K} φ(C^k)    (4.10)
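The fusion and normalization of Eqs. (4.9)–(4.10) then reduce to a few lines. This is a minimal sketch; the equal β_i values in the test call are our assumption, not a recommendation from the text:

```python
def cluster_weights(sizes, cohesions, trusts, betas=(1/3, 1/3, 1/3)):
    """Fusion values phi(C^k) of Eq. (4.9) followed by the normalization of
    Eq. (4.10); all three item values are assumed strictly positive, as
    stipulated in Note 4.2."""
    b1, b2, b3 = betas
    S, C, T = sum(sizes), sum(cohesions), sum(trusts)
    phi = [(sz / S) ** b1 * (ch / C) ** b2 * (tr / T) ** b3
           for sz, ch, tr in zip(sizes, cohesions, trusts)]
    total = sum(phi)
    return [p / total for p in phi]
```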

Clearly, 0 ≤ σ_k ≤ 1 and Σ_{k=1}^{K} σ_k = 1. To better understand the weight allocation, we analyze the properties of Eq. (4.9). We define a simplified function f(x, y) = x^y, where x (0 < x ≤ 1) represents one of the three items in Eq. (4.9), and y (0 ≤ y ≤ 1) corresponds to the weight parameter β_i. Figure 4.3 shows the distribution of the function f(x, y). The first derivative of f(x, y) with respect to x is f'_x(x, y) = y·x^{y−1} > 0, and the second derivative is f''_xx(x, y) = y(y − 1)·x^{y−2} ≤ 0. The first derivative of f(x, y) with respect to y is f'_y(x, y) = x^y · ln x ≤ 0. We can conclude the following properties of f(x, y):

(a) If y is fixed, the larger the value of x, the larger the value of f(x, y).
(b) If x is fixed, the larger the value of y, the smaller the value of f(x, y).
(c) If y ≠ 0 and y ≠ 1, then as x increases from 0 to 1, the slope of the curve decreases.

The first property validates the previous statements about the positive effects of size, cohesion, and overall trust on weight allocation. The second property points in the direction of adjusting the importance of these three items. The third property requires that the values of all items meet certain requirements (e.g., being above a specified threshold). If the value of an item is too low, its impact on the weight allocation may be significantly reduced. As the DMs in the same cluster have similar opinions, we use the trust relationships between them to calculate their weights. The overall trust degree of e_l from the other DMs in cluster C^k is computed as follows:


Fig. 4.3 Distribution of the function f(x, y)

otso_{l,k} = (1 / (#C^k − 1)) Σ_{e_h ∈ C^k, h ≠ l} tso_lh    (4.11)

Clearly, 0 ≤ otso_{l,k} ≤ 1. By using the data in Ref. [14], Fig. 4.4a presents the weight changes of the five clusters under the condition that β_1 and β_2 vary randomly in the interval [0, 1] with β_1 + β_2 + β_3 = 1. It can be seen from Fig. 4.4b that the weight distributions of the clusters are very uneven when β_1 is close to 1; this is because the sizes of the clusters vary greatly. When β_2 or β_3 tends to 0, the weights do not differ much (see Fig. 4.4c and d); this is because the clusters do not differ much in terms of overall trust and cohesion. By using Yager's OWA-based procedure [8, 15], the weights of the DMs in cluster C^k are obtained as follows.

Definition 4.7 Given #C^k DMs in cluster C^k and the corresponding sociomatrix TSoM^k = (tso_lh)_{#C^k × #C^k}, the weight of DM e_l is computed as follows:

w_{τ(ν),l,k} = Q(otso(τ(ν)) / otso(τ(#C^k))) − Q(otso(τ(ν − 1)) / otso(τ(#C^k)))    (4.12)

where otso(τ(ν)) = Σ_{h=1}^{ν} otso_{τ(h),k}, and τ is a permutation such that otso_{τ(h),k} is the h-th largest value of the set {otso_{1,k}, ..., otso_{#C^k,k}}, and Q is a BUM membership function of the fuzzy quantifier used to implement the mapping Q : [0, 1] → [0, 1], such that Q(0) = 0 and Q(1) = 1, and if x > y, then Q(x) > Q(y). The quantifier is selected as 'the most'. Let w_l = w_{τ(ν),l,k} · σ_k, as before.
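Definition 4.7 can be sketched as follows. This is a minimal illustration assuming Q(r) = √r; the within-cluster trust degrees otso_{l,k} of Eq. (4.11) are the input, and the function name is ours:

```python
import math

def dm_weights(otso, Q=math.sqrt):
    """OWA-based weights of the DMs in a cluster (Eq. 4.12): members are
    ranked by their within-cluster trust degree, and the cumulative trust
    is pushed through the BUM function Q."""
    order = sorted(range(len(otso)), key=lambda l: -otso[l])
    total = sum(otso)
    weights = [0.0] * len(otso)
    prev = 0.0
    for l in order:
        cum = prev + otso[l]
        weights[l] = Q(cum / total) - Q(prev / total)
        prev = cum
    return weights
```

With a concave Q such as the square root, higher-trust DMs receive larger weights; multiplying by σ_k yields the global weight w_l.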


Fig. 4.4 Weights of clusters as β_i (i = 1, 2, 3) vary: (a) weight distributions when β_1 and β_2 vary; (b) cross section of Fig. 4.4a in terms of β_1; (c) cross section of Fig. 4.4a in terms of β_2; (d) weight distributions when β_3 varies


By aggregating individual opinions, the opinion of cluster C^k is obtained as h^k. The group opinion is represented by π, where π = Σ_{k=1}^{K} σ_k · h^k.

4.4 Discussion and Comparative Analysis

In this section, we further discuss the characteristics of the proposed TCop-Kmeans algorithm and explore the effect of the trust constraint on clustering. Section 4.4.2 analyzes the determination of the trust constraint threshold. Section 4.4.3 discusses the calculation of K.

4.4.1 TCop-Kmeans Algorithm Versus K-Means Algorithm

The TCop-Kmeans algorithm enhances traditional K-means algorithms by adding trust constraints, requiring that DMs with low trust should not be assigned to the same cluster. Therefore, we compare the TCop-Kmeans algorithm with traditional K-means algorithms to highlight the effect of trust constraints on clustering results. To ensure comparability, we preset three requirements: (i) the initial cluster centers are set as the FPRs of DMs e_3, e_9, e_13, e_1, the same as the case in Sect. 8.2; (ii) the punishment-driven consensus-reaching model proposed in Chap. 7 is adopted to manage the CRP without considering trust loss; and (iii) the designated number of clusters is K = 4.

Tables 4.1 and 4.2 respectively show the comparison of clustering results and decision results obtained by using the TCop-Kmeans algorithm and traditional K-means algorithms. We first find that there are significant differences in clustering structure. As shown in Table 4.1, using traditional K-means algorithms, e_16 and e_19 are assigned to the same cluster because their preferences are sufficiently similar. However, they cannot be grouped together under the TCop-Kmeans algorithm because the average trust between them is low. As prior knowledge, trust constraints play an important role in classifying DMs. Preference similarity is central to the implementation of the TCop-Kmeans algorithm, but whether two sufficiently similar individual preferences can be grouped still requires examining the associated average trust degree. In other words, this study provides two clustering attributes: the similarity measure is the core, and the trust measure is an important auxiliary. As a consequence, different clustering structures lead to differences in the size, cohesion, overall trust degree, and initial weight of a cluster (see Table 4.1).
Due to the consideration of trust constraints, the overall trust degrees obtained by using the TCop-Kmeans algorithm are better, but the performance in terms of cohesion is worse than that of the traditional K-means algorithms. All of this is caused by the moderating effect of the trust constraint on clustering. The goal of the K-means algorithm is that the similarity of objects in the same cluster is greater than that of objects in different clusters. However, the trust constraint makes the


Table 4.1 Clustering results using the TCop-Kmeans algorithm and traditional K-means algorithms

| Clustering algorithm | Cluster structure                       | Size | Cohesion | Overall trust degree | Initial weight of cluster |
|----------------------|-----------------------------------------|------|----------|----------------------|---------------------------|
| K-means              | C^1 = {e_3, e_6, e_7, e_10, e_11, e_17} | 6    | 0.671    | 0.5919               | 0.3018                    |
|                      | C^2 = {e_9, e_14, e_16, e_19, e_20}     | 5    | 0.6804   | 0.5547               | 0.239                     |
|                      | C^3 = {e_1, e_5, e_15}                  | 3    | 0.7129   | 0.6282               | 0.1701                    |
|                      | C^4 = {e_2, e_4, e_8, e_12, e_13, e_18} | 6    | 0.6862   | 0.5546               | 0.2891                    |
| TCop-Kmeans          | C^1 = {e_3, e_6, e_8, e_11, e_17, e_19} | 6    | 0.653    | 0.5611               | 0.2786                    |
|                      | C^2 = {e_9, e_14, e_16, e_20}           | 4    | 0.5861   | 0.4943               | 0.1469                    |
|                      | C^3 = {e_1, e_5, e_7, e_10, e_15}       | 5    | 0.6593   | 0.6928               | 0.2865                    |
|                      | C^4 = {e_2, e_4, e_12, e_13, e_18}      | 5    | 0.6609   | 0.6804               | 0.285                     |

individual preferences with high similarity not necessarily classified into the same cluster, which may reduce the cohesion of the cluster. As shown in Table 4.2, the above-mentioned clustering methods result in the same number of consensus iterations and the same final alternative ranking. However, different clustering structures lead to differences in the initial group consensus degree. Since the K-means method has a lower initial group consensus degree, it requires a greater number of positions to be adjusted, and its group consensus cost is larger than that of the TCop-Kmeans algorithm. It is important to emphasize that the failure proportion of TCop-Kmeans increases when a greater number of pairwise trust constraints are given. Yang et al. [9] proved that as the number of constraints (including must-link and cannot-link constraints) increases, the failure proportion of the Cop-Kmeans algorithm increases. In practice, a large number of cannot-link constraints (i.e., trust constraints in this study) indicates a low level of trust among many DMs, which is considered to have a negative impact on decision-making. In this case, the organizer can suspend the decision to allow enough time for the DMs to build and enhance trust.

Table 4.2 Decision results using the TCop-Kmeans algorithm and traditional K-means algorithms

| Clustering algorithm | Initial GCL | NI | NP | Final GCL | GCC    | Final ranking               |
|----------------------|-------------|----|----|-----------|--------|-----------------------------|
| K-means              | 0.8444      | 1  | 20 | 0.8955    | 1.932  | x_5 > x_1 > x_2 > x_3 > x_4 |
| TCop-Kmeans          | 0.8729      | 1  | 13 | 0.8989    | 0.9311 | x_5 > x_1 > x_2 > x_3 > x_4 |

Final group preference (K-means):

| 0.5    0.4825 0.6088 0.5742 0.4505 |
| 0.5175 0.5    0.4122 0.6044 0.4242 |
| 0.3912 0.5878 0.5    0.5408 0.4034 |
| 0.4258 0.3956 0.4592 0.5    0.4376 |
| 0.5495 0.5758 0.5966 0.5624 0.5    |

Final group preference (TCop-Kmeans):

| 0.5    0.4679 0.5753 0.5685 0.4514 |
| 0.5321 0.5    0.4552 0.6157 0.4599 |
| 0.4247 0.5448 0.5    0.5874 0.3645 |
| 0.4315 0.3843 0.4126 0.5    0.4179 |
| 0.5486 0.5401 0.6355 0.5821 0.5    |

Note NI: number of consensus iterations; NP: number of positions in the upper triangle of the FPRs that should be adjusted; GCC: group consensus cost, computed as the sum of individual consensus costs (see Eq. (7.4))


4.4.2 Determination of Trust Constraint Threshold

As explained in Sect. 4.2, trust constraints play an important role in determining the initial cluster centers and cluster structures. This section analyzes the effect of the trust constraint threshold on clustering results and how to determine the threshold. We hold that two DMs whose average trust degree is less than the trust constraint threshold cannot be assigned to the same cluster. The following logical statements are summarized (see also Fig. 4.5):

• The trust constraint threshold is positively related to the number of cannot-links. If we raise the threshold, the number of cannot-links generally increases.
• The number of cannot-links is positively related to the maximum size of the formed complete undirected subgraph. The size here refers to the maximum number of DMs contained in a complete undirected subgraph.
• The maximum size of the formed complete undirected subgraph is positively related to the minimum number of initial clusters.

The two types of trust constraints generated from the trust relationships among DMs serve as prior knowledge to regulate the clustering based on opinion similarity. The number of CL constraints contributes to the number of clusters, whereas the number of ML constraints has a negative effect on the number of clusters formed. In this section, we analyze the effect of CL constraints on clustering. We use the data in Sect. 8.2 to simulate the clustering as the lower trust-constraint threshold TC_L increases from 0 to 1. The simulation results in Fig. 4.6 not only verify the above logical statements, but also provide an interval range for the determination of the trust constraint threshold. Assume that the designated number of clusters is K = 4. That is, the minimum number of initial clusters implied by the number of cannot-links must not exceed 4. As shown in Fig. 4.6, the membership range of the trust constraint threshold is set to TC_L ∈ [0, 0.5). In order to make full use of trust constraints to determine the

Fig. 4.5 The relationships among the logical statements described in Sect. 4.4.2


[Plot: number of cannot-links (left axis) and minimum number of initial clusters (right axis) versus the trust constraint threshold TC, from 0 to 1]

Fig. 4.6 Simulation results as the trust constraint threshold increases from 0 to 1

minimum number of initial clusters, we can set TC = 0.25. Therefore, the minimum number of initial clusters is obtained as 3.
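The chain from threshold to cannot-links to minimum number of initial clusters can be sketched in code. This is a minimal illustration with a hypothetical 4-DM trust sociomatrix, not the Sect. 8.2 data; the brute-force complete-sub-graph search is our own simplification and is only viable for small groups.

```python
from itertools import combinations

def cannot_links(trust, tc):
    """Pairs (l, h), l < h, whose average mutual trust is below threshold tc."""
    q = len(trust)
    return [(l, h) for l, h in combinations(range(q), 2)
            if (trust[l][h] + trust[h][l]) / 2 < tc]

def min_initial_clusters(trust, tc):
    """Size of the largest complete sub-graph of the cannot-link graph:
    all of these DMs must land in different clusters, so K cannot be smaller."""
    q = len(trust)
    cl = set(cannot_links(trust, tc))
    best = 1
    for size in range(2, q + 1):
        for group in combinations(range(q), size):
            if all(pair in cl for pair in combinations(group, 2)):
                best = size
    return best

# Hypothetical trust sociomatrix (trust[l][h] = trust of DM l in DM h)
trust = [[1.0, 0.8, 0.1, 0.1],
         [0.8, 1.0, 0.1, 0.1],
         [0.1, 0.1, 1.0, 0.1],
         [0.1, 0.1, 0.1, 1.0]]

print(len(cannot_links(trust, 0.25)))     # → 5 (raising TC adds cannot-links)
print(min_initial_clusters(trust, 0.25))  # → 3
```

Raising the threshold to 0.9 turns every pair into a cannot-link, so the minimum number of initial clusters grows to 4, mirroring the monotone relationships stated above.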

4.4.3 Analysis of K

Clustering is a widely used methodology for reducing the dimension of large-scale DMs and addressing the scalability challenge in LSDM problems. In the TCop-Kmeans algorithm, the initial number of clusters K should be determined in advance. To better reflect the impact of K on the decision-making, we simulate the CRP by using the punishment-driven consensus-reaching model proposed in Chap. 7 without considering the trust loss. Tables 4.3 and 4.4 show the comparative results. We find that GCL^0 decreases as K increases. When setting K = 3, we have GCL^0 = 0.9021, which is greater than the consensus threshold; therefore, there is no need to enter the CRP. If we set K > 3, GCL^0 will be less than the consensus threshold. Moreover, the higher the value of K, the smaller the value of GCL^0. In this case, a larger number of positions in individual preferences must be adjusted, and a larger group consensus cost is incurred. In addition, the final ranking of alternatives is stable with respect to changes in K, although there are some differences in the final group preference. We draw the following conclusions on determining the value of K.

(1) The setting of K should first examine whether there are trust relationships between certain DMs that make the condition ξ' = q'(q' − 1)/2 true. In


Table 4.3 Clustering structures under different values of K using the TCop-Kmeans algorithm

K = 3: C1 = {e1, e2, e3, e11, e15, e19}, C2 = {e6, e7, e9, e14, e16, e17, e20}, C3 = {e4, e5, e8, e10, e12, e13, e18}
K = 4: C1 = {e3, e6, e7, e10, e11, e17}, C2 = {e9, e14, e16, e19, e20}, C3 = {e1, e5, e15}, C4 = {e2, e4, e8, e12, e13, e18}
K = 5: C1 = {e1, e5, e10, e15}, C2 = {e6, e7, e11, e17}, C3 = {e2, e3, e4, e12, e19}, C4 = {e9, e14, e16, e20}, C5 = {e8, e13, e18}
K = 6: C1 = {e1, e5, e7, e10, e15}, C2 = {e3, e6, e11, e17}, C3 = {e2, e4, e12}, C4 = {e9, e14, e19}, C5 = {e8, e13, e18}, C6 = {e16, e20}
K = 7: C1 = {e1, e5, e7, e10, e15}, C2 = {e3, e6, e11}, C3 = {e4, e12}, C4 = {e9, e14}, C5 = {e8, e13}, C6 = {e16, e20}, C7 = {e2, e17, e18, e19}

the example of Sect. 8.2, since the sub-graph that meets the above condition contains at most three DMs (i.e., e3, e9, e13), the minimum value of K is set to 3.
(2) Too many clusters lead to scattered preferences and a low group consensus degree. In this case, a greater consensus cost is required for the group consensus degree to reach the consensus threshold. As presented in Table 4.4, if we set K = 7, the initial group consensus is GCL^0 = 0.8227, which is far from the consensus threshold.
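The condition ξ' = q'(q' − 1)/2 in point (1) simply states that the q' DMs form a complete sub-graph: the number of edges ξ' among them equals the number of unordered pairs. A minimal check, with a hypothetical edge set:

```python
from itertools import combinations

def is_complete_subgraph(dms, edges):
    """True if the DMs form a complete sub-graph, i.e. the edge count xi
    among them equals q'(q' - 1)/2, as required by condition (1)."""
    q = len(dms)
    xi = sum(1 for pair in combinations(sorted(dms), 2) if pair in edges)
    return xi == q * (q - 1) // 2

# Hypothetical undirected edges among DMs e3, e9, e13 (stored as sorted pairs)
edges = {(3, 9), (3, 13), (9, 13)}
print(is_complete_subgraph({3, 9, 13}, edges))     # → True
print(is_complete_subgraph({1, 3, 9, 13}, edges))  # → False
```

Under this hypothetical edge set the largest complete sub-graph has three DMs, so the minimum admissible K would be 3, matching the reasoning in point (1).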

4.5 Conclusions

In order to address the clustering challenge of SNLSDM problems, we propose the TCop-Kmeans algorithm, which fully considers the influence of trust constraints on clustering. The main contributions are highlighted as follows:

(1) The TCop-Kmeans algorithm is a semi-supervised clustering technique that uses prior knowledge about the trust relationships between DMs. The algorithm is mainly based on similarity measures and uses pairwise trust constraints to ensure that no two low-trust DMs are assigned to the same subgroup and that two high-trust and similar DMs must be assigned to the same subgroup. Compared with traditional clustering methods based entirely on preference similarity (such as the traditional K-means algorithm), the TCop-Kmeans algorithm can make full use of trust relationships to guide clustering in SNLSDM problems.

(2) A weight-determining method for clusters is developed that combines three indices: the size, cohesion, and overall trust degree of a cluster. Multidimensional measurement attributes are taken into account in weight allocation, which makes the obtained weights of the DMs/clusters more representative and comprehensive.

Table 4.4 Decision results under different values of K using the TCop-Kmeans algorithm

K | Initial GCL | NI | NP | GCC | Final GCL | Final ranking
3 | 0.9021 | 0 | 0 | 0 | 0.9021 | x5 > x1 > x2 > x3 > x4
4 | 0.8729 | 1 | 13 | 0.9311 | 0.8989 | x5 > x1 > x2 > x3 > x4
5 | 0.8638 | 1 | 22 | 1.1136 | 0.8942 | x5 > x2 > x1 > x3 > x4
6 | 0.841 | 1 | 36 | 2.9058 | 0.8892 | x5 > x1 > x2 > x3 > x4
7 | 0.8227 | 1 | 45 | 4.2812 | 0.8852 | x5 > x1 > x2 > x3 > x4

Note NI: number of consensus iterations; NP: number of positions in the upper triangle of FPRs that should be adjusted; GCC: group consensus cost computed as the sum of individual consensus costs (see Eq. (7.4)). (The final group preference, a 5 × 5 fuzzy preference relation for each value of K, is omitted here.)



Chapter 5

Compatibility Distance Oriented Off-Center Clustering Method

Abstract Social network large-scale decision-making with probabilistic linguistic information is becoming a hot research topic in the field of decision science. It is an extended fusion of large-scale group decision-making, social networks, and fuzzy logic. Classical clustering methods are mainly based on similarity measures and cannot directly control the number of decision-makers (DMs) included in a cluster. This chapter proposes the concept of compatibility distance by combining opinion distance and trust relationships; it describes a comprehensive measure of the distance and trust between a cluster and others. We then develop a compatibility distance oriented off-center clustering method to classify the large group. The most important characteristic of the clustering method is that it is based on the compatibility distance and specifies upper and lower limits on the number of DMs in a cluster. The visualization and comparative analysis of the proposed clustering algorithm are discussed.

Keywords Social network large-scale decision-making (SNLSDM) · Probabilistic linguistic term set (PLTS) · Compatibility distance oriented off-center (CDOOC) · Trust relationship · Similarity degree

5.1 Preliminaries About PLTSs and Problem Configuration

Linguistic assessment information has become one of the most commonly used forms of information expression in modeling real-world decisions [1–3]. Traditional linguistic information allows the decision-makers (DMs) to express their opinions/preferences only with single linguistic terms. However, a DM may hesitate among multiple linguistic terms. To solve this issue, Rodríguez et al. [4] introduced the concept of hesitant fuzzy linguistic term sets (HFLTSs). In HFLTSs, all possible assessment values in a linguistic term set are assigned the same weight. Yet in practice, when judging an object, DMs may have different preferences over different linguistic terms, rather than just hesitating among them. Pang et al. [5] defined the notion of probabilistic linguistic term sets (PLTSs), which enables DMs to express the possible linguistic terms with different importance

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023. Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_5


degrees. At present, a variety of probabilistic linguistic decision-making models and methods have been proposed and applied to various decision scenarios, such as supplier selection [6–9], risk assessment [10], business investment/partnership selection [11], emergency decisions [12], and others [13, 14].

5.1.1 Probabilistic Linguistic Term Sets (PLTSs)

This section reviews the concept and operational rules of PLTSs and describes the configuration of a social network large-scale decision-making problem with probabilistic linguistic information (SNLSDM-PL). The concept of PLTS is an extension of hesitant fuzzy linguistic term sets, originally proposed by Pang et al. [5] and then improved by Zhang et al. [15].

Definition 5.1 ([5]) Let S = {s_α | α = −τ, ..., −1, 0, 1, ..., τ} be a linguistic term set, where τ is a positive integer. A PLTS is defined as

L(p) = { L^(k)(p^(k)) | L^(k) ∈ S, p^(k) ≥ 0, Σ_{k=1}^{#L(p)} p^(k) ≤ 1 },   (5.1)

where L^(k)(p^(k)) represents the linguistic term L^(k) associated with the probability p^(k), and #L(p) is the number of all different linguistic terms in L(p).

The normalization of PLTSs includes the following steps [15]:
(1) Probability normalization: make the probability values of the linguistic terms in each PLTS sum to 1.
(2) Cardinality normalization: make the PLTSs have the same number and order of linguistic terms.
For simplicity, the normalized PLTS (NPLTS) is still denoted by L(p).

Example 5.1 Let L(p)_1 = {s_−1(0.2), s_1(0.2)} and L(p)_2 = {s_−1(0.5), s_2(0.5)} be two PLTSs whose linguistic terms are drawn from S_1 = {s_−2, s_−1, s_0, s_1, s_2}. The normalized PLTSs are L(p)_1 = {s_−1(0.5), s_1(0.5), s_2(0)} and L(p)_2 = {s_−1(0.5), s_1(0), s_2(0.5)}.

To compare two PLTSs, the score and deviation degree of a PLTS are defined as follows [5]:

E(L(p)) = s_γ̄, where γ̄ = (Σ_{k=1}^{#L(p)} γ^(k) p^(k)) / (Σ_{k=1}^{#L(p)} p^(k)),

dev(L(p)) = (1 / Σ_{k=1}^{#L(p)} p^(k)) · sqrt( Σ_{k=1}^{#L(p)} (p^(k)(γ^(k) − γ̄))^2 ),

where γ^(k) is the subscript of the k-th linguistic term L^(k) in L(p).

Some PLTS operational rules are provided below. Let L(p)_1 and L(p)_2 be two NPLTSs; then we have
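The two normalization steps and the score function can be sketched as follows. This is an illustrative Python representation in which a PLTS is a {subscript: probability} dict; the representation itself is our assumption, not the book's notation.

```python
def normalize(*pltss):
    """Normalize PLTSs given as {subscript: probability} dicts."""
    # (1) Probability normalization: probabilities in each PLTS sum to 1
    prob_norm = []
    for L in pltss:
        total = sum(L.values())
        prob_norm.append({g: p / total for g, p in L.items()})
    # (2) Cardinality normalization: same linguistic terms in the same order
    terms = sorted(set().union(*[L.keys() for L in prob_norm]))
    return [{g: L.get(g, 0.0) for g in terms} for L in prob_norm]

def score(L):
    """Expected subscript gamma-bar of a PLTS, so that E(L(p)) = s_gamma_bar."""
    return sum(g * p for g, p in L.items()) / sum(L.values())

# Example 5.1: L(p)1 = {s_-1(0.2), s_1(0.2)}, L(p)2 = {s_-1(0.5), s_2(0.5)}
L1, L2 = normalize({-1: 0.2, 1: 0.2}, {-1: 0.5, 2: 0.5})
print(L1)         # → {-1: 0.5, 1: 0.5, 2: 0.0}
print(L2)         # → {-1: 0.5, 1: 0.0, 2: 0.5}
print(score(L1))  # → 0.0 (s_-1 and s_1 cancel out)
```

Running this on Example 5.1 reproduces the normalized PLTSs stated above, and gives scores γ̄ = 0 for L(p)_1 and γ̄ = 0.5 for L(p)_2, so L(p)_2 is preferred under the score comparison.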

5.1 Preliminaries About PLTSs and Problem Configuration

73

(1) L(p)_1 ⊕ L(p)_2 = { L_3^(k3)(p_3^(k3)) | k_3 = 1, ..., #L(p)_3 },
(2) λL(p)_1 = { L_1^(k1)(λ p_1^(k1)) | k_1 = 1, ..., #L(p)_1 },

where L_3^(k3) = L_1^(k1) and p_3^(k3) = p_1^(k1) + p_2^(k2) (k_1 = 1, ..., #L(p)_1; k_2 = 1, ..., #L(p)_2).

Definition 5.2 ([15]) The distance between L(p)_1 and L(p)_2 is defined by

d(L(p)_1, L(p)_2) = (Σ_{k=1}^{#L(p)} |γ^(k)| · |p_1^(k) − p_2^(k)|) / (2τ).   (5.2)

It is evident that the distance d(L(p)_1, L(p)_2) derived by Definition 5.2 satisfies the following properties: (1) 0 ≤ d(L(p)_1, L(p)_2) ≤ 1; (2) d(L(p)_1, L(p)_2) = 0 if and only if L(p)_1 = L(p)_2; (3) d(L(p)_1, L(p)_2) = d(L(p)_2, L(p)_1). To apply PLTSs to decision-making problems, the following aggregation operator for PLTSs is defined [15].

Definition 5.3 ([15]) Let L(p)_l = { L_l^(kl) | k_l = 1, 2, ..., #L(p)_l } (l = 1, 2, ..., q) be q NPLTSs; then

PLWA(L(p)_1, L(p)_2, ..., L(p)_q) = w_1 L(p)_1 ⊕ w_2 L(p)_2 ⊕ ··· ⊕ w_q L(p)_q   (5.3)

is called the probabilistic linguistic weighted averaging (PLWA) operator, where w = (w_1, w_2, ..., w_q)^T is the weight vector of L(p)_l (l = 1, 2, ..., q), with w_l ≥ 0 (l = 1, 2, ..., q) and Σ_{l=1}^q w_l = 1.
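Under the same illustrative dict representation of normalized PLTSs ({subscript: probability}, our assumption), Eqs. (5.2) and (5.3) can be sketched as:

```python
def distance(L1, L2, tau):
    """Eq. (5.2): distance between two cardinality-normalized PLTSs
    defined over the same linguistic terms."""
    return sum(abs(g) * abs(L1[g] - L2[g]) for g in L1) / (2 * tau)

def plwa(pltss, weights):
    """Eq. (5.3): probabilistic linguistic weighted averaging --
    weighted, term-by-term aggregation of normalized PLTSs."""
    terms = pltss[0].keys()
    return {g: sum(w * L[g] for L, w in zip(pltss, weights)) for g in terms}

# Normalized PLTSs from Example 5.1 over S1, so tau = 2
L1 = {-1: 0.5, 1: 0.5, 2: 0.0}
L2 = {-1: 0.5, 1: 0.0, 2: 0.5}
print(distance(L1, L2, tau=2))     # → 0.375
print(plwa([L1, L2], [0.5, 0.5]))  # → {-1: 0.5, 1: 0.25, 2: 0.25}
```

The equal-weight PLWA simply averages the probability attached to each linguistic term, which is the aggregation used later when fusing DMs' opinions into a cluster opinion.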

5.1.2 Configuration of an SNLSDM-PL Problem

Formally, the main elements of an SNLSDM-PL problem are as follows.

(1) Let E = {e_1, e_2, ..., e_q} (q ≥ 20) be a set of DMs, where e_l represents the l-th DM. Let w = (w_1, w_2, ..., w_q)^T be the weight vector of the DMs such that w_l ≥ 0 (l = 1, 2, ..., q) and Σ_{l=1}^q w_l = 1.
(2) Let X = {x_1, x_2, ..., x_m} and A = {a_1, a_2, ..., a_n} be the set of alternatives and the set of attributes, respectively. Let ω = (ω_1, ω_2, ..., ω_n)^T be the weight vector of the attributes such that ω_j ≥ 0 (j = 1, 2, ..., n) and Σ_{j=1}^n ω_j = 1.
(3) Individual evaluation information. Let the individual opinion be V^l = (v_ij^l)_{m×n}, where v_ij^l = { L_ij^{l(k)}(p_ij^{l(k)}) | L_ij^{l(k)} ∈ S, p_ij^{l(k)} ≥ 0, Σ_{k=1}^{#v_ij^l} p_ij^{l(k)} ≤ 1 } is a PLTS representing the evaluation of DM e_l on alternative x_i ∈ X with respect to attribute a_j ∈ A. The normalized opinion is denoted by R^l = (r_ij^l)_{m×n}.


(4) Trust relationships. The trust sociomatrix is constructed by gathering the trust degrees given by the DMs and is denoted as TSoM = (tso_lh)_{q×q}, where tso_lh represents the trust degree from e_l to e_h [16, 17].

5.2 Compatibility Distance Oriented Off-Center Clustering Algorithm

We propose the compatibility distance oriented off-center (CDOOC) clustering algorithm in Sect. 5.2.1. In Sect. 5.2.2, the visualization of the clustering algorithm is presented. Section 5.2.3 discusses the calculation of the weights of clusters.

5.2.1 CDOOC Clustering Algorithm

Clustering is an effective way to manage large-scale DMs by dividing the large group into small-scale clusters. Traditional clustering methods that do not consider fuzzy logic are mainly based on the following principle: the sample data in the same cluster are highly similar to each other, while sample data in different clusters are very different. This section proposes a compatibility distance oriented off-center clustering algorithm with the following three characteristics:

(1) The clustering priority order is obtained according to the compatibility distance to the center; it specifies the order in which DMs are considered for assignment to a cluster through the compatibility measure.
(2) Upper and lower limits on the number of DMs that can be included in each cluster are defined.
(3) Whether multiple DMs can be placed in the same cluster depends on the maximum compatibility distance between them, rather than on the compatibility distance between each DM and the collective opinion.

Definition 5.4 Given individual opinions R^l and R^h and the undirected sociomatrix UTSoM = (utso_lh)_{q×q} obtained by Eq. (4.2), the compatibility distance between e_l and e_h is defined as

ComDis(e_l, e_h) = ρ · d(R^l, R^h) + (1 − ρ) · (1 − utso_lh),   (5.4)

where ρ (0 ≤ ρ ≤ 1) is the weight of d(R^l, R^h) in the calculation of ComDis(e_l, e_h). Clearly, 0 ≤ ComDis(e_l, e_h) ≤ 1.

The procedure of the CDOOC clustering method is presented in Algorithm 3, whose main feature is that it limits the number of DMs that can be included in each cluster. Determining the value of the limit [q, q̄] depends on the size of the decision group and the designated number of clusters. Song and Yang [18]'s


research shows that a group with two to five DMs is more likely to obtain a unified opinion, and that a group containing four to five members easily satisfies its members. In the following, the group includes 20 DMs, which is similar to the group size studied in Ref. [18]. Therefore, we set q̄ ≤ 5 and q ≥ 2, which is conducive to in-depth discussion among the DMs and, more importantly, to the formation of a high-consensus collective opinion. Note that if a group contains a large number of DMs, the value of q̄ should be increased.

Algorithm 3 Procedure of the compatibility distance oriented off-center clustering algorithm
Require: Individual opinions V^l, l = 1, 2, ..., q; undirected sociomatrix UTSoM; number limit [q, q̄]; compatibility threshold CT (0 ≤ CT ≤ 1); initial round of clustering iteration z = 1.
Ensure: Clusters C^1, C^2, ..., C^K.
1: Normalize the individual opinions as R^l = (r_ij^l)_{m×n}, l = 1, 2, ..., q.
2: Use Eq. (5.5) to calculate the compatibility distance between individual opinion R^l and the collective opinion R^c:

ComDis^{l,z} = ρ · d(R^l, R^c) + (1 − ρ) · (1 − (Σ_{h=1, h≠l}^{q^z} utso_lh) / q^z),   (5.5)

where R^c = PLWA_{w^z}(R^1, ..., R^{q^z}) represents the collective opinion excluding DM e_l, and w^z = (1/q^z, ..., 1/q^z)^T, with q^z the number of DMs that have not yet been assigned to a cluster.
3: Sort the compatibility distances obtained in Step 2 in ascending order, denoted by ComDis^{1',z}, ComDis^{2',z}, ..., ComDis^{q^z',z}. The clustering priority order is thus obtained as e_{1'}, e_{2'}, ..., e_{q^z'}.
4: If q^z ≥ q, proceed to the next step; if q^z < q, these q^z DMs are suggested to exit the decision, and the procedure proceeds to Step 7.
5: Following the clustering priority order, calculate the compatibility distance between e_{1'} and e_{2'}. If it is less than 1 − CT, assign e_{1'} and e_{2'} to cluster C^z. Then compute the maximum pairwise compatibility distance among e_{1'}, e_{2'}, and e_{3'}; if it is less than 1 − CT, assign e_{3'} to cluster C^z. Repeat this process until the number of DMs in C^z equals q̄ or no compatibility distance is less than 1 − CT. If no compatibility distance related to e_{1'} meets the threshold requirement, calculate the compatibility distance between e_{2'} and e_{3'} and repeat the above step.
6: Let z = z + 1, and return to Step 2.
7: Output the clusters C^1, C^2, ..., C^K. End.
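The core loop of Algorithm 3 can be sketched compactly. This is a simplified, hypothetical illustration: it drops the exit-and-retry logic of Steps 4 and 6, works from precomputed opinion-distance and trust matrices instead of PLTS opinions, and ranks DMs by their average compatibility distance to the remaining group (a stand-in for the Eq. (5.5) priority order). All data and names are illustrative.

```python
def com_dis(d_lh, trust_lh, rho):
    """Eq. (5.4): pairwise compatibility distance from an opinion
    distance and an undirected trust degree."""
    return rho * d_lh + (1 - rho) * (1 - trust_lh)

def cdooc(D, T, rho, ct, q_min, q_max):
    """Greedy sketch of the CDOOC loop. D[l][h] = opinion distance,
    T[l][h] = undirected trust degree. Returns (clusters, unassigned)."""
    q = len(D)
    cd = [[com_dis(D[l][h], T[l][h], rho) for h in range(q)] for l in range(q)]
    remaining, clusters = set(range(q)), []
    while len(remaining) >= q_min:
        # Priority order: DMs closest (on average) to the remaining group
        order = sorted(remaining,
                       key=lambda l: sum(cd[l][h] for h in remaining if h != l)
                                     / max(1, len(remaining) - 1))
        cluster = []
        for l in order:
            # Admit l only if compatible with every DM already in the cluster
            if all(cd[l][h] < 1 - ct for h in cluster):
                cluster.append(l)
                if len(cluster) == q_max:
                    break
        if len(cluster) < q_min:
            break  # no admissible cluster can be formed any more
        clusters.append(sorted(cluster))
        remaining -= set(cluster)
    return clusters, sorted(remaining)

# Hypothetical data: DMs 0-2 and 3-5 form two tight, mutually distant groups
D = [[0 if l == h else (0.1 if l // 3 == h // 3 else 0.8) for h in range(6)]
     for l in range(6)]
T = [[1 if l == h else (0.9 if l // 3 == h // 3 else 0.2) for h in range(6)]
     for l in range(6)]
print(cdooc(D, T, rho=0.5, ct=0.6, q_min=2, q_max=5))
```

With ρ = 0.5 and CT = 0.6, the within-group compatibility distance is 0.1 < 1 − CT while the cross-group distance is 0.8, so the sketch recovers the two groups {0, 1, 2} and {3, 4, 5} with no DM left unassigned.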

Remark 5.1 After multiple clustering iterations, the number of remaining DMs may be less than q. Moreover, if the compatibility distances between them do not meet the compatibility threshold, they cannot be assigned to a cluster. To deal with this issue, we provide three solutions:

• The remaining DMs are suggested to adjust their own opinions, after which the clustering operation is re-implemented. They were not assigned to a cluster because they differ significantly from the group.


• We can lower the value of q (for example, q = 1) until all the DMs are assigned to clusters.
• These DMs are forced to withdraw from the decision. It is important to emphasize that a decision on forced withdrawal must be made very carefully and with the approval of all DMs, especially those who are asked to exit. If any DM does not agree, the decision is terminated.

Another important parameter related to the clustering is the clustering threshold, denoted as CT. If CT is set to a number close to 1 (for example, CT = 0.9), then the number of clusters that meet the limit [q, q̄] may be small. The DMs in the same cluster can be assigned the same weight because they have similar individual opinions and good trust relationships [16, 19]. By aggregating individual opinions, the opinion of cluster C^k is obtained as H^k = (h_ij^k)_{m×n}, where h_ij^k = PLWA(r_ij^1, ..., r_ij^{q_k}).

Remark 5.2 Since the CDOOC clustering algorithm stipulates an upper limit on the number of DMs included in a cluster, the distance between DMs belonging to different clusters may sometimes be smaller than that between DMs belonging to the same cluster. Nevertheless, the compatibility distance between the DMs in each cluster satisfies the compatibility threshold.

5.2.2 Visualization of the CDOOC Clustering Algorithm

Here, we use the data from Sect. 8.3 to illustrate the visualization of the proposed clustering algorithm. Figure 5.1 shows the visualization of forming the first cluster. The compatibility distances ComDis^{l,1} are mapped to a two-dimensional coordinate system, so the distribution of the compatibility distances can be seen clearly and the points that deviate least are easy to find (they are circled in red). The mark 'No. 1' in Fig. 5.1a indicates that DM e_11 has the least compatibility deviation from the group, and so on for the other DMs. As shown in Fig. 5.1a, DMs e_1, e_2, e_9, e_11, e_17 are selected to form cluster C^1. Then, the compatibility distances between the selected DMs are calculated. As ComDis(e_1, e_9) > 1 − CT, e_9 is removed from the list of selected DMs and e_5 is added according to the clustering priority order (see Fig. 5.1b). After recalculating, the compatibility distance between any two selected DMs is less than 1 − CT (see Fig. 5.1c); therefore, DMs e_1, e_2, e_5, e_11, e_17 form the first cluster. Figure 5.1d presents the compatibility distances between the individual opinions and the collective opinion excluding the DMs in cluster C^1, from which the process of forming the second cluster begins.


Fig. 5.1 Visualization of forming the first cluster

5.2.3 Generation of the Weights of Clusters

Most studies simply use the number of DMs in a cluster as the criterion to calculate the weight of the cluster [19, 20]. Rodríguez et al. [20] introduced the concept of cohesion into weight calculation. We consider three criteria for weight calculation: the external consensus level CL_Ex^{k,0}, the size q_k, and the internal consensus level CL_In^{k,0}. Table 5.1 describes these factors; all three have a positive influence on the weight assignment.

Table 5.1 Factors related to calculating the weights of the clusters

Factor | Direction | Description of action
External consensus level CL_Ex^{k,0} | Positive | Quantifies the similarity between a cluster opinion and the others
Internal consensus level CL_In^{k,0} | Positive | Measures the average similarity of DMs within a cluster; the higher the average similarity, the more cohesive the DMs within the cluster are
Size q_k | Positive | The number of DMs in a cluster represents the degree of recognition of the cluster opinion


Definition 5.5 Given K clusters' opinions, the external consensus level of cluster C^k is computed by

CL_Ex^k = 1 − (1/(K − 1)) Σ_{g=1, g≠k}^{K} d(H^k, H^g),   (5.6)

where H^k and H^g are the opinions of clusters C^k and C^g, respectively, and d(H^k, H^g) represents the distance between H^k and H^g. Clearly, 0 ≤ CL_Ex^k ≤ 1 (k = 1, 2, ..., K).

Definition 5.6 For cluster C^k including q_k DMs, its internal consensus level is defined as

CL_In^k = 1 − (1/(q_k(q_k − 1))) Σ_{l=1}^{q_k} Σ_{h=1, h≠l; e_l, e_h ∈ C^k}^{q_k} d(R^l, R^h),   (5.7)

where d(R^l, R^h) is the distance between R^l and R^h. Clearly, 0 ≤ CL_In^k ≤ 1 (k = 1, 2, ..., K).

Definition 5.7 Given the external consensus level CL_Ex^k, size q_k, and internal consensus level CL_In^k, the weight of cluster C^k is obtained by

σ_k = (q_k / q)^{α_1} · (CL_In^k / Σ_{k=1}^{K} CL_In^k)^{α_2} · (CL_Ex^k / Σ_{k=1}^{K} CL_Ex^k)^{α_3},   (5.8)

where α_i is a weight parameter satisfying 0 ≤ α_i ≤ 1 (i = 1, 2, 3) and Σ_{i=1}^{3} α_i = 1. The weight σ_k is then normalized as

σ_k = σ_k / Σ_{k=1}^{K} σ_k.   (5.9)

Clearly, 0 ≤ σ_k ≤ 1 (k = 1, 2, ..., K) and Σ_{k=1}^{K} σ_k = 1.
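Eqs. (5.6)–(5.9) can be sketched as follows. The distance matrices and cluster membership below are hypothetical, the three exponents are set equal, and each cluster is assumed to contain at least two DMs so that Eq. (5.7) is well defined.

```python
def cluster_weights(H_dist, R_dist, members, a1, a2, a3):
    """Eqs. (5.6)-(5.9): normalized cluster weights from size, internal
    and external consensus. H_dist[k][g] = distance between cluster
    opinions H^k and H^g; R_dist[l][h] = distance between individual
    opinions; members[k] lists the DM indices of cluster C^k."""
    K = len(members)
    q = sum(len(m) for m in members)  # all DMs assumed assigned to clusters
    # Eq. (5.6): external consensus level of each cluster
    cl_ex = [1 - sum(H_dist[k][g] for g in range(K) if g != k) / (K - 1)
             for k in range(K)]
    # Eq. (5.7): internal consensus level (average pairwise similarity)
    cl_in = []
    for m in members:
        qk = len(m)  # qk >= 2 assumed
        total = sum(R_dist[l][h] for l in m for h in m if l != h)
        cl_in.append(1 - total / (qk * (qk - 1)))
    # Eq. (5.8): combine the three indices
    sigma = [(len(members[k]) / q) ** a1
             * (cl_in[k] / sum(cl_in)) ** a2
             * (cl_ex[k] / sum(cl_ex)) ** a3 for k in range(K)]
    # Eq. (5.9): normalize so the weights sum to 1
    return [s / sum(sigma) for s in sigma]

# Hypothetical example: 5 DMs in two clusters
members = [[0, 1, 2], [3, 4]]
H_dist = [[0, 0.2], [0.2, 0]]
R_dist = [[0, 0.1, 0.1, 0.5, 0.5],
          [0.1, 0, 0.1, 0.5, 0.5],
          [0.1, 0.1, 0, 0.5, 0.5],
          [0.5, 0.5, 0.5, 0, 0.2],
          [0.5, 0.5, 0.5, 0.2, 0]]
weights = cluster_weights(H_dist, R_dist, members, 1/3, 1/3, 1/3)
print([round(w, 4) for w in weights])  # → [0.5435, 0.4565]
```

The larger, more cohesive cluster receives the larger weight, while equal external consensus levels leave that factor neutral, which is exactly the positive direction of action listed in Table 5.1.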

5.3 Comparative Analysis and Discussion

We discuss some important issues regarding the CDOOC clustering algorithm, including a comparison with traditional clustering methods and the determination of some important parameters.


5.3.1 Comparison with Traditional Clustering Algorithms

To ensure comparability, we set the following requirements for implementing the K-means clustering algorithm: (i) K = 6; (ii) the initial centers are e_11, e_15, e_7, e_8, e_3, e_13, which are derived from the clusters obtained using the proposed clustering algorithm; (iii) ρ = 1, which means that the compatibility distance is based entirely on the distance between opinions. Table 5.2 shows the comparison results obtained using different clustering algorithms. We find that although the number of clusters is the same for all four algorithms, the clustering structures are quite different. Most obviously, the K-means algorithm does not guarantee that the number of DMs in each cluster belongs to the interval [2, 5] (for example, its cluster C^1 includes 6 DMs). Although the clustering results obtained using the fuzzy c-means algorithm and the agglomerative clustering algorithm also meet the number-limit requirements, they do not pursue the limit from the beginning. The CDOOC clustering algorithm strictly limits the number of DMs that can be included in each cluster from the very beginning; because of this, some of the most similar DMs may not be grouped together. In addition, the CDOOC clustering algorithm adopts a stricter similarity measure: the similarity between DMs within a cluster is required to meet the clustering threshold, instead of only the distance between each DM and the cluster center. Consequently, the clustering structures obtained by the four algorithms differ. Using the K-means algorithm, DMs e_2 and e_4 are assigned to the same cluster, but they belong to different clusters under the CDOOC clustering algorithm.
The following issue requires attention when using the CDOOC clustering algorithm: since the algorithm sets a lower limit on the number of DMs contained in a cluster, some DMs may not be assignable to any cluster. For instance, in the case study in Sect. 8.3, DMs e_14 and e_19 are advised to exit the decision. Asking several DMs to withdraw from a decision is an extremely deliberate step, and Sect. 5.2 provides some suggested solutions. Table 5.3 presents the advantages and disadvantages of the four clustering algorithms. The greatest disadvantage of the CDOOC clustering algorithm is that it cannot guarantee that the distance between DMs belonging to the same cluster is smaller than that

Table 5.2 Comparison results (cluster structures) using different clustering algorithms

K-means algorithm: {e14, e15}, {e7, e16}, {e3, e6}, {e13, e18}, {e2, e4, e9, e11, e17, e20}, {e1, e5, e8, e10, e12, e19}
Fuzzy c-means algorithm: {e3, e6}, {e13, e18}, {e10, e14, e15}, {e7, e12, e16}, {e2, e4, e9, e11, e17}, {e1, e5, e8, e19, e20}
Agglomerative clustering algorithm: {e7, e16}, {e13, e18}, {e3, e14, e15}, {e4, e9, e11, e20}, {e1, e2, e6, e17}, {e5, e8, e10, e12, e19}
CDOOC clustering algorithm: {e8, e20}, {e3, e4}, {e13, e18}, {e7, e12, e16}, {e6, e9, e10, e15}, {e1, e2, e5, e11, e17}


Table 5.3 Comparison of the main advantages and disadvantages of four clustering algorithms

Clustering method | Advantages | Disadvantages
K-means algorithm | The time complexity is almost linear, which helps to handle large amounts of data | (i) Difficult to determine the value of K; (ii) neither the upper nor the lower limit on the number of DMs contained in a cluster is taken into account
Fuzzy c-means algorithm | It is an extension of the K-means algorithm to fuzzy logic | (i) Difficult to determine the value of K (as with K-means); (ii) the clustering result is sensitive to the initial cluster centers, and the upper and lower limits on the number of DMs contained in a cluster are not considered
Agglomerative clustering algorithm | (i) No need to determine the value of K; (ii) the clustering results are almost independent of the traversal order of the samples | (i) Large time complexity and an irreversible clustering process; (ii) not suitable for sample sets with non-uniform density
CDOOC clustering algorithm | (i) Both the upper and lower limits on the number of DMs a cluster includes are specified; (ii) the clustering priority order is clarified | (i) Some DMs may have to exit the decision; (ii) the similarity of DMs within clusters is not necessarily greater than that between clusters

of DMs belonging to different clusters. However, it should be emphasized that the distance among DMs within a cluster meets the clustering threshold.

5.3.2 Analysis of $\underline{q}$ and $\overline{q}$

The most important feature of the CDOOC clustering algorithm is that it controls the upper and lower limits of the number of DMs contained in a cluster, so as to avoid extreme situations such as a cluster that includes only one DM or a cluster that contains too many DMs (e.g., more than 70% of the total number of DMs). This section therefore analyzes the effects of $\underline{q}$ and $\overline{q}$ on the clustering results. We first build two clustering situations: (A) only the value of $\overline{q}$ is specified, and there is no limit on the value of $\underline{q}$; and (B) only the value of $\underline{q}$ is specified, and there


Fig. 5.2 Number of clusters K under situations A and B when $\overline{q}$ or $\underline{q}$ varies

is no limit on the value of $\overline{q}$. Figure 5.2 shows the clustering results under these two situations. The following observations can be made.

(1) When $\overline{q}$ increases from 1 to 20, the obtained number of clusters decreases from 20 to 7. This is because $\overline{q}$ determines the maximum number of DMs that a cluster can accommodate: as $\overline{q}$ increases, DMs that meet the clustering threshold are successively assigned to the same cluster. We also find that when $\overline{q} \ge 5$, the number of clusters K remains 7. Since the compatibility distances between some DMs do not meet the threshold, those DMs will not be assigned to a common cluster no matter how much further $\overline{q}$ increases.

(2) When $\underline{q}$ increases from 1 to 20, the number of clusters decreases from K = 7 to K = 0. This is because $\underline{q}$ determines the minimum number of DMs that a cluster must include. As shown in Fig. 5.2, when $\underline{q} \ge 6$, the number of clusters that meet the clustering requirements is 0. This indicates that no six DMs can be assigned to one cluster, because the maximum compatibility distance among them fails to meet the clustering threshold.

Based on the above analysis of the impact of $\underline{q}$ and $\overline{q}$ on the number of clusters, we can estimate suitable value ranges for them. For example, if we require the number of clusters obtained to be 6, then we can set $\underline{q} = 1$ and $\overline{q} \ge 5$.
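The role of the two limits can be illustrated with a small, hypothetical sketch. This is not the CDOOC algorithm itself (which prioritizes DMs by compatibility distance to the cluster center); it is a minimal greedy stand-in showing how an upper limit caps cluster size and a lower limit prunes undersized clusters. The opinions and the distance threshold below are invented for illustration.

```python
# Simplified sketch (not the exact CDOOC algorithm): greedy clustering in
# which a cluster may hold at most q_upper DMs, and clusters with fewer
# than q_lower DMs are discarded at the end. All numbers are hypothetical.

def greedy_capacity_clustering(opinions, threshold, q_upper, q_lower):
    """Assign each DM to the first compatible cluster with spare capacity."""
    clusters = []  # each cluster is a list of DM indices
    for i, o in enumerate(opinions):
        placed = False
        for cluster in clusters:
            # pairwise check: distance to every member must meet the threshold
            if len(cluster) < q_upper and all(
                abs(o - opinions[j]) <= threshold for j in cluster
            ):
                cluster.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    # enforce the lower limit: clusters that are too small are dropped
    return [c for c in clusters if len(c) >= q_lower]

opinions = [0.10, 0.12, 0.15, 0.50, 0.52, 0.90]
print(greedy_capacity_clustering(opinions, 0.1, q_upper=3, q_lower=2))
# [[0, 1, 2], [3, 4]]
# shrinking q_upper to 1 forces singleton clusters, mirroring the role of
# the upper limit discussed above
print(greedy_capacity_clustering(opinions, 0.1, q_upper=1, q_lower=1))
```

Raising `q_lower` here plays the role of $\underline{q}$: the singleton cluster containing DM index 5 is pruned, just as clusters that cannot reach the minimum size fail the clustering requirement.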


5.4 Conclusions

This chapter explores clustering analysis in SNLSDM-PL problems. A compatibility distance oriented off-center (CDOOC) clustering algorithm is put forward, which limits the number of DMs in a cluster to a fixed range. It uses the compatibility distance to the center as the basis for determining the clustering priority, and adopts a more rigorous distance measure, namely the compatibility distances between DMs rather than only the compatibility distance to the center. Extensions of the CDOOC clustering algorithm can be developed in three directions: (1) more complex information representation forms, such as preference relations and Z-numbers; (2) consideration of additional clustering attributes, such as the network topology of the DMs; (3) applicability to real decision scenarios.


Chapter 6

Minimum-Cost Consensus Model Considering Trust Loss

Abstract The consensus-reaching process (CRP) is an effective tool for reducing differences of opinion. In general, the costs and resources associated with the CRP are limited. Therefore, the concept of minimum-cost consensus (MCC) has been proposed and used widely in various group decision-making contexts. As an important resource for influencing decision-making, trust provides a common-sense perception that the opinion of a high-trust DM is considered to be widely recognized by others. We hold that the DM with high trust but low consensus has the right to reduce the consensus cost by voluntarily losing some trust. Consequently, an improved MCC model considering trust loss is developed. Finally, we present a numerical example to illustrate the feasibility of the proposed consensus model. A comparative analysis is conducted to explore the influence of trust loss on the CRP. Keywords Consensus-reaching process (CRP) · Improved minimum-cost consensus (IMCC) · Social network large-scale decision-making (SNLSDM) · Voluntary trust loss · Consensus cost

6.1 Problem Configuration

Let $E = \{e_1, \ldots, e_q\}$ $(q \ge 20)$ be a set of DMs. The DMs provide their opinions on an alternative as $(o_1, \ldots, o_q)$, where $o_l \in [0, 1]$ is a crisp number indicating the opinion given by DM $e_l \in E$. Another important piece of decision information is the sociomatrix, obtained by gathering the trust degrees provided by the DMs and denoted $TSoM = (tso_{lh})_{q \times q}$, where $tso_{lh}$ represents the trust degree $e_l$ assigns to $e_h$. Suppose that Algorithm 2 is used to obtain $K$ clusters, denoted $\{C^1, \ldots, C^K\}$. The initial clusters' opinions are expressed as $(h_1, \ldots, h_K)$. The initial weight vector of the clusters is $(\sigma_1, \ldots, \sigma_K)$. The initial group opinion is represented by $\pi$, where $\pi = \sum_{k=1}^{K} h_k \cdot \sigma_k$.
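A minimal sketch of these objects in code, with hypothetical numbers (the book assumes $q \ge 20$ DMs; three clusters suffice for illustration):

```python
# Hypothetical SNLSDM configuration from Sect. 6.1: clusters' opinions,
# cluster weights, and the initial group opinion pi = sum_k h_k * sigma_k.
h = [0.30, 0.50, 0.70]      # initial clusters' opinions, each in [0, 1]
sigma = [0.5, 0.3, 0.2]     # initial cluster weights, summing to 1

pi = sum(hk * sk for hk, sk in zip(h, sigma))
print(round(pi, 4))  # 0.44
```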

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_6


6.2 Consensus Measure and Consensus Cost Measure

Consensus measures are calculated in two ways: the first is based on the distance from the group opinion; the second is based on the distance between individual opinions [1–4]. Without loss of generality, this chapter follows the first way because it takes the weights of clusters into account.

Definition 6.1 Given the initial group opinion $\pi$ and the clusters' opinions $h_k$ $(k = 1, \ldots, K)$, the group consensus degree is defined as

$$consensus(h_1, \ldots, h_K) = 1 - \frac{1}{K} \sum_{k=1}^{K} |h_k - \pi| \tag{6.1}$$

Clearly, $0 \le consensus(h_1, \ldots, h_K) \le 1$. Let $\alpha$ $(0 \le \alpha \le 1)$ be a predefined consensus threshold. If the group consensus degree is less than $\alpha$, the CRP is applied; otherwise, we proceed with the selection process.

Again, we emphasize that trust relationships between DMs can be regarded as a reliable resource in SNLSDM events. A DM whose trust degree meets the trust requirements can voluntarily sacrifice some trust in exchange for a reduction in its consensus cost. Thus, we propose an improved MCC model, called IMCC, to account for this voluntary trust loss.

Definition 6.2 Let $\bar{h}_k$ denote the adjusted opinion of cluster $C^k$ and $cost_k$ the unit cost of moving $h_k$. The consensus cost of moving cluster $C^k$'s opinion from $h_k$ to $\bar{h}_k$ is defined as

$$CC_k = cost_k \cdot |\bar{h}_k - h_k| \tag{6.2}$$
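The two measures of Eqs. (6.1) and (6.2) can be sketched directly; the opinions, weights, and unit cost below are hypothetical.

```python
# Eq. (6.1): group consensus degree; Eq. (6.2): consensus cost of moving
# one cluster's opinion. All numeric inputs are hypothetical.

def consensus_degree(h, pi):
    """1 minus the mean absolute deviation of clusters' opinions from pi."""
    return 1 - sum(abs(hk - pi) for hk in h) / len(h)

def consensus_cost(cost_k, h_k, h_k_adj):
    """cost_k * |adjusted opinion - original opinion|."""
    return cost_k * abs(h_k_adj - h_k)

h = [0.30, 0.50, 0.70]
sigma = [0.5, 0.3, 0.2]
pi = sum(hk * sk for hk, sk in zip(h, sigma))       # group opinion
print(round(consensus_degree(h, pi), 4))             # 0.8467
print(round(consensus_cost(4.0, 0.30, 0.40), 4))     # 0.4
```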

6.3 Consensus-Reaching Iteration Based on Improved MCC

The constraint $|o_l - \pi| \le \epsilon$ in Eq. (2.1) aims to achieve two objectives: the first is to promote the convergence of individual opinions toward the group opinion, which is the basic mechanism of all CRPs; the second is to explicitly stipulate the maximum distance between individual opinions and the group opinion. On the basis of Eq. (2.1), the model in Eq. (2.2) imposes stricter consensus constraints because it requires not only that the difference between each individual opinion and the group opinion not exceed $\epsilon$, but also that the group consensus degree be no less than $\alpha$. That is, the model in Eq. (2.2) has two types of consensus constraints: type-$\epsilon$ and type-$\alpha$.

Once the clustering process is completed, we take the cluster as the basic unit in performing the CRP. We express the opinion movement in the CRP explicitly as

$$\bar{h}_k = \delta_k \cdot \pi + (1 - \delta_k) \cdot h_k, \tag{6.3}$$


where $\bar{h}_k$ is the adjusted opinion and $\delta_k \in [0, 1]$ is a feedback coefficient describing the extent to which the opinion of cluster $C^k$ moves toward the group opinion. Clearly, if $\delta_k = 1$, the cluster opinion is completely replaced by the group opinion, whereas if $\delta_k = 0$, the cluster opinion remains unchanged. The larger the value of $\delta_k$, the more the cluster opinion moves toward the group opinion, and the higher the resulting consensus cost.

Some studies relaxed the consensus constraints and considered only the type-$\alpha$ constraint (e.g., [1, 5]); these studies require only that the group consensus degree be no less than $\alpha$. Equation (6.3) is used as a constraint to ensure that clusters' opinions move toward the group opinion. By introducing Eq. (6.3) into the model in Eq. (2.2) and removing the constraint $|o_l - \pi| \le \epsilon$, the classical MCC model becomes

$$\min \sum_{k=1}^{K} cost_k \cdot |\bar{h}_k - h_k|$$

$$\text{s.t.} \begin{cases} \bar{\pi} = F(\bar{h}_1, \ldots, \bar{h}_K) \\ \bar{h}_k = \delta_k \cdot \pi + (1 - \delta_k) \cdot h_k, & k = 1, \ldots, K \\ 0 \le \delta_k \le 1, & k = 1, \ldots, K \\ consensus(\bar{h}_1, \ldots, \bar{h}_K) \ge \alpha \end{cases} \tag{6.4}$$

We denote the optimal solution to the model in Eq. (6.4) as $(\delta_1^*, \ldots, \delta_K^*)$.

Previous research has shown two functions of trust in the CRP: (i) assigning the weights of DMs according to trust degrees [6, 7], and (ii) detecting leadership and promoting opinion evolution [8]. Beyond those, we argue that trust can be used to modulate the adjustment amount of individual opinions (i.e., the consensus cost). A cluster with a high trust degree has an opinion that is recognized by others; however, if the cluster contributes little to consensus, its opinion may still have to move toward the group opinion. Using the MCC model in Ref. [9] or [10] to manage opinion adjustments yields the lowest overall consensus cost, but the consensus costs of some individuals may be too large for the corresponding clusters to accept. This study holds that a high-trust cluster has the right to voluntarily sacrifice part of its trust to avoid a large adjustment of its opinion. In doing so, the cluster suffers a loss of trust, which we regard as another cost of the CRP, called the voluntary trust loss cost and defined as

$$TLC_k = otso_k - \overline{otso}_k, \tag{6.5}$$

where $\overline{otso}_k$ is the updated trust degree of cluster $C^k$. Clearly, $0 \le TLC_k \le 1$, $k = 1, \ldots, K$.

By including the trust loss cost, the feedback formula in Eq. (6.3) is adjusted as follows:

$$\bar{h}_k = \bar{\delta}_k \cdot \pi + (1 - \bar{\delta}_k) \cdot h_k, \tag{6.6}$$

where $\bar{\delta}_k$ $(0 \le \bar{\delta}_k \le \delta_k^*)$ is the updated feedback coefficient of cluster $C^k$, written $\bar{\delta}_k = g(TLC_k, \delta_k^*)$. Here $g(TLC_k, \delta_k^*)$ is a function of $TLC_k$ and $\delta_k^*$ that captures how strongly the trust loss cost reduces the feedback coefficient. The specific expression of $g(TLC_k, \delta_k^*)$ follows two principles: (i) $0 \le g(TLC_k, \delta_k^*) < \delta_k^*$, to ensure the effectiveness of trust loss in reducing the feedback coefficient; (ii) the larger the trust loss $TLC_k$, the smaller the value of $g(TLC_k, \delta_k^*)$ should be. Without loss of generality, we set $g(TLC_k, \delta_k^*) = (1 - (TLC_k)^{\gamma})\,\delta_k^*$, where $\gamma \ge 0$ represents the adjustment intensity of $TLC_k$ on $\delta_k^*$; the smaller the value of $\gamma$, the higher the adjustment intensity.

The trust loss should be capped. If a cluster's trust degree is low (close to or equal to zero), its opinion is considered not to be recognized by others. Thus, a trust loss threshold $\xi$ is defined to constrain all adjusted trust degrees: they must not fall below the threshold, i.e., $\overline{otso}_k \ge \xi$, $k = 1, 2, \ldots, K$.

Proposition 6.1 Equation (6.6) promotes the improvement of individual consensus and reduces the individual consensus cost.

Proof Du et al. [1] proved that the use of Eq. (6.3) improves individual consensus. Since $\bar{\delta}_k$ is an updated feedback coefficient satisfying $\bar{\delta}_k \in [0, 1]$, using Eq. (6.6) also promotes the increase of individual consensus. Let $\overline{otso}_k \ge \xi$ to ensure that the trust loss is valid. Let $\bar{h}_k$ and $\bar{h}_k'$ be the adjusted opinions obtained using Eqs. (6.6) and (6.3), respectively. Then we have:

$$\begin{aligned}
& cost_k \cdot |\bar{h}_k - h_k| - cost_k \cdot |\bar{h}_k' - h_k| \\
&= cost_k \cdot |h_k - \bar{\delta}_k \cdot \pi - (1 - \bar{\delta}_k) \cdot h_k| - cost_k \cdot |h_k - \delta_k^* \cdot \pi - (1 - \delta_k^*) \cdot h_k| \\
&= cost_k \cdot |\bar{\delta}_k \cdot h_k - \bar{\delta}_k \cdot \pi| - cost_k \cdot |\delta_k^* \cdot h_k - \delta_k^* \cdot \pi| \\
&= cost_k \cdot \bar{\delta}_k \, |h_k - \pi| - cost_k \cdot \delta_k^* \, |h_k - \pi| \\
&= cost_k \cdot (\bar{\delta}_k - \delta_k^*) \cdot |h_k - \pi| \le 0. \qquad \square
\end{aligned}$$

This completes the proof of Proposition 6.1.
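Proposition 6.1 can be illustrated numerically with the function $g(TLC_k, \delta_k^*) = (1 - (TLC_k)^{\gamma})\,\delta_k^*$ introduced above, here with $\gamma = 1/2$; all numeric inputs are hypothetical.

```python
# Numerical illustration of Proposition 6.1: a voluntary trust loss TLC_k
# shrinks the feedback coefficient and hence the individual consensus cost.
# The numbers (pi, h_k, cost_k, delta_star, tlc) are hypothetical.

def g(tlc, delta_star, gamma=0.5):
    """g(TLC_k, delta_k*) = (1 - TLC_k**gamma) * delta_k*."""
    return (1 - tlc ** gamma) * delta_star

pi, h_k, cost_k = 0.44, 0.30, 4.0
delta_star = 0.6                      # optimal coefficient from Eq. (6.4)
tlc = 0.04                            # voluntary trust loss TLC_k

delta_bar = g(tlc, delta_star)        # updated coefficient, Eq. (6.6)
cost_without = cost_k * delta_star * abs(h_k - pi)   # cost under Eq. (6.3)
cost_with = cost_k * delta_bar * abs(h_k - pi)       # cost under Eq. (6.6)
print(round(delta_bar, 4), round(cost_without, 4), round(cost_with, 4))
# 0.48 0.336 0.2688
```

Since `delta_bar <= delta_star`, the cost difference is non-positive, exactly as the proof's final line states.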

Without loss of generality, we use the WA operator to calculate the group opinion, by which the aggregation function $F$ is materialized, and we compute the consensus measure from the distance to the group opinion. We stipulate that a cluster's voluntary trust loss does not affect its weight in the current decision-making event, although it may adversely affect the cluster in the next decision problem. Therefore, the model in Eq. (6.4) can be specified as follows:

$$\min \sum_{k=1}^{K} cost_k \cdot |\bar{h}_k - h_k|$$

$$\text{s.t.} \begin{cases} \bar{\pi} = \sum_{k=1}^{K} \bar{h}_k \sigma_k & (a) \\ \pi = \sum_{k=1}^{K} h_k \sigma_k & (b) \\ \frac{1}{K} \sum_{k=1}^{K} |\bar{h}_k - \bar{\pi}| \le \gamma & (c) \\ \bar{h}_k = \delta_k \cdot \pi + (1 - \delta_k) \cdot h_k, \; k = 1, \ldots, K & (d) \\ 0 \le \delta_k \le 1, \; k = 1, \ldots, K & (e) \end{cases} \tag{6.7}$$


where $\gamma = 1 - \alpha$. Constraints (a) and (b) calculate the adjusted and original group opinions, respectively; constraint (c) specifies the minimum group consensus; constraints (d) and (e) are the introduced explicit adjustment paths, which provide the opinion adjustment rule and the bounds on the feedback coefficients.

On the basis of $\delta_k^*$ and $otso_k$, we identify the following types of clusters in relation to trust loss:

(a) $0 < \delta_k^* \le 1$.
  (i) $otso_k > \xi$. Here $\delta_k^* > 0$ shows that cluster $C^k$ has an incentive to reduce its individual consensus cost, whereas $otso_k > \xi$ means that cluster $C^k$ can sacrifice trust in exchange for a reduction in its consensus cost because its trust degree exceeds the trust loss threshold. We mark this type of cluster as $C^{k^*}$, and the cluster set is denoted $C^* = \{C^{k^*} \mid 0 < \delta_{k^*}^* \le 1 \wedge otso_{k^*} > \xi\}$.
  (ii) $otso_k \le \xi$. The trust degree of the cluster fails to exceed the trust loss threshold; consequently, its opinion is adjusted according to the feedback coefficient $\delta_k^*$. We mark this type of cluster as $C^{k^{**}}$, and the corresponding cluster set is denoted $C^{**} = \{C^{k^{**}} \mid 0 < \delta_{k^{**}}^* \le 1 \wedge otso_{k^{**}} \le \xi\}$.
(b) $\delta_k^* = 0$. There is no need to adjust the feedback coefficients of these clusters. We mark this type of cluster as $C^{k^{***}}$, and the corresponding cluster set is denoted $C^{***} = \{C^{k^{***}} \mid \delta_{k^{***}}^* = 0\}$.

It is evident that $E = C^* \cup C^{**} \cup C^{***}$, $C^* \cap C^{**} = \emptyset$, $C^* \cap C^{***} = \emptyset$, and $C^{**} \cap C^{***} = \emptyset$. Algorithm 4 presents the procedure of the proposed IMCC model involving voluntary trust loss.

In some cases, the feedback coefficient of every cluster belonging to $C^{**}$ is raised to 1 (i.e., $\bar{\delta}_{k^{**}} = 1$) after using Eq. (6.8), yet the group consensus still fails to meet the consensus threshold. This indicates that some clusters in $C^*$ have excessively reduced their own consensus costs. In this case, we define a parameter $\eta$ $(\eta > 1)$ to increase the feedback coefficients of all clusters belonging to $C^*$ by the same proportion, and set up the following program with the goal of minimizing $\eta$:

$$\min \eta$$

$$\text{s.t.} \begin{cases} \bar{\pi} = \sum_{k=1}^{K} \bar{h}_k \sigma_k \\ \pi = \sum_{k=1}^{K} h_k \sigma_k \\ \frac{1}{K} \sum_{k=1}^{K} |\bar{h}_k - \bar{\pi}| \le \gamma \\ \bar{h}_{k^*} = \eta \bar{\delta}_{k^*} \cdot \pi + (1 - \eta \bar{\delta}_{k^*}) \cdot h_{k^*}, & C^{k^*} \in C^* \\ \bar{h}_{k^{**}} = 1 \cdot \pi + (1 - 1) \cdot h_{k^{**}}, & C^{k^{**}} \in C^{**} \\ \bar{h}_{k^{***}} = \delta_{k^{***}}^* \cdot \pi + (1 - \delta_{k^{***}}^*) \cdot h_{k^{***}}, & C^{k^{***}} \in C^{***} \\ 0 \le \bar{\delta}_k \le 1, & k = 1, \ldots, K \end{cases} \tag{6.9}$$
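The three-way split of clusters can be sketched directly. The $\delta_k^*$ and $otso_k$ values below are those reported later in Sect. 6.4 (Table 6.1); the thresholds $\xi = 0.5$ and $\xi = 0.55$ reproduce the membership change for $C^4$ discussed in Sect. 6.6.

```python
# Classify clusters into C*, C**, C*** from (delta_k*, otso_k) and the
# trust loss threshold xi, following the three cases above. Numeric inputs
# are the values reported in Table 6.1 of this chapter.

def classify(delta_star, otso, xi):
    c_star, c_2star, c_3star = [], [], []
    for k, (d, t) in enumerate(zip(delta_star, otso), start=1):
        if d == 0:
            c_3star.append(k)      # no adjustment needed
        elif t > xi:
            c_star.append(k)       # may trade trust for a lower cost
        else:
            c_2star.append(k)      # must follow delta_k*
    return c_star, c_2star, c_3star

delta_star = [0.5682, 0.5933, 0, 0.7161, 0.5514]
otso = [0.5512, 0.5357, 0.5942, 0.5229, 0.4321]
print(classify(delta_star, otso, xi=0.5))   # ([1, 2, 4], [5], [3])
print(classify(delta_star, otso, xi=0.55))  # C4 now falls into C**
```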


Algorithm 4 IMCC model considering voluntary trust loss

Require: Initial clusters' opinions $h_k$, $k = 1, \ldots, K$; overall trust degrees $otso_k$; weights of clusters $\sigma_k$; trust loss threshold $\xi$; consensus threshold $\alpha$.
Ensure: Final consensus cost and trust loss cost.
1: Use Eq. (6.7) to obtain the optimal feedback coefficients $\delta_1^*, \ldots, \delta_K^*$.
2: Divide the clusters into three types and place them in the sets $C^*$, $C^{**}$, and $C^{***}$, respectively.
3: Each cluster $C^{k^*} \in C^*$ provides its trust loss cost $TLC_{k^*}$, which satisfies the condition $\overline{otso}_{k^*} \ge \xi$.
4: Each cluster $C^{k^{**}} \in C^{**}$ is required to increase its feedback coefficient. Calculate the final consensus cost and trust loss cost using Eq. (6.8), minimizing the overall consensus cost of $C^{**}$:

$$\min \sum_{k^{**}} cost_{k^{**}} \cdot |\bar{h}_{k^{**}} - h_{k^{**}}|$$

$$\text{s.t.} \begin{cases} \bar{\pi} = \sum_{k=1}^{K} \bar{h}_k \sigma_k \\ \pi = \sum_{k=1}^{K} h_k \sigma_k \\ \frac{1}{K} \sum_{k=1}^{K} |\bar{h}_k - \bar{\pi}| \le \gamma \\ \bar{h}_{k^*} = \bar{\delta}_{k^*} \cdot \pi + (1 - \bar{\delta}_{k^*}) \cdot h_{k^*}, & C^{k^*} \in C^* \\ \bar{h}_{k^{**}} = \bar{\delta}_{k^{**}} \cdot \pi + (1 - \bar{\delta}_{k^{**}}) \cdot h_{k^{**}}, & C^{k^{**}} \in C^{**} \\ \bar{h}_{k^{***}} = \delta_{k^{***}}^* \cdot \pi + (1 - \delta_{k^{***}}^*) \cdot h_{k^{***}}, & C^{k^{***}} \in C^{***} \\ 0 \le \bar{\delta}_k \le 1, & k = 1, \ldots, K \end{cases} \tag{6.8}$$

where $\bar{\delta}_{k^*} = g(TLC_{k^*}, \delta_{k^*}^*)$.
5: Output the final consensus cost and trust loss cost.
6: End.
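The book solves the programs in Eqs. (6.7)–(6.8) by mathematical optimization over a separate $\delta_k$ per cluster. Purely to illustrate the mechanics of the type-$\alpha$ constraint, the sketch below makes a simplifying assumption not made in the book: all clusters share a single feedback coefficient, and bisection finds the smallest such coefficient that reaches the consensus threshold. Opinions and weights are hypothetical.

```python
# Simplified sketch of the type-alpha constraint in Eq. (6.7). Assumption
# (not the book's model): one shared feedback coefficient delta for all
# clusters; bisection finds the smallest delta reaching threshold alpha.

def consensus_after(delta, h, sigma):
    pi = sum(hk * sk for hk, sk in zip(h, sigma))        # original pi
    h_adj = [delta * pi + (1 - delta) * hk for hk in h]  # Eq. (6.3)
    pi_adj = sum(hk * sk for hk, sk in zip(h_adj, sigma))
    return 1 - sum(abs(hk - pi_adj) for hk in h_adj) / len(h_adj)

def min_uniform_delta(h, sigma, alpha, iters=60):
    lo, hi = 0.0, 1.0
    for _ in range(iters):        # consensus_after is monotone in delta
        mid = (lo + hi) / 2
        if consensus_after(mid, h, sigma) >= alpha:
            hi = mid
        else:
            lo = mid
    return hi

h = [0.2, 0.5, 0.9]
sigma = [0.4, 0.4, 0.2]
delta = min_uniform_delta(h, sigma, alpha=0.95)
print(round(delta, 4), round(consensus_after(delta, h, sigma), 4))
# 0.7973 0.95
```

Because all adjusted opinions move toward the same $\pi$, the deviations shrink by the factor $(1-\delta)$, which is why the search is monotone and bisection applies.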

Algorithm 5 presents the procedure for solving an SNLSDM problem considering trust constraints and voluntary trust loss.

Algorithm 5 Procedure for solving an SNLSDM problem using the TCop-Kmeans clustering algorithm and the IMCC model

Require: Initial individual opinions $o_l$ $(l = 1, 2, \ldots, q)$; complete sociomatrix $TSoM$; number of clusters $K$; consensus threshold $\alpha$; trust loss threshold $\xi$; parameters $\zeta_1$, $\zeta_2$.
Ensure: The final evaluation value of the alternative.
1: Use trust-similarity analysis to obtain the similarity matrix $SM$ and the undirected sociomatrix $UTM$.
2: Apply Algorithm 2 to classify the DMs into $K$ clusters, denoted $\{C^1, \ldots, C^K\}$.
3: Determine the weights of clusters and DMs via Eqs. (4.11) and (4.9), respectively.
4: Obtain the initial clusters' opinions $h_k$ $(k = 1, 2, \ldots, K)$ and the group opinion $\pi$ using the WA operator.
5: Compute the group consensus degree by Eq. (6.1). If $consensus(h_1, \ldots, h_K) \ge \alpha$, proceed to Step 8; otherwise, go to the next step.
6: Apply Algorithm 4 to guide the opinion adjustments and obtain the final opinions of the clusters and the group.
7: Output the final consensus cost and trust loss cost.
8: Output the final evaluation value of the alternative.
9: End.


The classical selection process for solving group decision-making problems consists of two phases [11, 12]: (1) an aggregation phase, which fuses individual opinions into a collective one by means of an aggregation function; and (2) an exploitation phase, which selects the best alternative(s) from the result of the previous phase. The decision event in this chapter involves only one alternative, and the decision purpose is to obtain an evaluation value of the alternative that meets the consensus threshold. The group opinion is obtained using the IMCC model. Therefore, the output of the selection process is the final evaluation value of the alternative.
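The two phases above can be sketched for the general multi-alternative case; opinions and weights below are hypothetical.

```python
# Aggregation (WA operator) then exploitation (pick the best alternative).
# Rows are DMs, columns are alternatives; all numbers are hypothetical.

def weighted_average(values, weights):
    return sum(v * w for v, w in zip(values, weights))

opinions = [     # opinions of 3 DMs over 2 alternatives
    [0.6, 0.3],
    [0.8, 0.5],
    [0.7, 0.9],
]
weights = [0.5, 0.3, 0.2]

# aggregation phase: one collective value per alternative
collective = [
    weighted_average([row[a] for row in opinions], weights)
    for a in range(2)
]
# exploitation phase: select the alternative with the highest collective value
best = max(range(2), key=lambda a: collective[a])
print([round(c, 3) for c in collective], best)  # [0.68, 0.48] 0
```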

6.4 Numerical Experiment

Here, following the illustrative example in Sect. 4.4, the clustering results have already been obtained (see Table 4.1). The initial clusters' opinions are (0.309, 0.4625, 0.42, 0.26, 0.6174), and the initial group opinion is $\pi = 0.4142$. Algorithm 4 is applied to manage the opinion adjustments. The optimal solution of Eq. (6.4) is $(\delta_1^*, \ldots, \delta_5^*) = (0.5682, 0.5933, 0, 0.7161, 0.5514)$. We obtain three types of cluster sets: $C^* = \{C^1, C^2\}$, $C^{**} = \{C^4, C^5\}$, and $C^{***} = \{C^3\}$. Because the overall trust degrees of clusters $C^1$ and $C^2$ are greater than the trust loss threshold, these clusters decide to reduce their respective consensus costs by losing some of their trust; they reduce their trust degrees to $\overline{otso}_1 = 0.53$ and $\overline{otso}_2 = 0.53$, respectively. Let $g(TLC_k, \delta_k^*) = (1 - \sqrt{otso_k - \overline{otso}_k})\,\delta_k^*$, $k = 1, 2$. The consensus result is presented in Table 6.1. The final group opinion is $\bar{\pi} = 0.42$, with an acceptable consensus of $consensus(\bar{h}_1, \ldots, \bar{h}_5) = 0.96$. The total consensus cost is 0.3109.

Table 6.1 Consensus result

| Cluster | $h_k$ | $\bar{h}_k$ | $\delta_k^*$ | $\bar{\delta}_k$ | $otso_k$ | $\overline{otso}_k$ | $CC_k$ |
|---|---|---|---|---|---|---|---|
| $C^1$ | 0.309 | 0.3675 | 0.5682 | 0.3473 | 0.5512 | 0.53 | 0.0585 |
| $C^2$ | 0.4625 | 0.4353 | 0.5933 | 0.3747 | 0.5357 | 0.53 | 0.0272 |
| $C^3$ | 0.42 | 0.42 | 0 | – | 0.5942 | 0.5942 | 0 |
| $C^4$ | 0.26 | 0.3733 | 0.7161 | 0.8694 | 0.5229 | 0.5229 | 0.1133 |
| $C^5$ | 0.6174 | 0.5055 | 0.5514 | 0.6015 | 0.4321 | 0.4321 | 0.1119 |
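A quick consistency check on Table 6.1: the total consensus cost reported in the text (0.3109) is the sum of the $CC_k$ column, and the roles of the cluster sets are visible in the coefficients (clusters in $C^*$ lowered theirs, clusters in $C^{**}$ raised theirs).

```python
# Sanity check of Table 6.1's figures (values copied from the table).
cc = [0.0585, 0.0272, 0, 0.1133, 0.1119]
delta_star = [0.5682, 0.5933, 0, 0.7161, 0.5514]
delta_bar = [0.3473, 0.3747, None, 0.8694, 0.6015]  # None: C3 unadjusted

print(round(sum(cc), 4))  # 0.3109, the total consensus cost in the text
# C1, C2 (in C*) reduced their coefficients; C4, C5 (in C**) increased them
assert delta_bar[0] < delta_star[0] and delta_bar[1] < delta_star[1]
assert delta_bar[3] > delta_star[3] and delta_bar[4] > delta_star[4]
```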


6.5 IMCC Model Versus Different MCC Models

This section compares the IMCC model with two typical MCC models: those of Zhang et al. [9] and Labella et al. [10]. Zhang et al. [9] took the distance from each DM to the group opinion as the only consensus measure and used the constraint $|\bar{h}_k - \bar{\pi}| \le \epsilon$ to specify the direction and extent of opinion adjustment; we call this the type-$\epsilon$ MCC. Labella et al. [10] considered a more comprehensive consensus model, adding another consensus measure (a minimum agreement among DMs, $consensus(\bar{h}_1, \ldots, \bar{h}_K) \ge \alpha$) to the model of Zhang et al. [9]; we call this the type-$\epsilon$-and-type-$\alpha$ MCC. Unlike these models, we argue that in some decisions the DMs care only about whether the consensus among DMs meets the requirements and impose no analogous requirement on the distance between individual opinions and the group opinion. Therefore, this study adopts only the type-$\alpha$ consensus constraint and uses Eq. (6.6) to specify the adjustment direction. We use the example in Labella et al. [10] to illustrate the differences among these MCC models.

Example 6.1 Consider a GDM problem with five DMs $E = \{e_1, \ldots, e_5\}$. They provide their assessments of an alternative as $(o_1, \ldots, o_5) = (0, 0.09, 0.36, 0.45, 1)$. The weights of the DMs are $\sigma = (0.375, 0.1875, 0.25, 0.0625, 0.125)$, and the unit consensus costs are $(cost_1, \ldots, cost_5) = (6, 3, 4, 1, 2)$. We apply the different types of constraints (type-$\epsilon$ [9], type-$\alpha$ (this chapter), and both [10]). The decision results are shown in Tables 6.2 and 6.3. We set $\alpha = 0.95$ and $\epsilon = 0.12$ to ensure that both the type-$\epsilon$ and type-$\alpha$ constraints are activated in the CRP.

Table 6.2 Comparison of optimal solutions using different types of MCC models

| Type | $\bar{o}_1$ | $\bar{o}_2$ | $\bar{o}_3$ | $\bar{o}_4$ | $\bar{o}_5$ | $\bar{\pi}$ | TCC |
|---|---|---|---|---|---|---|---|
| Type-$\epsilon$ [9] | 0.01 | 0.09 | 0.25 | 0.25 | 0.25 | 0.13 | 2.2 |
| Type-$\epsilon$-and-type-$\alpha$ [10] | 0.023 | 0.09 | 0.173 | 0.158 | 0.09 | 0.09 | 3 |
| Type-$\alpha$ | 0.137 | 0.202 | 0.26 | 0.26 | 0.26 | 0.203 | 3.226 |

Note: $\bar{o}_l$ is the adjusted opinion of $o_l$, $l = 1, \ldots, 5$; $\bar{\pi}$ is the group opinion computed from the adjusted opinions; 'TCC' stands for total consensus cost.

Table 6.3 Comparison of feedback coefficients using different types of MCC models

| Type of consensus constraint | $\delta_1^*$ | $\delta_2^*$ | $\delta_3^*$ | $\delta_4^*$ | $\delta_5^*$ |
|---|---|---|---|---|---|
| Type-$\epsilon$ [9] | 0.038 | 0 | 1.1 | 1.053 | 1.014 |
| Type-$\epsilon$-and-type-$\alpha$ [10] | 0.09 | 0 | 1.87 | 1.537 | 1.23 |
| Type-$\alpha$ | 0.525 | 0.659 | 1 | 1 | 1 |

We make the following observations and analysis:

(1) Using different MCC models leads to different adjusted opinions, because the models contain different types of consensus constraints. The type-$\epsilon$-and-type-$\alpha$ MCC model [10] adds another type of constraint to the type-$\epsilon$ MCC model [9] and thus has the strictest consensus constraints.

(2) If multiple rounds of consensus iterations are integrated into one iteration, the models of Labella et al. [10] and Zhang et al. [9] can produce feedback coefficients greater than 1, as shown in Table 6.3. This indicates that some opinions are over-adjusted, even though those adjustments achieve the minimum total consensus cost. Our model improves the group consensus in a single iteration (using Eq. (6.7)) while ensuring that all feedback coefficients lie within the interval [0, 1], so no opinion is over-adjusted. However, doing so increases the total consensus cost: as shown in Table 6.2, the type-$\alpha$ MCC model has the largest total consensus cost.

To better illustrate the differences among these MCC models, Fig. 6.1 visualizes the simulated total consensus costs obtained using the different types of MCC models under different values of $\epsilon$ and $\alpha$. We take the type-$\alpha$ MCC model as
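The type-$\alpha$ row of Table 6.2 can be re-derived from Example 6.1's data. One caveat: the book's Eq. (6.1) averages deviations over clusters, whereas here the same form is applied over the five DMs of the example.

```python
# Re-deriving the type-alpha row of Table 6.2 from Example 6.1's data:
# group opinion of the adjusted opinions and the group consensus degree
# (Eq. (6.1) applied over DMs instead of clusters).
o_adj = [0.137, 0.202, 0.26, 0.26, 0.26]          # type-alpha row
sigma = [0.375, 0.1875, 0.25, 0.0625, 0.125]

pi_adj = sum(v * w for v, w in zip(o_adj, sigma))
consensus = 1 - sum(abs(v - pi_adj) for v in o_adj) / len(o_adj)
print(round(pi_adj, 4), round(consensus, 4))  # 0.203 0.9524
```

The recomputed group opinion matches the 0.203 reported in Table 6.2, and the consensus degree exceeds the threshold $\alpha = 0.95$ of Example 6.1.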

Fig. 6.1 Simulation results of total consensus costs obtained using different types of MCC models under different values of $\epsilon$ and $\alpha$: (a) type-$\epsilon$-and-type-$\alpha$ MCC model; (b) type-$\epsilon$ MCC model; (c) type-$\alpha$ MCC model (this chapter)

94

6 Minimum-Cost Consensus Model Considering Trust Loss

an example: it is bound only by $\alpha$, not by $\epsilon$. Therefore, as long as the value of $\alpha$ is fixed, the total consensus cost stays the same no matter how $\epsilon$ changes. This result is reflected in Fig. 6.1c, where the surface of the total consensus cost always remains parallel to the $\epsilon$ axis.

6.6 Analysis of the Effect of Voluntary Trust Loss on the CRP

Suppose the optimal solution has been obtained using the type-$\alpha$ MCC model without considering voluntary trust loss (i.e., Eq. (6.7)); at this point, the minimum total consensus cost is achieved. Therefore, when the effect of trust loss is introduced, the newly obtained total consensus cost will be no less than the result of Eq. (6.7). This chapter defines the trust loss threshold $\xi$ to determine which clusters can implement trust losses and which clusters are forced to increase their own feedback coefficients, that is, which clusters belong to the set $C^*$ and which belong to $C^{**}$. To highlight the comparison, we set $\bar{\delta}_{k^*} = (1 - (otso_{k^*} - \overline{otso}_{k^*})^{1/20})\,\delta_{k^*}^*$ and $\overline{otso}_{k^*} = \xi$, where $C^{k^*} \in C^*$. Note that in order to enhance the effect of trust loss on the feedback coefficient and to highlight the comparative observations, we set $\gamma = 1/20$ instead of the $\gamma = 1/2$ used above. The simulation results when $\xi$ varies in the interval [0.4, 0.6] are shown in Fig. 6.2. Because the largest overall trust degree is 0.5942 and the smallest is 0.4321, we set the range of $\xi$ to [0.4, 0.6]. We find that when $\xi$ takes different values, some clusters are assigned to different sets. For example, if $\xi = 0.5$, we have $C^4 \in C^*$; if $\xi = 0.55$, however, $C^4 \in C^{**}$. The set to which a cluster is allocated directly determines whether the cluster has the right to sacrifice its trust in exchange for a reduction in consensus cost. Note that no matter how $\xi$ changes, we always have $C^3 \in C^{***}$, because the feedback coefficient of cluster $C^3$ is always 0. This indicates that cluster $C^3$ never needs to trade trust loss for a reduction in consensus cost, even though it has the greatest overall trust degree.
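The claim that a smaller $\gamma$ yields a higher adjustment intensity can be checked directly with $g(TLC, \delta^*) = (1 - TLC^{\gamma})\,\delta^*$. The settings $\gamma = 1/2$ and $\gamma = 1/20$ are the two used in this chapter; $\delta^*$ and the trust loss value are hypothetical.

```python
# Effect of gamma in g(TLC, delta*) = (1 - TLC**gamma) * delta*: for a
# trust loss TLC in (0, 1), a smaller gamma shrinks the feedback
# coefficient more strongly. delta_star and tlc are hypothetical.
delta_star, tlc = 0.7, 0.05

for gamma in (1 / 2, 1 / 20):
    delta_bar = (1 - tlc ** gamma) * delta_star
    print(gamma, round(delta_bar, 4))
```

With $\gamma = 1/2$ the coefficient drops only moderately, while with $\gamma = 1/20$ the same trust loss pushes it close to zero, which is why the smaller exponent is said to have the higher adjustment intensity.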
The simulation results of some parameters related to the CRP when $\xi$ varies are depicted in Fig. 6.3. We make the following observations:

(1) When $\xi$ is set to different values, some individual consensus costs fluctuate significantly. For instance, if $\xi = 0.4$, the consensus cost of cluster $C^5$ is 0.1565; if $\xi = 0.45$, it increases to 0.2032. This is because when $\xi = 0.45$, cluster $C^5$ must adjust its opinion with a larger feedback coefficient (actually reaching 1, as shown in Fig. 6.3b). Clusters $C^1$, $C^2$, and $C^4$ reduce their respective consensus costs through voluntary trust losses, resulting in $\bar{\delta}_5 = 1$. Compared with the setting $\xi = 0.4$, their feedback coefficients decrease significantly, and their individual consensus costs are lower as a result

Fig. 6.2 Result of which set ($C^*$, $C^{**}$, or $C^{***}$) each of the clusters $C^1$–$C^5$ belongs to when $\xi$ varies

(see Fig. 6.3a). This shows that the IMCC model achieves the goal of reducing the consensus costs of some individuals through trust losses. (2) If multiple clusters have trust degrees greater than ξ and they all choose to implement a trust loss (see Fig. 6.3c), a solution of Eq. (6.8) that meets the consensus threshold is likely to be unavailable. In this case, Eq. (6.9) is used to calculate the parameter η to increase some feedback coefficients. As ξ increases, the number of clusters with trust degrees greater than ξ decreases, and thus η decreases.

On the basis of these discussions, the advantages and limitations of the proposed IMCC model can be summarized as follows. The most significant feature and advantage of the IMCC model is that it introduces trust loss into classical MCC models and enhances the flexibility of opinion adjustments. DMs (or clusters) can voluntarily sacrifice some or all of their own trust degrees in exchange for a reduction in their consensus costs. The IMCC model also has some limitations. Several parameters and functions used in the model, including the consensus threshold α, the trust loss threshold ξ, and the updated feedback coefficient δ_k, should be set in advance. There have been many achievements in research on setting the consensus threshold [5, 11, 13]. This chapter provides the simulation results of consensus costs, updated feedback coefficients, and trust losses when ξ changes. In addition, the reduction of individual consensus costs may lead to an increase in the overall consensus cost.

Fig. 6.3 Results of some parameters related to the CRP when ξ varies

6.7 Conclusions

To deal with differences of opinion in SNLSDM problems, we propose an IMCC model that takes voluntary trust loss into account. The model states that a cluster with high trust but low consensus can sacrifice some of its own trust to reduce its consensus cost. The following limitations need to be further addressed. Some individuals reduce their own consensus costs through voluntary trust losses, but doing so increases the total consensus cost; balancing the consensus cost and trust loss between individuals and the group is an important research issue. Furthermore, the unit consensus cost affects the measurement of individual consensus cost [14], and this influence is transmitted to the individual's decision about trust loss. Therefore, it is meaningful to study the impact of the unit consensus cost on the IMCC model and the determination of the unit cost.

References

1. Du, Z. J., Yu, S. M., & Xu, X. H. (2020). Managing noncooperative behaviors in large-scale group decision-making: Integration of independent and supervised consensus-reaching models. Information Sciences, 531, 119–138.
2. Palomares, I., Martínez, L., & Herrera, F. (2014). A consensus model to detect and manage noncooperative behaviors in large-scale group decision making. IEEE Transactions on Fuzzy Systems, 22(3), 516–530.
3. Wu, Z., & Xu, J. (2016). Managing consistency and consensus in group decision making with hesitant fuzzy linguistic preference relations. Omega, 65, 28–40.
4. Xu, X. H., Du, Z. J., Chen, X. H., & Cai, C. G. (2019). Confidence consensus-based model for large-scale group decision making: A novel approach to managing non-cooperative behaviors. Information Sciences, 477, 410–427.
5. Xu, X. H., Du, Z. J., & Chen, X. H. (2015). Consensus model for multi-criteria large-group emergency decision making considering non-cooperative behaviors and minority opinions. Decision Support Systems, 79, 150–160.
6. Wu, J., Chiclana, F., Fujita, H., & Herrera-Viedma, E. (2017). A visual interaction consensus model for social network group decision making with trust propagation. Knowledge-Based Systems, 122, 39–50.
7. Wu, J., Chiclana, F., & Herrera-Viedma, E. (2015). Trust based consensus model for social network in an incomplete linguistic information context. Applied Soft Computing, 35, 827–839.
8. Dong, Y., Ding, Z., Martínez, L., & Herrera, F. (2017). Managing consensus based on leadership in opinion dynamics. Information Sciences, 397, 187–205.
9. Zhang, G., Dong, Y., Xu, Y., & Li, H. (2011). Minimum-cost consensus models under aggregation operators. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 41(6), 1253–1261.
10. Labella, Á., Liu, H., Rodríguez, R. M., & Martínez, L. (2020). A cost consensus metric for consensus reaching processes based on a comprehensive minimum cost model. European Journal of Operational Research, 281(2), 316–331.
11. Tang, M., Liao, H., Xu, J., Streimikiene, D., & Zheng, X. (2020). Adaptive consensus reaching process with hybrid strategies for large-scale group decision making. European Journal of Operational Research, 282(3), 957–971.
12. Yu, S. M., Du, Z. J., Lin, X. D., Luo, H. Y., & Wang, J. Q. (2020). A stochastic dominance-based approach for hotel selection under probabilistic linguistic environment. Mathematics, 8(9), 1525.
13. Chao, X., Dong, Y., Kou, G., & Peng, Y. (2022). How to determine the consensus threshold in group decision making: A method based on efficiency benchmark using benefit and cost insight. Annals of Operations Research, 316(1), 143–177.
14. Xu, W., Chen, X., Dong, Y., & Chiclana, F. (2021). Impact of decision rules and non-cooperative behaviors on minimum consensus cost in group decision making. Group Decision and Negotiation, 30(6), 1239–1260.

Chapter 7

Punishment-Driven Consensus-Reaching Model Considering Trust Loss

Abstract This chapter proposes a punishment-driven consensus-reaching model for social network large-scale decision-making (SNLSDM) problems. The model identifies four categories of consensus scenarios (namely high-high, high-low, low-high, and low-low) through a consensus metric and a trust metric. Different adjustment strategies are established, and the moderating effect of trust loss on consensus reaching and consensus cost is investigated. Keywords Social network large-scale decision-making (SNLSDM) · Punishment-driven consensus-reaching model (PDCRM) · Trust loss · Fuzzy preference relation (FPR) · Punishment coefficient

7.1 Problem Configuration

SNLSDM is defined as a decision situation in which a large number of DMs connected via a social network seek to achieve a common solution from a set of alternatives. Formally, an SNLSDM problem consists of (i) a set of alternatives, X = {x_1, x_2, ..., x_m} (m ≥ 2), which represent possible solutions to the problem; and (ii) a set of decision-makers (DMs), E = {e_1, e_2, ..., e_q} (q ≥ 20), who express their preferences over the alternatives. DMs need to provide two pieces of decision-making information: individual preferences and statements of trust relationships.

• Let P^l = (p^l_{ij})_{m×m} be a fuzzy preference relation (FPR) given by DM e_l ∈ E, where p^l_{ij} ∈ [0, 1] indicates the preference degree of x_i over x_j, such that p^l_{ij} + p^l_{ji} = 1 and p^l_{ii} = 0.5 for all i, j = 1, 2, ..., m. If p^l_{ij} > 0.5, then x_i is preferred to x_j; the larger p^l_{ij}, the greater the preference degree of x_i over x_j. If p^l_{ij} = 0.5, there is no difference between x_i and x_j. An FPR can be considered efficient if the number of alternatives is small [1]. In this chapter, we use five alternatives in the illustrative example (see Sects. 7.5 and 8.2).

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_7


• Another important piece of decision information is the trust sociomatrix, obtained by gathering the trust degrees provided by DMs and denoted as TSoM = (tso_{lh})_{q×q}, where tso_{lh} represents the trust degree from e_l to e_h.
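The FPR conditions listed above can be checked mechanically. Below is a minimal NumPy sketch; the function name `is_fpr` and the toy matrices are our own illustrations, not from the book:

```python
import numpy as np

def is_fpr(P, tol=1e-9):
    """Check the FPR conditions: p_ij in [0,1], p_ij + p_ji = 1, p_ii = 0.5."""
    return (
        np.all((P >= -tol) & (P <= 1 + tol))
        and np.allclose(P + P.T, 1.0, atol=tol)   # additive reciprocity
        and np.allclose(np.diag(P), 0.5, atol=tol)
    )

P = np.array([[0.5, 0.7, 0.9],
              [0.3, 0.5, 0.6],
              [0.1, 0.4, 0.5]])   # x1 preferred to x2 and x3
assert is_fpr(P)
assert not is_fpr(np.array([[0.5, 0.9], [0.3, 0.5]]))  # violates reciprocity
```

The trust sociomatrix needs no such constraint beyond tso_{lh} ∈ [0, 1], so a plain q×q array suffices in practice.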

7.2 Computing the Consensus Degree

Suppose H^{k,t} = (h^{k,t}_{ij})_{m×m} is the preference of cluster C^k (k = 1, 2, ..., K), and P^{G,t} = (p^{G,t}_{ij})_{m×m} is the group preference in the t-th iteration. Note that the initial opinions of clusters are obtained by using Algorithm 2 with only CL constraints considered (see Sect. 8.2.2). The consensus measures can be calculated at three levels:

Level 1. Individual consensus degree for the pair of alternatives (x_i, x_j):

ICL(h^{k,t}_{ij}) = 1 − |h^{k,t}_{ij} − p^{G,t}_{ij}|    (7.1)

Level 2. Individual consensus degree for the preference H^{k,t}:

ICL(H^{k,t}) = [ Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} ICL(h^{k,t}_{ij}) ] / [ m(m−1)/2 ]    (7.2)

Level 3. Group consensus degree:

GCL^t = (1/K) Σ_{k=1}^{K} ICL(H^{k,t})    (7.3)

Clearly, 0 ≤ GCL^t ≤ 1. Usually, a consensus threshold GCL should be set in advance. If GCL^t ≥ GCL, the current group consensus is sufficiently high and the selection process can follow; otherwise, the CRP is implemented. Determining the consensus threshold is an important matter, which has been discussed in many studies [2, 3].

The consensus degree at Level 2 is used to identify the clusters that contribute less to the consensus, while the consensus degree at Level 1 is designed to find the low-consensus positions in the FPRs of an identified cluster. The three-level consensus degrees will then be used to generate adjustment strategies.
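The three levels can be computed directly from the cluster and group preference matrices. A minimal NumPy sketch (function names and the toy data are illustrative assumptions, not from the book):

```python
import numpy as np

def icl_pair(H_k, P_G):
    """Level 1 (Eq. 7.1): consensus degree for each pair of alternatives."""
    return 1.0 - np.abs(H_k - P_G)

def icl_cluster(H_k, P_G):
    """Level 2 (Eq. 7.2): average of Level-1 degrees over the upper triangle."""
    m = H_k.shape[0]
    iu = np.triu_indices(m, k=1)          # pairs (i, j) with i < j
    return icl_pair(H_k, P_G)[iu].mean()  # mean divides by m(m-1)/2

def gcl(clusters, P_G):
    """Level 3 (Eq. 7.3): group consensus degree over the K clusters."""
    return np.mean([icl_cluster(H_k, P_G) for H_k in clusters])

# Toy example: m = 3 alternatives, K = 2 clusters with valid FPRs
H1 = np.array([[0.5, 0.6, 0.7], [0.4, 0.5, 0.8], [0.3, 0.2, 0.5]])
H2 = np.array([[0.5, 0.4, 0.6], [0.6, 0.5, 0.7], [0.4, 0.3, 0.5]])
P_G = 0.5 * (H1 + H2)                     # equal-weight group preference
print(round(gcl([H1, H2], P_G), 4))       # → 0.9333
```

Because only the upper triangle enters Eq. (7.2), the reciprocal lower-triangle entries are never double-counted.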

7.3 Logic for Solving CRP Using Trust Loss

Previous studies show that trust performs three functions for the CRP: (i) assigning the weights of DMs according to trust relationships [4, 5], (ii) identifying leaders among DMs and facilitating the evolution of opinions [6], and (iii) generating adjustment suggestions [7]. However, almost all of these studies failed to analyze the relationship between trust loss and consensus cost. We consider that a high-trust but low-consensus cluster can reduce its consensus cost by actively losing part of its trust degree. This section designs a punishment-driven consensus mechanism that uses the trust loss to moderate the CRP. Inspired by the definition of linear consensus cost, we define the consensus cost of moving a cluster's preference from H^{k,t} to H^{k,t+1} as follows:

CC^{k,t+1} = cost_k · d(H^{k,t}, H^{k,t+1})    (7.4)

where cost_k is the unit consensus cost of cluster C^k and d(H^{k,t}, H^{k,t+1}) is the distance between H^{k,t} and H^{k,t+1}, such that

d(H^{k,t}, H^{k,t+1}) = [ (1/(m(m−1)/2)) Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} |h^{k,t}_{ij} − h^{k,t+1}_{ij}| ]^{1/2}    (7.5)

Clearly, 0 ≤ d(H^{k,t}, H^{k,t+1}) ≤ 1. Without loss of generality, we set cost_k = 1 for any k = 1, 2, ..., K.

Figure 7.1 visualizes the moderating effect of trust loss on the CRP. The blue rectangle presents the general procedure of traditional consensus-reaching models (e.g., [3, 8]). Our proposed feedback mechanism considering trust loss is shown in the green rectangle. For a cluster whose individual consensus is less than the consensus threshold, if its trust degree is large enough, the cluster can sacrifice some of its trust degree to reduce its consensus cost. It is important to note that a cluster cannot indefinitely compensate for the consensus cost by reducing trust, because a low trust degree means that the preference of the cluster is no longer recognized by others. Therefore, the trust threshold TL is introduced to specify the bottom line of trust loss. We require otso^t(C^k) ≥ TL to ensure that the trust loss is valid.

Fig. 7.1 The logical diagram of the moderating effect of trust loss on the CRP

Remark 7.1 Sacrificing trust in exchange for a reduction of individual consensus cost has practical significance. An example is provided here. Trust represents the recognition of a DM based on reputation, experience, knowledge, and other relevant factors, which is directly related to weight allocation. According to the weight-determining method proposed in Chap. 4, the higher the trust degree, the greater the discourse power of the corresponding DM in the group. Preference adjustment incurs individual consensus costs, which can be seen as a loss of DMs' self-interest. To avoid damage to its self-interest, a DM is motivated to reduce the degree of preference adjustment in the current round of decision-making by reducing its own trust.
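Eqs. (7.4) and (7.5) translate into a few lines of code. A hedged sketch with cost_k = 1, as assumed in the text (the 2×2 toy matrices are our own illustration):

```python
import numpy as np

def distance(H_old, H_new):
    """Eq. (7.5): square root of the mean absolute upper-triangle deviation."""
    m = H_old.shape[0]
    iu = np.triu_indices(m, k=1)
    return np.sqrt(np.abs(H_old - H_new)[iu].mean())

def consensus_cost(H_old, H_new, cost_k=1.0):
    """Eq. (7.4): linear consensus cost of moving H^{k,t} to H^{k,t+1}."""
    return cost_k * distance(H_old, H_new)

H_old = np.array([[0.5, 0.8], [0.2, 0.5]])
H_new = np.array([[0.5, 0.6], [0.4, 0.5]])
print(round(consensus_cost(H_old, H_new), 4))  # → 0.4472  (sqrt of 0.2)
```

Since entries of an FPR lie in [0, 1], each upper-triangle deviation is at most 1, which gives the bound 0 ≤ d(·,·) ≤ 1 stated above.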

7.4 Consensus Scenario Classification and Adjustment Strategies

Definition 7.1 Let otso^t(C^k) be the overall trust degree of cluster C^k in the t-th iteration, and TL be the trust threshold. The active trust loss is computed by

ATL^t(C^k) = f(otso^t(C^k), TL),    (7.6)

where f : (otso^t(C^k), TL) → [0, 1] is a mapping function. Without loss of generality, we use the linear trust loss function as follows:

ATL^t(C^k) = (otso^t(C^k) − TL) · π^t(C^k),    (7.7)

where π^t(C^k) (0 ≤ π^t(C^k) ≤ 1) is called the trust loss coefficient given by cluster C^k. The closer the coefficient is to 1, the greater the trust loss.

Considering the role of trust loss, there are two thresholds associated with the CRP, i.e., GCL and TL. As a consequence, the following four categories of consensus scenarios are obtained, for k = 1, 2, ..., K:

• High-high (C^k ∈ HH): ICL(H^{k,t}) ≥ GCL and otso^t(C^k) ≥ TL;
• High-low (C^k ∈ HL): ICL(H^{k,t}) ≥ GCL and otso^t(C^k) < TL;
• Low-high (C^k ∈ LH): ICL(H^{k,t}) < GCL and otso^t(C^k) ≥ TL;
• Low-low (C^k ∈ LL): ICL(H^{k,t}) < GCL and otso^t(C^k) < TL.
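The four scenarios reduce to two threshold tests per cluster. A small sketch (the threshold values GCL = 0.88 and TL = 0.5 are assumed purely for illustration):

```python
def classify(icl_k, otso_k, GCL=0.88, TL=0.5):
    """Map a cluster's Level-2 consensus degree and overall trust degree
    to one of the four scenarios HH, HL, LH, LL."""
    if icl_k >= GCL:
        return "HH" if otso_k >= TL else "HL"
    return "LH" if otso_k >= TL else "LL"

# One cluster per scenario (illustrative degrees)
assert classify(0.90, 0.6) == "HH"   # close to group, highly trusted
assert classify(0.90, 0.4) == "HL"   # close to group, little trusted
assert classify(0.85, 0.6) == "LH"   # may trade trust for lower cost
assert classify(0.85, 0.4) == "LL"   # must adjust, no trust to trade
```

Only LH clusters hold trust above TL while sitting below the consensus threshold, so only they qualify for the trust-for-cost trade described next.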

Figure 7.2 presents a sample graph showing the above scenarios. The largest red point is the group preference. The color-coded circles represent clusters belonging to different types of consensus scenarios, and the black dots are cluster centers. Each cluster is surrounded by several smiley faces, indicating its overall trust degree: the more smiley faces, the higher the overall trust of the associated cluster. According to the four consensus scenarios, we present different adjustment strategies for the opinions of the clusters.

Fig. 7.2 Graphic display of four consensus scenarios

(1) Strategy for the high-high scenario (S-HH). The cluster has a high trust degree and a high consensus degree, which indicates that its preference is close to the group preference and is highly trusted. In this case, there is no need to adjust the preference.

(2) Strategy for the high-low scenario (S-HL). The cluster has a low trust degree but a high consensus degree. Although the trust the cluster receives from others is not high enough, its preference is close to the majority. In this case, its preference does not need to be adjusted.

(3) Strategy for the low-high scenario (S-LH). The cluster has a high trust degree but a low consensus degree. As the cluster contributes less to the consensus, its preference should be adjusted. Meanwhile, the cluster can reduce its consensus cost by sacrificing appropriate trust, because its overall trust is greater than the trust threshold. The corresponding adjustment strategy is presented as follows.

(a) Identification rule for the clusters. Let CLU^{LH} be the set of clusters belonging to the scenario LH, such that

CLU^{LH} = { C^k | ICL(H^{k,t}) < GCL ∧ otso^t(C^k) ≥ TL }    (7.8)

(b) Identification rule for the pairs of alternatives. For any cluster C^k ∈ CLU^{LH}, we identify the positions that should be modified by

POS^{k,t,LH} = { (i, j) | ICL(h^{k,t}_{ij}) < GCL ∧ C^k ∈ CLU^{LH} }    (7.9)

(c) Generation of feedback recommendations. To soften the human supervision, we introduce the concept of the punishment coefficient, initially defined by Yu et al. [9] and computed by


pc^{k,t}_{ij} = ((GCL − ICL(h^{k,t}_{ij})) / GCL)^ρ, if ICL(h^{k,t}_{ij}) < GCL; pc^{k,t}_{ij} = 0, if ICL(h^{k,t}_{ij}) ≥ GCL    (7.10)

where ρ (0 ≤ ρ ≤ 1) is the power of the punishment coefficient, used to indicate the urgency of reaching consensus. In this chapter, we set ρ = 0.5. pc^{k,t}_{ij} represents the degree to which cluster C^k's preference is required to move toward the group preference. Clearly, 0 ≤ pc^{k,t}_{ij} ≤ 1 for any i, j = 1, 2, ..., m.

(d) Implementation of the moderating effect of trust loss. Given pc^{k,t}_{ij}, cluster C^k can reduce the punishment coefficient by actively giving a trust loss coefficient π^t(C^k) to sacrifice part of its trust degree. The following adjustment equation is used:

h^{k,t+1}_{ij} = pc^{k,t}_{ij} (1 − ATL^t(C^k)/otso^t(C^k)) · p^{G,t}_{ij} + (1 − pc^{k,t}_{ij} (1 − ATL^t(C^k)/otso^t(C^k))) · h^{k,t}_{ij}    (7.11)

where h^{k,t+1}_{ij} is the adjusted preference. Let apc^{k,t}_{ij} denote the punishment coefficient updated by the trust loss, such that

apc^{k,t}_{ij} = pc^{k,t}_{ij} (1 − ATL^t(C^k)/otso^t(C^k)) = pc^{k,t}_{ij} (1 − π^t(C^k)(1 − TL/otso^t(C^k)))    (7.12)

We can see that cluster C^k should use the degree pc^{k,t}_{ij} to adjust its preference, regardless of the trust loss. However, as otso^t(C^k) ≥ TL, its punishment coefficient can be reduced. Therefore, Eq. (7.11) can be rewritten as

h^{k,t+1}_{ij} = apc^{k,t}_{ij} · p^{G,t}_{ij} + (1 − apc^{k,t}_{ij}) · h^{k,t}_{ij}    (7.13)

(4) Strategy for the low-low scenario (S-LL). The cluster has a low trust degree and a low consensus degree. In this case, the cluster's preference needs to be adjusted, and the cluster cannot reduce its consensus cost by losing trust. The adjustment strategy is implemented as follows.

(a) Identification rule for the clusters. Let CLU^{LL} be the set of clusters belonging to the scenario LL, such that

CLU^{LL} = { C^k | ICL(H^{k,t}) < GCL ∧ otso^t(C^k) < TL }    (7.14)

(b) Identification rule for the pairs of alternatives. For any cluster C^k ∈ CLU^{LL}, we identify the positions that should be modified by

POS^{k,t,LL} = { (i, j) | ICL(h^{k,t}_{ij}) < GCL ∧ C^k ∈ CLU^{LL} }    (7.15)

Fig. 7.3 Graphic display of proposed adjustment strategies

(c) Generation of feedback recommendations. The calculation of the punishment coefficient is the same as in Eq. (7.10). The following formula is used for opinion adjustment:

h^{k,t+1}_{ij} = pc^{k,t}_{ij} · p^{G,t}_{ij} + (1 − pc^{k,t}_{ij}) · h^{k,t}_{ij}    (7.16)

The above-mentioned adjustment strategies are depicted in Fig. 7.3. Notice that, since the cluster is regarded as the basic decision unit in this chapter, the adjustment in the CRP is not refined to the DMs' preferences within a cluster but applies to the cluster's preference. Therefore, the center of each cluster does not change relative to the cluster itself; as reflected in Fig. 7.3, the relative position of each black dot and the circle to which it belongs does not change. Based on the above analysis, we can conclude the moderating effect of trust loss on the CRP, as stated in Theorem 7.1. In other words, the adjustment strategies proposed in this chapter are effective for consensus improvement, but note that the reduction in consensus cost may lead to more rounds of consensus iterations.

Theorem 7.1 If the trust loss is taken into account in the CRP, the consensus cost of a cluster can be reduced by sacrificing part of its trust degree.

Proof Suppose H^{k,t} is the identified preference in the t-th iteration. Let otso^t(C^k) ≥ TL to ensure that the trust loss is valid. Let H^{k,t+1} and H'^{k,t+1} be the preferences adjusted by using Eqs. (7.13) and (7.16), respectively. Then, according to Eq. (7.4), we have that

CC^{k,t+1} − CC'^{k,t+1}
= cost_k · [ d(H^{k,t}, H^{k,t+1}) − d(H^{k,t}, H'^{k,t+1}) ]
= cost_k · [ (1/(m(m−1)/2)) Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} |h^{k,t}_{ij} − h^{k,t+1}_{ij}| ]^{1/2} − cost_k · [ (1/(m(m−1)/2)) Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} |h^{k,t}_{ij} − h'^{k,t+1}_{ij}| ]^{1/2},

where CC^{k,t+1} and CC'^{k,t+1} are the consensus costs with and without considering trust loss, respectively. As

|h^{k,t+1}_{ij} − h^{k,t}_{ij}| − |h'^{k,t+1}_{ij} − h^{k,t}_{ij}|
= |apc^{k,t}_{ij} · p^{G,t}_{ij} + (1 − apc^{k,t}_{ij}) · h^{k,t}_{ij} − h^{k,t}_{ij}| − |pc^{k,t}_{ij} · p^{G,t}_{ij} + (1 − pc^{k,t}_{ij}) · h^{k,t}_{ij} − h^{k,t}_{ij}|
= |apc^{k,t}_{ij} · (p^{G,t}_{ij} − h^{k,t}_{ij})| − |pc^{k,t}_{ij} · (p^{G,t}_{ij} − h^{k,t}_{ij})|
= (apc^{k,t}_{ij} − pc^{k,t}_{ij}) · |p^{G,t}_{ij} − h^{k,t}_{ij}|
= pc^{k,t}_{ij} · [ (1 − π^t(C^k)(1 − TL/otso^t(C^k))) − 1 ] · |p^{G,t}_{ij} − h^{k,t}_{ij}|
≤ 0,

we have that

CC^{k,t+1} − CC'^{k,t+1}
= cost_k · [ (1/(m(m−1)/2)) Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} |h^{k,t}_{ij} − h^{k,t+1}_{ij}| ]^{1/2} − cost_k · [ (1/(m(m−1)/2)) Σ_{i=1}^{m−1} Σ_{j=i+1}^{m} |h^{k,t}_{ij} − h'^{k,t+1}_{ij}| ]^{1/2}
≤ 0.

This completes the proof of Theorem 7.1. □

By combining the TCop-Kmeans algorithm in Chap. 4 and the punishment-driven feedback mechanism in this chapter, a punishment-driven consensus-reaching model (PDCRM) for SNLSDM problems with FPRs is obtained (see Algorithm 6). The model fully exploits the moderating role of trust in the processes of clustering and consensus reaching: in the clustering process, the moderating effect of trust is implemented in the form of trust constraints, while in the CRP it is presented in the form of trust losses.

7.5 Analysis of the Moderating Effect of Trust Loss on the CRP

This section analyzes the moderating effect of trust loss on the CRP in terms of consensus cost, trust degree, and consensus degree. The data used in this section are from Sect. 8.2. Three clusters (i.e., C^2, C^3, C^4) contribute less to the consensus. As otso^0(C^3) > TL and otso^0(C^4) > TL, clusters C^3 and C^4 can reduce their consensus costs by sacrificing their own trust degrees. Figure 7.4 visualizes the impact of


Algorithm 6 The procedure of PDCRM for SNLSDM problems with FPRs
Require: Initial FPRs P^{l,0} (l = 1, 2, ..., q), complete sociomatrix TM, number of clusters K, cannot-link matrix Con_{≠}, parameters GCL, TL, σ.
Ensure: The ranking of alternatives.
1: Let t = 0. Classify the DMs into K clusters {C^1, ..., C^K} by Algorithm 2. The preferences of the clusters are obtained as H^{k,t}, k = 1, 2, ..., K.
2: Determine the weights of clusters by Eq. (4.8), denoted by σ^{k,t}, k = 1, 2, ..., K.
3: Compute the three-level consensus degrees via Eqs. (7.1)–(7.3). If GCL^t ≥ GCL, then proceed to Step 5; otherwise, go to the next step.
4: Identify consensus scenarios and generate the corresponding adjustment strategies to guide clusters in modifying their preferences. Let t = t + 1, and then return to Step 2.
5: Output the final iterative time t* = t and the final clusters' preferences H^{k,t*}, k = 1, 2, ..., K.
6: Calculate the final group preference as P^{G,t*} = (p^{G,t*}_{ij})_{m×m}, where p^{G,t*}_{ij} = Σ_{k=1}^{K} σ^{k,t*} · h^{k,t*}_{ij}. Then, compute the priority vector and obtain the ranking of alternatives [10].
7: End.
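The consensus loop of Algorithm 6 (Steps 2–4) can be sketched as follows. Clustering (Algorithm 2), the Eq. (4.8) weights, and the final selection step are omitted, and all data and parameter values are illustrative assumptions:

```python
import numpy as np

def pdcrm(H, weights, otso, GCL=0.88, TL=0.5, rho=0.5, pi=0.5, max_iter=50):
    """Consensus loop of Algorithm 6 (modifies H in place); clustering,
    weight updates, and the selection step are left out of this sketch."""
    K, m, _ = H.shape
    iu = np.triu_indices(m, k=1)
    for t in range(max_iter):
        P = np.einsum("k,kij->ij", weights, H)        # group preference
        icl = 1 - np.abs(H - P)                       # Level-1 degrees, Eq. (7.1)
        icl_k = icl[:, iu[0], iu[1]].mean(axis=1)     # Level-2 degrees, Eq. (7.2)
        if icl_k.mean() >= GCL:                       # Level-3 check, Eq. (7.3)
            return P, t
        for k in np.nonzero(icl_k < GCL)[0]:          # LH or LL clusters
            pc = np.where(icl[k] < GCL, ((GCL - icl[k]) / GCL) ** rho, 0.0)
            if otso[k] >= TL:                         # S-LH: soften pc, Eq. (7.12)
                pc = pc * (1 - pi * (1 - TL / otso[k]))
            H[k] = pc * P + (1 - pc) * H[k]           # Eqs. (7.13)/(7.16)
        # S-HH / S-HL clusters are left unchanged
    return np.einsum("k,kij->ij", weights, H), max_iter

# Tiny run: K = 2 clusters, m = 3 alternatives, equal weights (illustrative)
H = np.stack([np.full((3, 3), 0.3), np.full((3, 3), 0.8)])
P_final, iters = pdcrm(H, np.array([0.5, 0.5]), otso=np.array([0.6, 0.4]))
print(iters)
```

In this toy run the high-trust cluster (otso = 0.6) is softened by the trust loss and therefore moves less per round than the low-trust cluster, yet the group consensus still crosses the threshold after a few iterations.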

Fig. 7.4 Visualization of the impact of trust loss on consensus cost, trust degree, and consensus degree: (a) consensus cost; (b) trust degree; (c) consensus degree; (d) group consensus cost and group consensus degree

trust loss on consensus cost, trust degree, and consensus degree when the trust loss coefficients π^0(C^3) and π^0(C^4) change from 0 to 1.

First of all, we can conclude that the consensus cost decreases as the trust loss coefficient approaches 1. As shown in Fig. 7.4a, if π^0(C^3) = 0, we have CC^{3,1} = 0.8534, but if π^0(C^3) increases to 1, the consensus cost decreases to 0.3839. The increase of trust loss directly leads to the decrease of the adjustment coefficient, which reduces the adjustment amount of the individual preference and ultimately results in the reduction of consensus cost. It should be emphasized that the reduction of individual consensus cost comes at the cost of reducing the corresponding trust degree. Figure 7.4b shows that as π^0(C^3) and π^0(C^4) approach 1, the trust degrees of C^3 and C^4 decrease significantly.

According to the decision-making logic of PDCRM, trust loss is negatively correlated with consensus degree: the closer the trust loss coefficient is to 1, the more trust is lost to reduce the consensus cost, and thus the less the consensus degree increases. From Fig. 7.4c, we observe that when π^0(C^3) approaches 1, the consensus degree of C^3 significantly decreases, and the same happens to cluster C^4. In particular, when both π^0(C^3) and π^0(C^4) are close to 1, the group consensus declines. Figure 7.4d shows the relationship between group consensus and group consensus cost as the trust loss coefficients vary.

When using trust loss to modulate consensus cost and consensus degree, we hold that the following principles should be adopted. (1) There is an upper bound on trust loss. In other words, a cluster cannot minimize the consensus cost through unlimited trust loss; when the trust value of a cluster falls below the trust threshold, it can no longer reduce the consensus cost by losing its trust. (2) Trust loss in PDCRM is an active adjustment strategy. The DM can independently determine whether or not to lose trust value. If it is deemed necessary, the DM can judge how much trust to lose in terms of consensus level, trust level, and consensus cost according to his/her situation.

7.6 Comparison with Other LSDM Consensus Models

The CRP aims to bring individual preferences closer to the group preference so that the group consensus is sufficiently high. Generally, there are two ways to select preferences for adjustment:

• In each consensus iteration, the cluster that has the largest distance from the group preference is adjusted [11].
• In each consensus iteration, the clusters that contribute less to the consensus are adjusted [12]. That is, more than one cluster opinion may need to be adjusted.

This section compares our model with those proposed by Xu et al. [11] and Rodríguez et al. [12], respectively. To ensure comparability, we assume that the consensus models discussed below all treat the cluster as the basic decision unit. That is, once the cluster structure is determined, it is not changed in the subsequent decision process.


Table 7.1 Decision results using different LSDM consensus models

Xu et al. [11]: initial weights σ^{1,0} = 0.3, σ^{2,0} = 0.2, σ^{3,0} = 0.25, σ^{4,0} = 0.25; initial GCL = 0.8748; NI = 1; NP = 10; adjusted clusters: 1st: C^3; GCC = 0.2744; final GCL = 0.881.

Rodríguez et al. [12]: initial weights σ^{1,0} = 0.2559, σ^{2,0} = 0.2297, σ^{3,0} = 0.257, σ^{4,0} = 0.2574; initial GCL = 0.8746; NI = 2; NP: 1st: 9, 2nd: 5; adjusted clusters: 1st: C^3, C^4; 2nd: C^3; GCC = 0.2405; final GCL = 0.8826.

This chapter: initial weights σ^{1,0} = 0.2786, σ^{2,0} = 0.1469, σ^{3,0} = 0.2895, σ^{4,0} = 0.285; initial GCL = 0.8729; NI = 1; NP = 13; adjusted clusters: 1st: C^2, C^3, C^4; GCC = 0.8409; final GCL = 0.8989.

Note NI: number of consensus iterations; NP: number of positions in the upper triangle of FPRs that should be adjusted

Here is a brief description of the above consensus models. Xu et al. [11] proposed a consensus model for LSDM problems, which focuses on managing minority opinions and non-cooperative behaviors. Its main features are as follows: (i) it uses the number of DMs in a cluster as the standard to compute the weight of the cluster; and (ii) in each consensus iteration, only the cluster with the greatest deviation from the group preference is identified and required to adjust. Rodríguez et al. [12] presented an adaptive consensus model to address LSDM events, in which two thresholds regarding the consensus degree are set. The first is the consensus degree for advice generation, i.e., ϑ = 0.8: if the group consensus degree is lower than 0.8, then all DMs in clusters whose proximity degrees are lower than the average proximity degree are selected. The other parameter is the consensus threshold, i.e., GCL = 0.88. Once the clusters and pairs of alternatives are identified, a suggestion indicating the right direction of the preference changes (increase or decrease) is provided to improve the agreement among DMs.

Table 7.1 shows the decision results using different consensus models when setting GCL = 0.88. First of all, there are significant differences in weight calculation. As it contains the largest number of DMs, cluster C^1 is assigned the maximum weight by Xu et al.'s model [11]. Rodríguez et al. [12] used the size and cohesion to reflect the weights of clusters; in this way, C^4 enjoys the largest weight. Our model adds another parameter, the overall trust degree, to the weight calculation (see Eq. (4.8) with β_1 = β_2 = β_3 = 1). As a consequence, C^3 is given the largest weight. We consider that the size, cohesion, and overall trust of a cluster are three factors closely related to its weight, which quantify the importance of the cluster from different perspectives. Besides, we find that different weight distributions lead to different group preferences.
As a result, the initial group consensus degree varies. More importantly, the adjusted clusters differ significantly. Xu et al.'s model [11] requires only one iteration, and the number of positions to be adjusted is less than that of the other two models, which is better than our model. However, if GCL increases to 0.89, it needs five iterations and a total of 50 positions to be adjusted based on the simulation; in this case, it does not perform as well as our model. The adaptive consensus model provides suggestions on the right direction of preference adjustment, but the amount of adjustment is up to the DMs themselves. In this way, some DMs adhere to the direction rules, but a low adjustment amount may increase the number of consensus iterations. As shown in Table 7.1, using Rodríguez et al.'s model [12], two iterations are needed before the group consensus satisfies the threshold.

In addition to the above comparison, Zhang et al. [13] proposed a consensus framework in which, if a DM exhibits non-cooperative behavior, the other DMs decrease their trust values with respect to this DM. Zhang et al.'s model [13] can be regarded as adopting a passive trust loss to punish the importance of non-cooperative DMs in the group preference. We suggest that active trust losses are superior to passive trust losses in decisions that emphasize the satisfaction and experience of DMs. Being forced to adjust individual preferences always reduces decision satisfaction, which is made worse by a passive trust loss. However, if a DM actively reduces his/her trust to avoid a drastic adjustment of his/her preference, he/she can achieve a balance between trust loss and consensus cost, thus ensuring decision satisfaction. Based on the above analysis, we can summarize the advantages of our proposal:

(1) The weight-determining method combines three indices, namely, size, cohesion, and overall trust degree. Compared with the method based on a single index [11] and the method relying on two indices [12], the proposed method takes the trust measurement into account in the weight calculation, which is more comprehensive and reasonable in SNLSDM problems.
(2) The punishment-driven CRP softens the cost of human supervision and often requires fewer consensus iterations, since it automatically generates adjustment strategies for all the clusters whose consensus degrees are below the consensus threshold. The active trust loss in PDCRM gives DMs the right to balance trust loss and consensus cost, which guarantees decision satisfaction better than a passive trust loss imposed due to non-cooperative behaviors toward the consensus.

7.7 Conclusions

This chapter proposed a punishment-driven consensus-reaching model for SNLSDM events. The model is composed of three stages: the classification of DMs using the TCop-Kmeans clustering algorithm, the implementation of the punishment-driven CRP, and the selection process. Based on different degrees of consensus and trust, four types of consensus scenarios are distinguished, and the corresponding adjustment strategies are generated. A cluster belonging to LH can voluntarily sacrifice some of its trust degree in exchange for a lower consensus cost. For a cluster belonging to LL, the punishment coefficient of each position that needs to be adjusted in the individual preference is calculated, and the preference adjustment is then implemented automatically. The model provides experts with flexible adjustment strategies and explores the relationship between trust loss and consensus cost. Reducing the consensus cost and maintaining the trust degree are contradictory goals, so studying the joint minimization of consensus cost and trust loss is an important issue. In addition, the proposed consensus-reaching model can be extended to decision problems in which the decision information is presented in more complex representations, such as linguistic assessment information [14–16], different types of preference relations [8, 17, 18], and so forth.

References

1. Tang, M., Liao, H., Xu, J., Streimikiene, D., & Zheng, X. (2020). Adaptive consensus reaching process with hybrid strategies for large-scale group decision making. European Journal of Operational Research, 282(3), 957–971.
2. Chao, X., Dong, Y., Kou, G., & Peng, Y. (2022). How to determine the consensus threshold in group decision making: A method based on efficiency benchmark using benefit and cost insight. Annals of Operations Research, 316(1), 143–177.
3. Xu, X. H., Du, Z. J., Chen, X. H., & Cai, C. G. (2019). Confidence consensus-based model for large-scale group decision making: A novel approach to managing non-cooperative behaviors. Information Sciences, 477, 410–427.
4. Du, Z. J., Luo, H. Y., Lin, X. D., & Yu, S. M. (2020). A trust-similarity analysis-based clustering method for large-scale group decision-making under a social network. Information Fusion, 63, 13–29.
5. Wu, J., Chiclana, F., & Herrera-Viedma, E. (2015). Trust based consensus model for social network in an incomplete linguistic information context. Applied Soft Computing, 35, 827–839.
6. Dong, Y., Ding, Z., Martínez, L., & Herrera, F. (2017). Managing consensus based on leadership in opinion dynamics. Information Sciences, 397, 187–205.
7. Liu, X., Xu, Y., Montes, R., & Herrera, F. (2019). Social network group decision making: Managing self-confidence-based consensus model with the dynamic importance degree of experts and trust-based feedback mechanism. Information Sciences, 505, 215–232.
8. Gou, X., Xu, Z., Liao, H., & Herrera, F. (2020). Consensus model handling minority opinions and noncooperative behaviors in large-scale group decision-making under double hierarchy linguistic preference relations. IEEE Transactions on Cybernetics, 51(1), 283–296.
9. Yu, S., Du, Z., & Xu, X. (2021). Hierarchical punishment-driven consensus model for probabilistic linguistic large-group decision making with application to global supplier selection. Group Decision and Negotiation, 30(6), 1343–1372.
10. Xu, Z., & Da, Q. (2005). A least deviation method to obtain a priority vector of a fuzzy preference relation. European Journal of Operational Research, 164(1), 206–216.
11. Xu, X. H., Du, Z. J., & Chen, X. H. (2015). Consensus model for multi-criteria large-group emergency decision making considering non-cooperative behaviors and minority opinions. Decision Support Systems, 79, 150–160.
12. Rodríguez, R. M., Labella, Á., De Tré, G., & Martínez, L. (2018). A large scale consensus reaching process managing group hesitation. Knowledge-Based Systems, 159, 86–97.
13. Zhang, H., Palomares, I., Dong, Y., & Wang, W. (2018). Managing non-cooperative behaviors in consensus-based multiple attribute group decision making: An approach based on social network analysis. Knowledge-Based Systems, 162, 29–45.
14. Ji, P., Zhang, H. Y., & Wang, J. Q. (2018). A projection-based outranking method with multi-hesitant fuzzy linguistic term sets for hotel location selection. Cognitive Computation, 10(5), 737–751.


7 Punishment-Driven Consensus-Reaching Model Considering Trust Loss

15. Wan, S. P., Yan, J., & Dong, J. Y. (2022). Personalized individual semantics based consensus reaching process for large-scale group decision making with probabilistic linguistic preference relations and application to COVID-19 surveillance. Expert Systems with Applications, 191, 116328.
16. Yu, S. M., Wang, J., Wang, J. Q., & Li, L. (2018). A multi-criteria decision-making model for hotel selection with linguistic distribution assessments. Applied Soft Computing, 67, 741–755.
17. Chu, J., Wang, Y., Liu, X., & Liu, Y. (2020). Social network community analysis based large-scale group decision making approach with incomplete fuzzy preference relations. Information Fusion, 60, 98–120.
18. Liang, R. X., Wang, J. Q., & Zhang, H. Y. (2018). A multi-criteria decision-making method based on single-valued trapezoidal neutrosophic preference relations with complete weight information. Neural Computing and Applications, 30(11), 3383–3398.

Chapter 8

Practical Applications

Abstract This chapter applies the proposed methods and models to several real-world decision-making scenarios to demonstrate their utility and feasibility, including coal mine safety assessment, social capital selection, and car-sharing service provider selection.

Keywords Practical application · Information fusion · Clustering analysis · Consensus building

8.1 Application of TBA-Based Information Fusion Method in Coal Mine Safety Assessment

This section presents a case study on coal mine safety assessment to illustrate the feasibility and application of the proposed trust and behavior analysis-based information fusion method.

8.1.1 Case Description

Data from the Coal Management Bureau of Hunan Province, China, show that 10 coal mine accidents occurred in the province from January 2017 to August 2017, resulting in a total of 38 deaths (see Fig. 8.1 and Table 8.1). The Hunan Administration of Work Safety decided to conduct safety evaluations of the 10 coal mines where these accidents occurred. The relevant decision information is presented below.

• To obtain accurate and impartial decision results, a special committee of six experts from different departments (denoted as {e1, ..., e6}) is established: an official in charge of safety supervision in Hunan Province (e1), an expert in mechanical and electrical engineering (e2), an expert in coal mine production safety (e3), an expert in environmental protection monitoring (e4), an expert in psychology and behavior (e5), and an armed police officer involved in the rescue efforts during the above coal mine accidents (e6).
• A given alternative pond, denoted as X = {x1, x2, ..., x10}, includes the 10 coal mines presented in Table 8.1.
• An attribute pond, A = {a1, a2, ..., a8}, is established by accounting for legal provisions (including the Work Safety Law and the Labor Law of the People's Republic of China, among others), consulting experts and scholars, and analyzing the literature on coal mine safety assessment (see Table 8.2).

Fig. 8.1 Distribution map of coal mine accidents in Hunan, China (January 1–August 31, 2017)

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_8

Tables 8.3 and 8.4 present the selection behaviors regarding attributes and alternatives, as well as the incomplete opinion, of expert e1. Tables 8.5 and 8.6 show those of expert e2, Tables 8.7 and 8.8 those of expert e3, Tables 8.9 and 8.10 those of expert e4, and Tables 8.11 and 8.12 those of expert e5.


Table 8.1 Statistics of coal mine accidents in Hunan, China (January 1–August 31, 2017)

Coal mine   Accident location   Occurrence date   Death toll   Accident description
x1          Zhuzhou             Jan. 6            1            Roof accident
x2          Loudi               Feb. 14           10           Gas explosion
x3          Chenzhou            Feb. 27           1            Blasting accident
x4          Loudi               Mar. 5            1            Transportation accident
x5          Chenzhou            Mar. 28           3            Roof accident
x6          Zhuzhou             Apr. 8            1            Roof accident
x7          Zhuzhou             May 7             18           Poisoning accident
x8          Changsha            Jun. 8            1            Roof accident
x9          Zhuzhou             Jun. 8            1            Another type of accident
x10         Chenzhou            Aug. 2            1            Gas explosion

Table 8.2 Predetermined attribute pond

Evaluation attributes                     Detailed sub-attributes
Economy of the coal mine location (a1)    Gross domestic product, per capita local fiscal revenue, general public budget revenue, and fixed asset investment
Technology and equipment (a2)             Equipment configuration status, equipment maintenance, and research and innovation
Safety education (a3)                     Education plans, pre-job safety training, daily safety education, and special types of training
The quality of miners (a4)                Miner structure, technical level, cultural level, and safety awareness
Management level (a5)                     Leaders' safety awareness, safety investment, safety culture, safety warnings, and accident handling
Environmental safety (a6)                 Noise control, lighting, dust control, temperature and humidity, and air quality
Enterprise type (a7)                      Including state-owned, collective, and joint-stock cooperative enterprises
Geological conditions (a8)                Coal seam, hydrogeological conditions, top and bottom seam structure, gas conditions, and degree of mechanization

Sources: Chen et al. [1], Liu et al. [3]

Table 8.3 Selection results of attributes and alternatives for e1

Alternatives:  x1: P, x2: N, x3: P, x4: P, x5: P, x6: P, x7: N, x8: N, x9: P, x10: P
Attributes:    a1: P, a2: P, a3: P, a4: E, a5: E, a6: P, a7: P, a8: P

Note: P – Positive, N – Negative, E – Empty


Table 8.4 The incomplete opinion of e1

      a1     a2     a3     a6     a7    a8
x1    0.4    0.42   0.93   0.71   0.3   0.68
x3    0.27   0.9    0.67   0.03   0.8   0.66
x4    0.2    0.79   0.75   0.27   0.8   0.16
x5    0.16   0.95   0.74   0.05   0.3   0.12
x6    0.4    0.66   0.38   0.1    0.8   0.96
x9    0.13   0      0.65   0.82   0.8   0.34
x10   0.11   0.84   0.17   0.27   0.3   0.58

Table 8.5 Selection results of attributes and alternatives for e2

Alternatives:  x1: P, x2: P, x3: P, x4: P, x5: E, x6: P, x7: P, x8: P, x9: P, x10: P
Attributes:    a1: N, a2: P, a3: P, a4: E, a5: E, a6: E, a7: N, a8: P

Table 8.6 The incomplete opinion of e2

      a2     a3     a8
x1    0.22   0.14   0.4
x2    0.75   0.15   0.3
x3    0.25   0.25   0.7
x4    0.51   0.81   0.1
x6    0.7    0.24   0.54
x7    0.89   0.93   0.75
x8    0.96   0.25   0.38
x9    0.55   0.62   0.77
x10   0.13   0.5    0.93

Table 8.7 Selection results of attributes and alternatives for e3

Alternatives:  x1: P, x2: N, x3: P, x4: P, x5: P, x6: P, x7: N, x8: N, x9: P, x10: P
Attributes:    a1: N, a2: P, a3: P, a4: P, a5: P, a6: P, a7: N, a8: P

Tables 8.13 and 8.14 present the selection behaviors regarding attributes and alternatives, as well as the incomplete opinion of expert .e6 .

Table 8.8 The incomplete opinion of e3

      a2    a3     a4      a5     a6     a8
x1    0.3   0.79   0.96    0.62   0.38   0.29
x3    0.5   0.31   0.005   0.35   0.25   0.74
x4    0.5   0.52   0.77    0.51   0.41   0.19
x5    0.3   0.6    0.81    0.41   0.13   0.14
x6    0.7   0.65   0.39    0.12   0.93   0.78
x9    0.2   0.69   0.25    0.24   0.95   0.62
x10   0.8   0.74   0.8     0.41   0.57   0.77

Table 8.9 Selection results of attributes and alternatives for e4

Alternatives:  x1: P, x2: N, x3: P, x4: P, x5: P, x6: P, x7: N, x8: N, x9: P, x10: P
Attributes:    a1: N, a2: E, a3: E, a4: P, a5: E, a6: P, a7: N, a8: E

Table 8.10 The incomplete opinion of e4

      a4     a6
x1    0.34   1
x3    0.23   0.49
x4    0.47   0.36
x5    1      0.61
x6    0.6    0.44
x9    0.33   0.28
x10   0.26   0.71

Table 8.11 Selection results of attributes and alternatives for e5

Alternatives:  x1: P, x2: N, x3: P, x4: P, x5: P, x6: P, x7: N, x8: N, x9: P, x10: P
Attributes:    a1: N, a2: E, a3: E, a4: P, a5: P, a6: E, a7: N, a8: E

Table 8.12 The incomplete opinion of e5

      a4     a5
x1    0.43   0.47
x3    0.21   0.41
x4    0.65   0.65
x5    0.82   0.22
x6    0.16   0.71
x9    0.31   0.38
x10   0.72   0.42


Table 8.13 Selection results of attributes and alternatives for e6

Alternatives:  x1: P, x2: P, x3: P, x4: P, x5: P, x6: P, x7: P, x8: P, x9: P, x10: P
Attributes:    a1: N, a2: E, a3: P, a4: E, a5: E, a6: P, a7: N, a8: E

Table 8.14 The incomplete opinion of e6

      a3    a6
x1    0.3   0.4
x2    0.5   0.2
x3    0.2   0.5
x4    0.5   0.1
x5    0.1   0.7
x6    0.8   0.4
x7    0.2   0.2
x8    0.1   0.7
x9    0.9   0.1
x10   1     0.8

8.1.2 Decision Process

We use Algorithm 1 to address the SH-MAGDM problem; the following steps are conducted. Input: Comprehensive individual opinions CV^l, l = 1, 2, ..., 6, and trust sociomatrices TSoM_{E→N}, TSoM_{E→P}, TSoM_{N→P}. The original comprehensive individual opinion of e1 is obtained as

0.4 P ⎜ ∅N ⎜ ⎜ 0.27 P ⎜ P ⎜ 0.2 ⎜ ⎜ 0.16 P 1 .C V = ⎜ ⎜ 0.4 P ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ ⎝ 0.13 P 0.11 P

0.42 P ∅N 0.9 P 0.79 P 0.95 P 0.66 P ∅N ∅N 0P 0.84 P

0.93 P ∅N 0.67 P 0.75 P 0.74 P 0.38 P ∅N ∅N 0.65 P 0.17 P

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

0.71 P ∅N 0.03 P 0.27 P 0.05 P 0.1 P ∅N ∅N 0.82 P 0.27 P

0.3 P ∅N 0.8 P 0.8 P 0.3 P 0.8 P ∅N ∅N 0.8 P 0.3 P

⎞ 0.68 P ∅N ⎟ ⎟ 0.66 P ⎟ ⎟ 0.16 P ⎟ ⎟ 0.12 P ⎟ ⎟. 0.96 P ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ ⎟ 0.34 P ⎠ 0.58 P

The original comprehensive individual opinion of .e2 is presented as




∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 2 .C V = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

0.22 P 0.75 P 0.25 P 0.51 P ∅E 0.7 P 0.89 P 0.96 P 0.55 P 0.13 P

0.14 P 0.15 P 0.25 P 0.81 P ∅E 0.24 P 0.93 P 0.25 P 0.62 P 0.5 P

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N


⎞ 0.4 P 0.3 P ⎟ ⎟ 0.7 P ⎟ ⎟ 0.1 P ⎟ ⎟ ∅E ⎟ ⎟. 0.54 P ⎟ ⎟ 0.75 P ⎟ ⎟ 0.38 P ⎟ ⎟ 0.77 P ⎠ 0.93 P

The original comprehensive individual opinion of .e3 is presented as ⎛

∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 3 .C V = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

0.3 P ∅N 0.5 P 0.5 P 0.3 P 0.7 P ∅N ∅N 0.2 P 0.8 P

0.79 P ∅N 0.31 P 0.52 P 0.6 P 0.65 P ∅N ∅N 0.69 P 0.74 P

0.96 P ∅N 0.005 P 0.77 P 0.81 P 0.39 P ∅N ∅N 0.25 P 0.8 P

0.62 P ∅N 0.35 P 0.51 P 0.41 P 0.12 P ∅N ∅N 0.24 P 0.41 P

0.38 P ∅N 0.25 P 0.41 P 0.13 P 0.93 P ∅N ∅N 0.95 P 0.57 P

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ 0.29 P ∅N ⎟ ⎟ 0.74 P ⎟ ⎟ 0.19 P ⎟ ⎟ 0.14 P ⎟ ⎟. 0.78 P ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ ⎟ 0.62 P ⎠ 0.77 P

The original comprehensive individual opinion of .e4 is obtained as ⎛

∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 4 .C V = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

0.34 P ∅N 0.23 P 0.47 P 1P 0.6 P ∅N ∅N 0.33 P 0.26 P

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

1P ∅N 0.49 P 0.36 P 0.61 P 0.44 P ∅N ∅N 0.28 P 0.71 P

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ ∅E ∅N ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟. ∅E ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ ⎟ ∅E ⎠ ∅E

The original comprehensive individual opinion of .e5 is obtained as




∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 5 .C V = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

0.43 P ∅N 0.21 P 0.65 P 0.82 P 0.16 P ∅N ∅N 0.31 P 0.72 P

0.47 P ∅N 0.41 P 0.65 P 0.22 P 0.71 P ∅N ∅N 0.38 P 0.42 P

∅E ∅N ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ ∅E ∅N ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟. ∅E ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ ⎟ ∅E ⎠ ∅E

The original comprehensive individual opinion of .e6 is presented as ⎛

∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 6 .C V = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

0.3 P 0.5 P 0.2 P 0.5 P 0.1 P 0.8 P 0.2 P 0.1 P 0.9 P 1P

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

0.4 P 0.2 P 0.5 P 0.1 P 0.7 P 0.4 P 0.2 P 0.7 P 0.1 P 0.8 P

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ ∅E ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟. ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎠ ∅E

The trust sociomatrices are presented as follows.

.

T SoM E→N

− ⎜ 0.34 ⎜ ⎜ 0.79 =⎜ ⎜ 0.31 ⎜ ⎝ 0.52 0.16 ⎛

.

T SoM E→P

− ⎜ 0.43 ⎜ ⎜ 0.91 =⎜ ⎜ 0.18 ⎜ ⎝ 0.26 0.15

0.25 − 0.6 0.26 0.65 0.68

0.74 0.45 − 0.22 0.91 0.15

0.82 0.53 0.99 − 0.07 0.44

0.1 0.96 0 0.81 − 0.86

⎞ 0.08 0.39 ⎟ ⎟ 0.83 ⎟ ⎟, 0.26 ⎟ ⎟ 0.8 ⎠ −

0.14 − 0.86 0.58 0.55 0.14

0.85 0.62 − 0.35 0.51 0.4

0.07 0.24 0.18 − 0.24 0.42

0.05 0.9 0.94 0.49 − 0.48

⎞ 0.33 0.9 ⎟ ⎟ 0.37 ⎟ ⎟, 0.11 ⎟ ⎟ 0.78 ⎠ −




.

T SoM N →P

− ⎜ 0.23 ⎜ ⎜ 0.35 =⎜ ⎜ 0.82 ⎜ ⎝ 0.02 0.04

0.17 − 0.65 0.45 0.54 0.3

0.74 0.19 − 0.68 0.18 0.36

0.78 0.08 0.92 − 0.77 0.48

0.43 0.44 0.3 0.5 − 0.51


⎞ 0.81 0.79 ⎟ ⎟ 0.13 ⎟ ⎟. 0.24 ⎟ ⎟ 0.6 ⎠ −

Experts select the evaluation systems of coal safety assessment in [1, 3] and add two attributes (i.e., a1 and a8, with raw weights ω1 = 0.1 and ω8 = 0.1). The weight vector of attributes is obtained as ω = (0.0833, 0.1924, 0.0405, 0.0749, 0.0899, 0.1334, 0.3022, 0.0833)^T.

Step 1: The individual opinions CV^l = (v_ij^{l,Case})_{10×8} are normalized into CR^l = (r_ij^{l,Case})_{10×8}.
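One consistent reading of the weight vector above is that the six base weights for a2–a7 (taken from the cited evaluation systems and summing to one) are kept in proportion, the two new attributes a1 and a8 enter with raw weight 0.1 each, and the whole vector is renormalized, so that ω1 = ω8 = 0.1/1.2 ≈ 0.0833. The base values below are back-computed for illustration, not quoted from [1, 3]:

```python
# Back-computed base weights for a2..a7 (assumed to sum to 1 before
# a1 and a8 are added; an illustrative reconstruction).
base = [0.2309, 0.0486, 0.0899, 0.1079, 0.1601, 0.3626]
raw = [0.1] + base + [0.1]            # prepend a1 and append a8 at 0.1 each
total = sum(raw)                      # 1.2
omega = [round(w / total, 4) for w in raw]
# omega ≈ (0.0833, 0.1924, 0.0405, 0.0749, 0.0899, 0.1334, 0.3022, 0.0833)
```

This reproduces the reported vector up to four-decimal rounding.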

0.8529 P ⎜ ∅N ⎜ ⎜ 0.4706 P ⎜ ⎜ 0.2647 P ⎜ ⎜ 0.1471 P 1 .C R = ⎜ P ⎜1 ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ ⎝ 0.0588 P 0P ⎛

∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 2 .C R = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

0.413 P ∅N 0.9348 P 0.8152 P 0.9891 P 0.6739 P ∅N ∅N 0P 0.8696 P

0.9222 P ∅N 0.6333 P 0.7222 P 0.7111 P 0.3111 P ∅N ∅N 0.6111 P 0.0778 P

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

0.701 P ∅N 0P 0.2474 P 0.0206 P 0.0722 P ∅N ∅N 0.8144 P 0.2474 P

0.1957 P 0.7717 P 0.2283 P 0.5109 P ∅E 0.7174 P 0.9239 P 1P 0.5543 P 0.0978 P

0.0444 P 0.0556 P 0.1667 P 0.7889 P ∅E 0.1556 P 0.9222 P 0.1667 P 0.5778 P 0.4444 P

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

0P ∅N 1P 1P 0P 1P ∅N ∅N 1P 1P

⎞ 0.6744 P ⎟ ∅N ⎟ 0.6512 P ⎟ ⎟ 0.0698 P ⎟ ⎟ 0.0233 P ⎟ ⎟, ⎟ 1P ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ 0.2791 P ⎠ 0.5581 P

⎞ 0.3488 P 0.2326 P ⎟ ⎟ 0.6977 P ⎟ ⎟ ⎟ 0P ⎟ E ⎟ ∅ ⎟ P ⎟, 0.5116 ⎟ 0.7558 P ⎟ ⎟ 0.3256 P ⎟ ⎟ 0.7791 P ⎠ 0.9651 P




∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 3 .C R = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

0.2826 P ∅N 0.5 P 0.5 P 0.2826 P 0.7174 P ∅N ∅N 0.1739 P 0.8261 P

0.7667 P ∅N 0.2333 P 0.4667 P 0.5556 P 0.6111 P ∅N ∅N 0.6556 P 0.7111 P



∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 4 .C R = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N ⎛

∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 5 .C R = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N ⎛

∅N ⎜ ∅N ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ 6 .C R = ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎜∅ ⎜ N ⎝∅ ∅N

0.9598 P ∅N 0P 0.7688 P 0.809 P 0.3869 P ∅N ∅N 0.2462 P 0.799 P

0.8475 P ∅N 0.3898 P 0.661 P 0.4915 P 0P ∅N ∅N 0.2034 P 0.4915 P

⎞ 0.2209 P ⎟ ∅N ⎟ P⎟ 0.7442 ⎟ 0.1047 P ⎟ ⎟ 0.0465 P ⎟ ⎟, 0.7907 P ⎟ ⎟ ⎟ ∅N ⎟ N ⎟ ∅ ⎟ P⎠ 0.6047 0.7791 P

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ ∅E ∅N ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟, ∅E ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ ⎟ ∅E ⎠ ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ ∅E ∅N ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟, ∅E ⎟ ⎟ ∅N ⎟ ⎟ ∅N ⎟ ⎟ ∅E ⎠ ∅E

0.3814 P 0.1753 P 0.4845 P 0.0722 P 0.6907 P 0.3814 P 0.1753 P 0.6907 P 0.0722 P 0.7938 P

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

⎞ ∅E ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟. ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎟ ⎟ ∅E ⎠ ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

0.3367 P ∅N 0.2261 P 0.4673 P 1P 0.598 P ∅N ∅N 0.3266 P 0.2563 P

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

0.4271 P ∅N 0.206 P 0.6482 P 0.8191 P 0.1558 P ∅N ∅N 0.3065 P 0.7186 P

0.5932 P ∅N 0.4915 P 0.8983 P 0.1695 P 1P ∅N ∅N 0.4407 P 0.5085 P

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

0.2222 P 0.4444 P 0.1111 P 0.4444 P 0P 0.7778 P 0.1111 P 0P 0.8889 P 1P

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N ∅N

1P ∅N 0.4742 P 0.3402 P 0.5979 P 0.4227 P ∅N ∅N 0.2577 P 0.701 P

∅E ∅N ∅E ∅E ∅E ∅E ∅N ∅N ∅E ∅E

∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E ∅E

0.3608 P ∅N 0.2268 P 0.3918 P 0.1031 P 0.9278 P ∅N ∅N 0.9485 P 0.5567 P


Step 2: Apply Eq. (3.19) to calculate the distances among experts' opinions, and use Eq. (3.25) to compute the weights of DMs, i.e., λ = (0.1916, 0.1391, 0.2391, 0.1557, 0.1159, 0.1186)^T.

Steps 3–4: Use the complementary method presented in Sect. 3.3.4 to obtain the fused individual opinions RC^l = (rc_ij^{l,Case})_{10×8}, l = 1, 2, ..., 6.
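The intuition behind Step 2 can be sketched as follows; this is not Eq. (3.25) itself, but one standard construction in which a DM whose opinion lies closer to everyone else's receives a larger, normalized weight. The distance matrix below is hypothetical:

```python
def dm_weights(distance_matrix):
    """Illustrative weight rule (an assumption, not the book's Eq. (3.25)):
    weight each DM by the inverse of his/her average distance to the
    other DMs, then normalize so the weights sum to 1."""
    m = len(distance_matrix)
    # Average distance from each DM to all the others (diagonal is 0).
    avg = [sum(row) / (m - 1) for row in distance_matrix]
    inv = [1 / a if a > 0 else 0.0 for a in avg]
    s = sum(inv)
    return [v / s for v in inv]

# Hypothetical symmetric opinion distances among three DMs.
D = [[0.0, 0.2, 0.4],
     [0.2, 0.0, 0.3],
     [0.4, 0.3, 0.0]]
w = dm_weights(D)
# DM 2 has the smallest average distance, so it receives the largest weight.
```

The key property, shared with distance-based weighting schemes in general, is that outlying opinions are discounted rather than discarded.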

0.8529 P ⎜ 0N ⎜ ⎜ 0.4706 P ⎜ ⎜ 0.2647 P ⎜ ⎜ 0.1471 P 1 . RC = ⎜ P ⎜1 ⎜ N ⎜0 ⎜ N ⎜0 ⎜ ⎝ 0.0588 P 0P ⎛

0N ⎜ 0N ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 2 . RC = ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎝0 0N ⎛

0N ⎜ 0N ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 3 . RC = ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎝0 0N

0.413 P 0N 0.9348 P 0.8152 P 0.9891 P 0.6739 P 0N 0N 0P 0.8696 P

0.9222 P 0N 0.6333 P 0.7222 P 0.7111 P 0.3111 P 0N 0N 0.6111 P 0.0778 P

0.6506 E 0N 0.1155 E 0.6552 E 0.8636 E 0.3837 E 0N 0N 0.2839 E 0.6303 E

0.7559 E 0N 0.4264 E 0.7465 E 0.3755 E 0.3603 E 0N 0N 0.2889 E 0.4976 E

0.701 P 0N 0P 0.2474 P 0.0206 P 0.0722 P 0N 0N 0.8144 P 0.2474 P

0P 0N 1P 1P 0P 1P 0N 0N 1P 1P

⎞ 0.6744 P ⎟ 0N ⎟ P⎟ 0.6512 ⎟ 0.0698 P ⎟ ⎟ 0.0233 P ⎟ ⎟, ⎟ 1P ⎟ ⎟ 0N ⎟ ⎟ 0N ⎟ 0.2791 P ⎠ 0.5581 P

0.1957 P 0.7717 P 0.2283 P 0.5109 P 0.745 P 0.7174 P 0.9239 P 1P 0.5543 P 0.0978 P

0.0444 P 0.0556 P 0.1667 P 0.7889 P 0.5348 P 0.1556 P 0.9222 P 0.1667 P 0.5778 P 0.4444 P

0.6506 E 0E 0.1155 E 0.6552 E 0.8636 E 0.3837 E 0E 0E 0.2839 E 0.6303 E

0.7559 E 0E 0.4264 E 0.7465 E 0.3755 E 0.3603 E 0E 0E 0.2889 E 0.4976 E

0.6098 E 0.0247 E 0.1992 E 0.2673 E 0.2092 E 0.3777 E 0.0247 E 0.0974 E 0.6494 E 0.4735 E

0N 0N 0N 0N 0N 0N 0N 0N 0N 0N

⎞ 0.3488 P 0.2326 P ⎟ ⎟ 0.6977 P ⎟ ⎟ ⎟ 0P ⎟ P⎟ 0.0313 ⎟ , 0.5116 P ⎟ ⎟ 0.7558 P ⎟ ⎟ 0.3256 P ⎟ ⎟ 0.7791 P ⎠ 0.9651 P

0.2826 P 0N 0.5 P 0.5 P 0.2826 P 0.7174 P 0N 0N 0.1739 P 0.8261 P

0.7667 P 0N 0.2333 P 0.4667 P 0.5556 P 0.6111 P 0N 0N 0.6556 P 0.7111 P

0.9598 P 0N 0P 0.7668 P 0.809 P 0.3869 P 0N 0N 0.2462 P 0.799 P

0.8475 P 0N 0.3898 P 0.661 P 0.4915 P 0P 0N 0N 0.2034 P 0.4915 P

0.3608 P 0N 0.2268 P 0.3918 P 0.1031 P 0.9278 P 0N 0N 0.9485 P 0.5567 P

0N 0N 0N 0N 0N 0N 0N 0N 0N 0N

⎞ 0.2209 P ⎟ 0N ⎟ 0.7442 P ⎟ ⎟ 0.1047 P ⎟ ⎟ 0.0465 P ⎟ ⎟, 0.7907 P ⎟ ⎟ ⎟ 0N ⎟ ⎟ 0N ⎟ 0.6047 P ⎠ 0.7791 P




0N ⎜ 0N ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 4 . RC = ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎝0 0N ⎛

0N ⎜ 0N ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 5 . RC = ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎝0 0N ⎛

0N ⎜ 0N ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 6 . RC = ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎜0 ⎜ N ⎝0 0N

0.329 E 0N 0.6587 E 0.6621 E 0.745 E 0.6954 E 0N 0N 0.1719 P 0.6833 P

0.6127 E 0N 0.3741 E 0.6352 E 0.5348 E 0.4196 E 0N 0N 0.6566 P 0.4309 P

0.3367 P 0N 0.2261 P 0.4673 P 1P 0.598 P 0N 0N 0.3266 P 0.2563 P

0.7559 E 0N 0.4264 E 0.7465 E 0.3755 E 0.3603 E 0N 0N 0.2889 P 0.4976 P

1P 0N 0.4742 P 0.3402 P 0.5979 P 0.4227 P 0N 0N 0.2577 P 0.701 P

0N 0N 0N 0N 0N 0N 0N 0N 0N 0N

⎞ 0.4847 E ⎟ 0N ⎟ E⎟ 0.6847 ⎟ 0.059 E ⎟ ⎟ 0.0313 E ⎟ ⎟, 0.8335 E ⎟ ⎟ ⎟ 0N ⎟ N ⎟ 0 ⎟ P⎠ 0.4793 0.7093 P

0.329 E 0N 0.6587 E 0.6621 E 0.745 E 0.6954 E 0N 0N 0.1719 E 0.6833 E

0.6127 E 0N 0.3741 E 0.6352 E 0.5348 E 0.4196 E 0N 0N 0.6566 E 0.4309 E

0.4271 P 0N 0.206 P 0.6482 P 0.8191 P 0.1558 P 0N 0N 0.3065 P 0.7186 P

0.5932 P 0N 0.4915 P 0.8983 P 0.1695 P 1P 0N 0N 0.4407 P 0.5085 P

0.6098 E 0N 0.1992 E 0.2673 E 0.2292 E 0.3777 E 0E 0E 0.6494 E 0.4735 E

0N 0N 0N 0N 0N 0N 0N 0N 0N 0N

⎞ 0.4847 E ⎟ 0N ⎟ 0.6847 E ⎟ ⎟ 0.059 E ⎟ ⎟ 0.0313 E ⎟ ⎟, 0.8335 E ⎟ ⎟ ⎟ 0N ⎟ N ⎟ 0 ⎟ E⎠ 0.7093 0.4793 E

0.329 E 0.1337 E 0.6587 E 0.6621 E 0.745 E 0.6954 E 0.1601 E 0.1732 E 0.1719 E 0.6883 E

0.2222 P 0.4444 P 0.1111 P 0.4444 P 0P 0.7778 P 0.1111 P 0P 0.8889 P 1P

0.6506 E 0E 0.1155 E 0.6552 E 0.8636 E 0.3837 E 0E 0E 0.2839 E 0.6303 E

0.7559 E 0E 0.4264 E 0.7465 E 0.3755 E 0.3603 E 0E 0E 0.2889 E 0.4976 E

0.3814 P 0.1753 P 0.4845 P 0.0722 P 0.6907 P 0.3814 P 0.1753 P 0.6907 P 0.0722 P 0.7938 P

0N 0N 0N 0N 0N 0N 0N 0N 0N 0N

⎞ 0.4847 E 0.0476 E ⎟ ⎟ 0.6847 E ⎟ ⎟ 0.059 E ⎟ ⎟ 0.0313 E ⎟ ⎟. 0.8335 E ⎟ ⎟ 0.1309 E ⎟ ⎟ 0.0564 E ⎟ ⎟ 0.4793 E ⎠ 0.7093 E

Step 5: Calculate the group opinion as follows:




R^G (rows x1–x10, columns a1–a8):

x1:   0.1634  0.3155  0.5834  0.6408  0.7524  0.6014  0       0.4391
x2:   0       0.1232  0.0605  0       0       0.0242  0       0.038
x3:   0.0902  0.6138  0.33    0.1192  0.4278  0.2443  0.1916  0.6943
x4:   0.0507  0.6316  0.6103  0.652   0.7497  0.2815  0.1916  0.0638
x5:   0.0282  0.6812  0.5101  0.8649  0.3711  0.2713  0       0.0334
x6:   0.1916  0.6996  0.4504  0.3823  0.3739  0.4581  0.1916  0.8104
x7:   0       0.1475  0.1414  0       0       0.0242  0       0.1207
x8:   0       0.1596  0.0232  0       0       0.0955  0       0.052
x9:   0.0113  0.1926  0.6643  0.2851  0.2921  0.623   0.1916  0.5485
x10:  0       0.6717  0.4997  0.6261  0.4978  0.5235  0.1916  0.6968

Step 6: Output the overall evaluation values of the alternatives as follows: E(x1) = 0.3304, E(x2) = 0.0326, E(x3) = 0.3347, E(x4) = 0.3675, E(x5) = 0.2912, E(x6) = 0.4176, E(x7) = 0.0474, E(x8) = 0.0487, E(x9) = 0.2992, E(x10) = 0.427. We can rank the alternatives as x10 ≻ x6 ≻ x4 ≻ x3 ≻ x1 ≻ x9 ≻ x5 ≻ x8 ≻ x7 ≻ x2; that is, x10 is the best alternative. As can be seen from the above case study, the most important feature of the TBA-based information fusion method is that it analyzes the reasons for generating structure-heterogeneous evaluation information from the perspective of selection behavior, using this as a breakthrough in dealing with heterogeneous information.
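Step 6 is a weighted aggregation: each overall value E(x_i) is the inner product of row i of the group opinion R^G with the attribute weight vector ω, with ∅-cells taken as 0. The following sketch, added here for illustration only, reproduces the reported scores and ranking from the data above:

```python
# Attribute weights (Sect. 8.1.2) and the group opinion matrix R^G (Step 5).
omega = [0.0833, 0.1924, 0.0405, 0.0749, 0.0899, 0.1334, 0.3022, 0.0833]
RG = [
    [0.1634, 0.3155, 0.5834, 0.6408, 0.7524, 0.6014, 0.0,    0.4391],  # x1
    [0.0,    0.1232, 0.0605, 0.0,    0.0,    0.0242, 0.0,    0.038 ],  # x2
    [0.0902, 0.6138, 0.33,   0.1192, 0.4278, 0.2443, 0.1916, 0.6943],  # x3
    [0.0507, 0.6316, 0.6103, 0.652,  0.7497, 0.2815, 0.1916, 0.0638],  # x4
    [0.0282, 0.6812, 0.5101, 0.8649, 0.3711, 0.2713, 0.0,    0.0334],  # x5
    [0.1916, 0.6996, 0.4504, 0.3823, 0.3739, 0.4581, 0.1916, 0.8104],  # x6
    [0.0,    0.1475, 0.1414, 0.0,    0.0,    0.0242, 0.0,    0.1207],  # x7
    [0.0,    0.1596, 0.0232, 0.0,    0.0,    0.0955, 0.0,    0.052 ],  # x8
    [0.0113, 0.1926, 0.6643, 0.2851, 0.2921, 0.623,  0.1916, 0.5485],  # x9
    [0.0,    0.6717, 0.4997, 0.6261, 0.4978, 0.5235, 0.1916, 0.6968],  # x10
]
# E(x_i) = sum_j omega_j * r^G_ij
E = [sum(w * r for w, r in zip(omega, row)) for row in RG]
# Sort alternative indices by score, best first.
ranking = sorted(range(len(E)), key=lambda i: E[i], reverse=True)
# ranking == [9, 5, 3, 2, 0, 8, 4, 7, 6, 1], i.e.
# x10 ≻ x6 ≻ x4 ≻ x3 ≻ x1 ≻ x9 ≻ x5 ≻ x8 ≻ x7 ≻ x2
```

The computed E values agree with the reported ones up to four-decimal rounding, confirming that Step 6 is exactly this weighted sum.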

8.2 Application of PDCRM in Social Capital Selection

This section presents an illustrative example to verify the feasibility of PDCRM for SNLSDM problems with FPRs.

8.2.1 Case Description

The Chinese economy has entered a new normal of "valuing quality and efficiency". The concept of the smart city has been proposed as a new strategy for urban development; it refers to the integration of city systems and services through information and communication technologies. The traditional mode in which governments establish and manage everything themselves finds it increasingly difficult to meet the needs of smart cities. The public-private partnership (PPP) is considered a better choice as an innovative mode, through which the government works with social capital to participate in the construction of infrastructure and public services. Therefore, social capital selection is one of the most important issues.

A central Chinese city planned to use the PPP mode to upgrade its wireless government network by bringing in social capitals. After the government issued the bidding announcement, five social capitals actively responded to the bidding documents (labeled as {x1, x2, x3, x4, x5}). Social capital x1 has rich technical reserves and construction experience, but less experience in implementing the PPP mode. x2 is a traditional construction enterprise with abundant capital. x3 has experience running PPP projects but has not performed well in terms of capital. x4 and x5 have good capital reserves and PPP operation experience, but lack experience in wireless network construction. No social capital has a complete advantage.

To conduct a comprehensive assessment, the government invites 20 experts from government agencies, enterprises, and universities to provide their preferences over the social capitals. The experts then provide trust statements about each other based on reputation, knowledge, and personal preferences. Figure 8.2 shows the undirected social network obtained through the UCINET software, in which cannot-link edges are marked in red (including (e2, e8), (e3, e5), (e3, e7), (e3, e9), (e3, e13), (e7, e13), (e9, e12), (e9, e13), (e13, e16), (e16, e19)). The experts involved in cannot-link edges are thus e2, e3, e5, e7, e8, e9, e12, e13, e16, e19.

Fig. 8.2 Visualization of an undirected social network

8.2.2 Using PDCRM to Solve the Problem

Algorithms 2 and 6 are applied to manage the decision-making process. Input: Initial FPRs P^{l,0}, trust sociomatrix TSoM, designated number of clusters K = 4, constraint matrix Con≠, consensus threshold GCL = 0.88, trust threshold TL = 0.5, and parameter ρ = 0.5. The initial individual FPRs are presented as follows.


0.5 ⎜ 0.15 ⎜ 1,0 .P = ⎜ 0.4665 ⎝ 0.06 0.6547 ⎛

0.5 ⎜ 0.2129 ⎜ 3,0 .P = ⎜ 0.6083 ⎝ 0.7249 0.3675 ⎛

0.5 ⎜ 0.6433 ⎜ 5,0 .P = ⎜ 0.7109 ⎝ 0.2014 0.947 ⎛

0.5 ⎜ 0.2128 ⎜ . P 7,0 = ⎜ 0.2961 ⎝ 0.5971 0.1345 ⎛

0.5 ⎜ 0.7321 ⎜ 9,0 .P = ⎜ 0.5994 ⎝ 0.8514 0.8987 ⎛

0.5 ⎜ 0.4446 ⎜ . P 11,0 = ⎜ 0.0958 ⎝ 0.9717 0.193 ⎛

0.5 ⎜ 0.6944 ⎜ 13,0 .P = ⎜ 0.6312 ⎝ 0.9798 0.8933 ⎛

0.5 ⎜ 0.1522 ⎜ . P 15,0 = ⎜ 0.5156 ⎝ 0.0317 0.7921 ⎛

0.5 ⎜ 0.6087 ⎜ 17,0 .P = ⎜ 0.4406 ⎝ 0.7197 0.1934 ⎛

0.5 ⎜ 0.5583 ⎜ 19,0 .P = ⎜ 0.9911 ⎝ 0.645 0.1113


0.15 0.5 0.6859 0.6569 0.083

0.4665 0.6849 0.5 0.8916 0.2292

0.06 0.6569 0.8916 0.5 0.201

⎞ ⎛ 0.6547 0.5 0.4216 0.083 ⎟ ⎜ 0.4216 0.5 ⎟ 2,0 ⎜ 0.2292 ⎟ , P = ⎜ 0.6584 0.157 ⎝ 0.7667 0.9473 0.201 ⎠ 0.5 0.2457 0.8391

0.6584 0.157 0.5 0.829 0.0564

0.7667 0.9473 0.829 0.5 0.4232

⎞ 0.2457 0.8391 ⎟ ⎟ 0.0564 ⎟ , 0.4232 ⎠ 0.5

0.2129 0.5 0.3734 0.8412 0.3506

0.6083 0.3734 0.5 0.3729 0.8278

0.7249 0.8412 0.3729 0.5 0.809

⎞ ⎛ 0.3675 0.5 0.7601 0.3506 ⎟ ⎜ 0.7601 0.5 ⎟ 4,0 ⎜ 0.8278 ⎟ , P = ⎜ 0.8564 0.8084 ⎝ 0.7547 0.1464 0.809 ⎠ 0.5 0.103 0.9056

0.8564 0.8084 0.5 0.7655 0.158

0.7547 0.1464 0.7655 0.5 0.3289

⎞ 0.103 0.9056 ⎟ ⎟ 0.158 ⎟ , 0.3289 ⎠ 0.5

0.6433 0.5 0.9093 0.6875 0.5779

0.7109 0.9093 0.5 0.4374 0.5419

0.2014 0.6875 0.4374 0.5 0.2733

(the last column of .P^{5,0}, continued from the previous page, is (0.947, 0.5779, 0.5419, 0.2733, 0.5)^T)

.P^{6,0} =
⎛ 0.5    0.8396 0.2633 0.3238 0.3795 ⎞
⎜ 0.8396 0.5    0.1607 0.6642 0.0511 ⎟
⎜ 0.2633 0.1607 0.5    0.5183 0.6247 ⎟
⎜ 0.3238 0.6642 0.5183 0.5    0.8188 ⎟
⎝ 0.3795 0.0511 0.6247 0.8188 0.5    ⎠ ,

.P^{7,0} =
⎛ 0.5    0.2128 0.2961 0.5971 0.1345 ⎞
⎜ 0.2128 0.5    0.4379 0.8105 0.3157 ⎟
⎜ 0.2961 0.4379 0.5    0.3602 0.8024 ⎟
⎜ 0.5971 0.8105 0.3602 0.5    0.8065 ⎟
⎝ 0.1345 0.3157 0.8024 0.8065 0.5    ⎠ ,

.P^{8,0} =
⎛ 0.5    0.4383 0.6558 0.6771 0.3129 ⎞
⎜ 0.4383 0.5    0.1557 0.9347 0.8467 ⎟
⎜ 0.6558 0.1557 0.5    0.8455 0.0636 ⎟
⎜ 0.6771 0.9347 0.8455 0.5    0.4546 ⎟
⎝ 0.3129 0.8467 0.0636 0.4546 0.5    ⎠ ,

.P^{9,0} =
⎛ 0.5    0.7321 0.5994 0.8514 0.8987 ⎞
⎜ 0.7321 0.5    0.1879 0.3836 0.4163 ⎟
⎜ 0.5994 0.1879 0.5    0.1283 0.0095 ⎟
⎜ 0.8514 0.3836 0.1283 0.5    0.576  ⎟
⎝ 0.8987 0.4163 0.0095 0.576  0.5    ⎠ ,

.P^{10,0} =
⎛ 0.5    0.1048 0.7783 0.4963 0.7808 ⎞
⎜ 0.1048 0.5    0.69   0.8423 0.8776 ⎟
⎜ 0.7783 0.69   0.5    0.6105 0.753  ⎟
⎜ 0.4963 0.8423 0.6105 0.5    0.5637 ⎟
⎝ 0.7808 0.8776 0.753  0.5637 0.5    ⎠ ,

.P^{11,0} =
⎛ 0.5    0.4446 0.0958 0.9717 0.193  ⎞
⎜ 0.4446 0.5    0.2998 0.2508 0.093  ⎟
⎜ 0.0958 0.2998 0.5    0.6136 0.7037 ⎟
⎜ 0.9717 0.2508 0.6136 0.5    0.3147 ⎟
⎝ 0.193  0.093  0.7037 0.3147 0.5    ⎠ ,

.P^{12,0} =
⎛ 0.5    0.4906 0.41   0.763  0.1739 ⎞
⎜ 0.4906 0.5    0.6323 0.2915 0.9298 ⎟
⎜ 0.41   0.6323 0.5    0.2349 0.3563 ⎟
⎜ 0.763  0.2915 0.2349 0.5    0.1457 ⎟
⎝ 0.1739 0.9298 0.3563 0.1457 0.5    ⎠ ,

.P^{13,0} =
⎛ 0.5    0.6944 0.6312 0.9798 0.8933 ⎞
⎜ 0.6944 0.5    0.2358 0.561  0.9358 ⎟
⎜ 0.6312 0.2358 0.5    0.8576 0.6704 ⎟
⎜ 0.9798 0.561  0.8576 0.5    0.1302 ⎟
⎝ 0.8933 0.9358 0.6704 0.1302 0.5    ⎠ ,

.P^{14,0} =
⎛ 0.5    0.9308 0.7653 0.2028 0.7708 ⎞
⎜ 0.9308 0.5    0.4893 0.6267 0.2654 ⎟
⎜ 0.7653 0.4893 0.5    0.1153 0.091  ⎟
⎜ 0.2028 0.6267 0.1153 0.5    0.8427 ⎟
⎝ 0.7708 0.2654 0.091  0.8427 0.5    ⎠ ,

.P^{15,0} =
⎛ 0.5    0.1522 0.5156 0.0317 0.7921 ⎞
⎜ 0.1522 0.5    0.7583 0.4497 0.0357 ⎟
⎜ 0.5156 0.7583 0.5    0.8825 0.8227 ⎟
⎜ 0.0317 0.4497 0.8825 0.5    0.4746 ⎟
⎝ 0.7921 0.0357 0.8227 0.4746 0.5    ⎠ ,

.P^{16,0} =
⎛ 0.5    0.6527 0.8468 0.3581 0.2136 ⎞
⎜ 0.6527 0.5    0.1202 0.1644 0.0665 ⎟
⎜ 0.8468 0.1202 0.5    0.3889 0.4232 ⎟
⎜ 0.3581 0.1644 0.3889 0.5    0.2317 ⎟
⎝ 0.2136 0.0665 0.4232 0.2317 0.5    ⎠ ,

.P^{17,0} =
⎛ 0.5    0.6087 0.4406 0.7197 0.1934 ⎞
⎜ 0.6087 0.5    0.2478 0.954  0.5844 ⎟
⎜ 0.4406 0.2478 0.5    0.2998 0.1016 ⎟
⎜ 0.7197 0.954  0.2998 0.5    0.781  ⎟
⎝ 0.1934 0.5844 0.1016 0.781  0.5    ⎠ ,

.P^{18,0} =
⎛ 0.5    0.2429 0.7813 0.7773 0.6467 ⎞
⎜ 0.2429 0.5    0.6532 0.8165 0.7267 ⎟
⎜ 0.7813 0.6532 0.5    0.7529 0.0976 ⎟
⎜ 0.7773 0.8165 0.7529 0.5    0.6824 ⎟
⎝ 0.6467 0.7267 0.0976 0.6824 0.5    ⎠ ,

.P^{19,0} =
⎛ 0.5    0.5583 0.9911 0.645  0.1113 ⎞
⎜ 0.5583 0.5    0.3308 0.7201 0.3335 ⎟
⎜ 0.9911 0.3308 0.5    0.2652 0.1064 ⎟
⎜ 0.645  0.7201 0.2652 0.5    0.1612 ⎟
⎝ 0.1113 0.3335 0.1064 0.1612 0.5    ⎠ ,

.P^{20,0} =
⎛ 0.5    0.6785 0.8493 0.3381 0.5059 ⎞
⎜ 0.6785 0.5    0.1192 0.122  0.0637 ⎟
⎜ 0.8493 0.1192 0.5    0.5255 0.6187 ⎟
⎜ 0.3381 0.122  0.5255 0.5    0.1412 ⎟
⎝ 0.5059 0.0637 0.6187 0.1412 0.5    ⎠ .

8 Practical Applications

The trust sociomatrix is presented below.

.T_{SoM} =
⎛ −    0.55 0.83 0.38 0.77 0.60 0.72 0.50 0.62 0.40 0.27 0.8  0.27 0.61 0.96 0.40 0.92 0.97 0.88 0.47 ⎞
⎜ 0.32 −    0.92 0.63 0.28 0.30 0.33 0.66 0.62 0.26 0.65 0.31 0.61 0.71 0.75 0.30 0.60 0.62 0.64 0.36 ⎟
⎜ 0.84 0.31 −    0.36 0.52 0.03 0.15 0.65 0.12 0.82 0.38 0.36 0.12 0.40 0.68 0.85 0.70 0.27 0.30 0.32 ⎟
⎜ 0.64 0.67 0.22 −    0.04 0.53 0.28 0.70 0.15 0.58 0.85 0.48 0.27 0.38 0.26 0.15 0.30 0.26 0.72 0.88 ⎟
⎜ 0.82 0.61 0.54 0.27 −    0.82 0.53 0.56 0.38 0.40 0.38 0.85 0.80 0.92 0.30 0.29 0.39 0.66 0.34 0.51 ⎟
⎜ 0.79 0.78 0.26 0.47 0.66 −    0.86 0.39 0.74 0.90 0.26 0.58 0.71 0.75 0.38 0.28 0.25 0.55 0.34 0.83 ⎟
⎜ 0.47 0.57 0.50 0.50 0.35 0.86 −    0.45 0.41 0.68 0.26 0.59 0.21 0.29 0.50 0.58 0.44 0.25 0.29 0.90 ⎟
⎜ 0.31 0.61 0.95 0.91 0.35 0.38 0.81 −    0.83 0.66 0.62 0.93 0.78 0.80 0.57 0.25 0.47 0.24 0.69 0.72 ⎟
⎜ 0.69 0.58 0.08 0.51 0.25 0.34 0.53 0.83 −    0.99 0.52 0.18 0.20 0.64 0.98 0.37 0.65 0.26 0.38 0.38 ⎟
⎜ 0.99 0.54 0.71 0.34 0.95 0.24 0.96 0.79 0.4  −    0.41 0.65 0.69 0.50 0.49 0.58 0.28 0.36 0.50 0.30 ⎟
⎜ 0.77 0.87 0.29 0.57 0.30 0.32 0.37 0.71 0.35 0.42 −    0.81 0.39 0.61 0.40 0.29 0.52 0.63 0.76 0.69 ⎟
⎜ 0.83 0.51 0.40 0.49 0.46 0.98 0.54 0.47 0.37 0.15 0.86 −    0.65 0.70 0.69 0.36 0.25 0.99 0.35 0.88 ⎟
⎜ 0.71 0.79 0.17 0.36 0.36 0.55 0.48 0.71 0.25 0.58 0.86 0.48 −    0.38 0.26 0.12 0.30 0.27 0.73 0.92 ⎟
⎜ 0.60 0.47 0.83 0.58 0.74 0.75 0.48 0.96 0.32 0.38 0.28 0.27 0.26 −    0.07 0.86 0.65 0.76 0.70 0.38 ⎟
⎜ 0.75 0.83 0.67 0.88 0.71 0.84 0.68 0.51 0.49 0.66 0.62 0.26 0.34 0.29 −    0.35 0.89 0.89 0.46 0.48 ⎟
⎜ 0.50 0.32 0.65 0.46 0.70 0.37 0.31 0.41 0.26 0.75 0.78 0.48 0.40 0.46 0.67 −    0.86 0.47 0.18 0.33 ⎟
⎜ 0.37 0.98 0.70 0.44 0.71 0.90 0.61 0.79 0.94 0.56 0.56 0.37 0.51 0.34 0.30 0.85 −    0.96 0.34 0.35 ⎟
⎜ 0.27 0.28 0.93 0.38 0.37 0.61 0.33 0.35 0.48 0.71 0.92 0.45 0.45 0.86 0.53 0.26 0.90 −    0.27 0.88 ⎟
⎜ 0.67 0.27 0.69 0.56 0.90 0.75 0.88 0.53 0.54 0.49 0.38 0.37 0.39 0.42 0.27 0.37 0.89 0.48 −    0.29 ⎟
⎝ 0.30 0.75 0.57 0.54 0.32 0.73 0.33 0.46 0.27 0.62 0.46 0.26 0.58 0.41 0.88 0.62 0.26 0.32 0.92 −    ⎠ .
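The cannot-link relation used in the clustering stage below depends only on pairwise average trust degrees, which can be computed directly from .T_{SoM}. A minimal Python sketch (the six directed trust values are those reported for .e3, .e9, and .e13; the threshold 0.25 follows the case study; the function name `cannot_links` is ours):

```python
# Sketch: derive cannot-link pairs from average trust degrees.
# The directed trust values below are the e3/e9/e13 entries of T_SoM;
# the average of both directions is used, so the result does not depend
# on whether rows or columns of the sociomatrix denote the trustor.

t = {(3, 9): 0.12, (9, 3): 0.08,
     (3, 13): 0.12, (13, 3): 0.17,
     (9, 13): 0.20, (13, 9): 0.25}

def cannot_links(trust, pairs, tc):
    """Return the pairs whose average trust degree is below tc."""
    return {(l, h) for (l, h) in pairs
            if (trust[(l, h)] + trust[(h, l)]) / 2 < tc}

print(sorted(cannot_links(t, [(3, 9), (3, 13), (9, 13)], 0.25)))
# [(3, 9), (3, 13), (9, 13)]
```

All three pairs fall below the threshold, which is why .e3, .e9, and .e13 form the mutually cannot-linked subgraph shown in Fig. 8.3.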

Stage 1: Clustering process. Step 1: Apply Algorithm 2 to classify the 20 experts into 4 clusters. The trust constraint stipulates that a pair of experts with a low average trust degree cannot be assigned to the same subgroup. Let the trust constraint threshold be denoted as .T C, with .0 ≤ T C ≤ 1. If the average trust degree of a pair of experts is less than .T C, i.e., .atso_{lh} < T C, then the pair does not satisfy the trust constraint; in other words, .e_l and .e_h are cannot-linked. Let .Con_{≠} = (Con_{≠,lh})_{m×m} be a constraint matrix collecting the set of CL constraints, such that

.Con_{≠,lh} = 1 if .atso_{lh} < T C, and .Con_{≠,lh} = 0 otherwise.

Since .Con_{≠} is a symmetric matrix, we only consider the trust constraints in the upper triangle region. To visualize the trust constraints among experts, the constraint matrix can be transformed into an undirected graph .G(E, V), where .E = {e_1, e_2, . . . , e_m} is the set of nodes, and .V = {(e_1, e_2), (e_1, e_3), . . . , (e_{m−1}, e_m)} is the set of undirected edges corresponding to trust constraints; .(e_l, e_h) represents an undirected link between .e_l and .e_h, with .l, h = 1, 2, . . . , 20. Let .ξ be the number of entries with .Con_{≠,lh} = 1, satisfying .0 ≤ ξ ≤ m(m − 1)/2. DMs .e_3, .e_9, .e_13 and the edges among them form an undirected subgraph .G′(E′, V′) (see Fig. 8.3), where .E′ = {e_3, e_9, e_13}, .V′ = {(e_3, e_9), (e_3, e_13), (e_9, e_13)} and .q′ = 3. Since .ξ′ = q′(q′ − 1)/2 = 3 for these DMs, the FPRs of the three experts are first set as the initial cluster centers. Then, as .K > q′, we use the max–min method [2] to determine the FPR of expert .e_1 as the fourth initial cluster center. Therefore, the initial preferences of the clusters are obtained as follows.

.H^{1,0} =
⎛ 0.5    0.5171 0.5092 0.677  0.2596 ⎞
⎜ 0.5171 0.5    0.2614 0.7275 0.3766 ⎟
⎜ 0.5092 0.2614 0.5    0.4859 0.4046 ⎟
⎜ 0.677  0.7275 0.4859 0.5    0.5566 ⎟
⎝ 0.2596 0.3766 0.4046 0.5566 0.5    ⎠ ,

.H^{2,0} =
⎛ 0.5    0.5985 0.5652 0.3876 0.5973 ⎞
⎜ 0.5985 0.5    0.3542 0.4992 0.303  ⎟
⎜ 0.5652 0.3542 0.5    0.2895 0.2856 ⎟
⎜ 0.3876 0.4992 0.2895 0.5    0.4479 ⎟
⎝ 0.5973 0.303  0.2856 0.4479 0.5    ⎠ ,

.H^{3,0} =
⎛ 0.5    0.2522 0.5535 0.2773 0.6618 ⎞
⎜ 0.2522 0.5    0.7561 0.6294 0.358  ⎟
⎜ 0.5535 0.7561 0.5    0.6764 0.4694 ⎟
⎜ 0.2773 0.6294 0.6764 0.5    0.3438 ⎟
⎝ 0.6618 0.358  0.4694 0.3438 0.5    ⎠ ,

.H^{4,0} =
⎛ 0.5    0.5219 0.6675 0.8083 0.4125 ⎞
⎜ 0.5219 0.5    0.4973 0.5525 0.8674 ⎟
⎜ 0.6675 0.4973 0.5    0.688  0.2593 ⎟
⎜ 0.8083 0.5525 0.688  0.5    0.3421 ⎟
⎝ 0.4125 0.8674 0.2593 0.3421 0.5    ⎠ ,

.P^{G,0} =
⎛ 0.5    0.4537 0.5753 0.5562 0.4692 ⎞
⎜ 0.4537 0.5    0.4855 0.6157 0.5002 ⎟
⎜ 0.5753 0.4855 0.5    0.5698 0.3645 ⎟
⎜ 0.5562 0.6157 0.5698 0.5    0.4179 ⎟
⎝ 0.4692 0.5002 0.3645 0.4179 0.5    ⎠ .

8.2 Application of PDCRM in Social Capital Selection

Fig. 8.3 Visualization of cannot-links with the condition that .T C = 0.25 (an undirected graph on experts .e1–.e20 whose edges connect cannot-linked pairs, including the mutually linked .e3, .e9, .e13)

Table 8.15 Clustering results

Cluster | Structure                       | Size | Cohesion degree | Overall trust degree | Weight
.C^1    | .e3, .e6, .e8, .e11, .e17, .e19 | 6    | 0.653           | 0.5611               | 0.2786
.C^2    | .e9, .e14, .e16, .e20           | 4    | 0.5861          | 0.4943               | 0.1469
.C^3    | .e1, .e5, .e7, .e10, .e15       | 5    | 0.6593          | 0.6928               | 0.2895
.C^4    | .e2, .e4, .e12, .e13, .e18      | 5    | 0.6609          | 0.6804               | 0.285

Step 2: Use Eq. (4.9) to obtain the weights of the clusters. Table 8.15 shows the clustering results, including the structure, size, cohesion degree, overall trust degree, and weight of each cluster.
Stage 2: Consensus-reaching process.
First consensus iteration
Step 3: Use Eq. (7.3) to obtain the group consensus degree .GCL^0 = 0.8729. As .GCL^0 < GCL, the feedback mechanism is implemented.
Step 4: The following consensus scenarios are identified:
• .ICL(H^{1,0}) = 0.8818 > GCL, .otso^0(C^1) = 0.5611 > TL, then .C^1 ∈ HH,
• .ICL(H^{2,0}) = 0.8714 < GCL, .otso^0(C^2) = 0.4943 < TL, then .C^2 ∈ LL,
• .ICL(H^{3,0}) = 0.8593 < GCL, .otso^0(C^3) = 0.6928 > TL, then .C^3 ∈ LH,
• .ICL(H^{4,0}) = 0.879 < GCL, .otso^0(C^4) = 0.6804 > TL, then .C^4 ∈ LH.
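The weights in Table 8.15 are consistent with normalizing the product size × cohesion degree × overall trust degree over the four clusters. This is our assumption about the form of Eq. (4.9), not its statement in Chap. 4, but it reproduces the tabulated weights up to rounding:

```python
# Hypothesized reading of Eq. (4.9):
# weight_k proportional to size_k * cohesion_k * overall_trust_k.
def cluster_weights(size, cohesion, trust):
    raw = [s * c * t for s, c, t in zip(size, cohesion, trust)]
    total = sum(raw)
    return [r / total for r in raw]

w = cluster_weights([6, 4, 5, 5],
                    [0.653, 0.5861, 0.6593, 0.6609],
                    [0.5611, 0.4943, 0.6928, 0.6804])
print([round(x, 4) for x in w])  # close to Table 8.15: 0.2786, 0.1469, 0.2895, 0.285
```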


The adjustment strategies are generated to guide experts to modify their preferences. As .C^1 ∈ HH, there is no need to modify its preference.
(1) Because .C^2 ∈ LL, the strategy S-LL is implemented as follows:
(a) According to Eq. (7.15), the positions that should be adjusted are .POS^{2,0,LL} = {(1, 2), (1, 4), (1, 5), (2, 3), (2, 5), (3, 4)}.
(b) The punishment coefficients are computed by Eq. (7.10) as follows: .pc^{2,0}_{12} = 0.1678, .pc^{2,0}_{14} = 0.235, .pc^{2,0}_{15} = 0.0956, .pc^{2,0}_{23} = 0.1133, .pc^{2,0}_{25} = 0.2963, .pc^{2,0}_{34} = 0.4268. Apply Eq. (7.16) to adjust the identified positions and obtain the adjusted preference, denoted as .H^{2,1} = (h^{2,1}_{ij})_{5×5}.
(2) Since .C^3, C^4 ∈ LH, the following strategy is implemented:
(a) The identified positions are .POS^{3,0,LH} = {(1, 2), (1, 4), (1, 5), (2, 3), (2, 5)} and .POS^{4,0,LH} = {(1, 4), (2, 5)}, based on Eq. (7.9).
(b) Calculate the punishment coefficients: .pc^{3,0}_{12} = 0.3044, .pc^{3,0}_{14} = 0.4249, .pc^{3,0}_{15} = 0.2872, .pc^{3,0}_{23} = 0.4137, .pc^{3,0}_{25} = 0.1591, .pc^{4,0}_{14} = 0.3874, .pc^{4,0}_{25} = 0.53.
(c) Clusters .C^3 and .C^4 provide trust loss coefficients .π^0(C^3) = 0.5 and .π^0(C^4) = 0.5. According to Eq. (7.12), the updated punishment coefficients are obtained as follows: .apc^{3,0}_{12} = 0.262, .apc^{3,0}_{14} = 0.3658, .apc^{3,0}_{15} = 0.2473, .apc^{3,0}_{23} = 0.3561, .apc^{3,0}_{25} = 0.1369, .apc^{4,0}_{14} = 0.3361, .apc^{4,0}_{25} = 0.4597. Then, use Eq. (7.13) to adjust the above positions and obtain the preferences .H^{3,1} = (h^{3,1}_{lh})_{5×5} and .H^{4,1} = (h^{4,1}_{lh})_{5×5}.

.H^{1,1} =
⎛ 0.5    0.5171 0.5092 0.677  0.2596 ⎞
⎜ 0.5171 0.5    0.2614 0.7275 0.3766 ⎟
⎜ 0.5092 0.2614 0.5    0.4859 0.4046 ⎟
⎜ 0.677  0.7275 0.4859 0.5    0.5566 ⎟
⎝ 0.2596 0.3766 0.4046 0.5566 0.5    ⎠ ,

.H^{2,1} =
⎛ 0.5    0.5742 0.5652 0.4272 0.585  ⎞
⎜ 0.5742 0.5    0.369  0.4992 0.3614 ⎟
⎜ 0.5652 0.369  0.5    0.4091 0.2856 ⎟
⎜ 0.4272 0.4992 0.4901 0.5    0.4479 ⎟
⎝ 0.585  0.3614 0.2856 0.4479 0.5    ⎠ ,

.H^{3,1} =
⎛ 0.5    0.3136 0.5535 0.3958 0.6065 ⎞
⎜ 0.3136 0.5    0.6441 0.6294 0.3806 ⎟
⎜ 0.5535 0.6441 0.5    0.6764 0.4694 ⎟
⎜ 0.3958 0.6294 0.6764 0.5    0.3438 ⎟
⎝ 0.6065 0.3806 0.4694 0.3438 0.5    ⎠ ,

.H^{4,1} =
⎛ 0.5    0.5219 0.6675 0.7106 0.4125 ⎞
⎜ 0.5219 0.5    0.4973 0.5525 0.6728 ⎟
⎜ 0.6675 0.4973 0.5    0.688  0.2593 ⎟
⎜ 0.7106 0.5525 0.688  0.5    0.3421 ⎟
⎝ 0.4125 0.6728 0.2593 0.3421 0.5    ⎠ .
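The adjusted entries of .H^{2,1} can be traced numerically. Taking the adjustment of Eq. (7.16) to be a convex combination of the current entry and the corresponding entry of .P^{G,0}, with the punishment coefficient as the mixing weight (this is our reading of the rule, which reproduces the reported values to four decimals):

```python
# Assumed adjustment rule: h' = (1 - pc) * h + pc * p_group,
# with h from H^{2,0}, p_group from P^{G,0}, and pc from Eq. (7.10).
def adjust(h, p_group, pc):
    return round((1 - pc) * h + pc * p_group, 4)

print(adjust(0.5985, 0.4537, 0.1678))  # position (1, 2): 0.5742
print(adjust(0.3876, 0.5562, 0.2350))  # position (1, 4): 0.4272
print(adjust(0.2895, 0.5698, 0.4268))  # position (3, 4): 0.4091
```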

Second consensus iteration
The weights of the clusters are updated as .σ^1 = (0.3022, 0.1593, 0.2704, 0.2682)^T. Use Eq. (7.3) to calculate the group consensus degree .GCL^1 = 0.8968. As .GCL^1 > GCL, the group reaches an acceptable consensus after one consensus iteration.
Step 5: Output the final number of iterations .t^∗ = 1. The adjusted preferences presented above are regarded as the final clusters' preferences.
Stage 3: Selection process.
Step 6: The final group preference is obtained by aggregating the clusters' preferences as follows:

.P^{G,1} =
⎛ 0.5    0.4702 0.5725 0.5692 0.4483 ⎞
⎜ 0.5298 0.5    0.4495 0.6177 0.4607 ⎟
⎜ 0.4275 0.5505 0.5    0.5793 0.3642 ⎟
⎜ 0.4308 0.3823 0.4207 0.5    0.4242 ⎟
⎝ 0.5517 0.5393 0.6358 0.5758 0.5    ⎠ .

By using the normalizing rank aggregation method [5], we can obtain the final ranking of the five social capitals as .x5 ≻ x1 ≻ x2 ≻ x3 ≻ x4 . Therefore, .x5 is the optimal social capital.
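The selection step can be reproduced with a normalized row-sum priority vector over .P^{G,1} (one simple instance of normalizing rank aggregation; the book follows [5]). It yields exactly the reported order:

```python
# Priorities from normalized row sums of the final group preference P^{G,1}.
P = [
    [0.5,    0.4702, 0.5725, 0.5692, 0.4483],
    [0.5298, 0.5,    0.4495, 0.6177, 0.4607],
    [0.4275, 0.5505, 0.5,    0.5793, 0.3642],
    [0.4308, 0.3823, 0.4207, 0.5,    0.4242],
    [0.5517, 0.5393, 0.6358, 0.5758, 0.5],
]
row_sums = [sum(row) for row in P]
total = sum(row_sums)
priority = [s / total for s in row_sums]
ranking = sorted(range(5), key=lambda i: priority[i], reverse=True)
print([i + 1 for i in ranking])  # [5, 1, 2, 3, 4]
```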

8.3 Application of CDOOC Clustering Method in Car-Sharing Service Provider Selection

With the support of Internet technology and various network platforms, the sharing economy is developing rapidly and gradually becoming a new economic form accepted by the public. Car sharing, as a new mode of transportation sharing, has gained popularity in some major cities in China, and many car-sharing platforms have emerged, such as EVCard, Gofun, and PandAuto. For the government, choosing an appropriate car-sharing platform requires the consideration of many attributes, such as price, quality, and convenience, as well as the participation of large-scale DMs. Car sharing is becoming an important option to meet the travel needs of Chinese people under the dual pressure of economic downturn and limited road resources. Despite a nearly 30% decline in active users in February 2020 due to COVID-19, the government and the market are optimistic about the medium- and long-term development of China's car-sharing industry.

Pingyao, a World Heritage Site in Shanxi Province, China, attracts millions of tourists every year. The local government decided to introduce a car-sharing platform because it believed that public transportation was severely overwhelmed with demand. Five car-sharing platforms responded positively to the government's demand and expressed their willingness to participate in the construction of car-sharing platforms. Given financial constraints and the size of the city, the government could only choose one of the platforms. As the decision concerns the public interest and urban construction, the government invited 20 experts from various departments, including the transportation sector, the finance bureau, and universities, to participate in the decision-making. Three comprehensive attributes are used: security (.a1), price (.a2), and comfort (.a3). The weight vector of the attributes is set as .ω = (0.5, 0.3, 0.2)^T.

The experts provide their evaluations by means of PLTSs, using the linguistic evaluation scale .S = {s−3 = Extremely poor, s−2 = Poor, s−1 = Somewhat poor, s0 = Neutral, s1 = Somewhat good, s2 = Good, s3 = Extremely good}. This section only adopts the opinion punishment to manage non-cooperative behaviors. To solve the problem, the following steps are required.


Input: Initial individual opinions .V l (l = 1, 2, . . . , 20), consensus threshold . GC L = 0.83, and other parameters .q = 5, .q = 2, .ρ = 1, .α1 = 0.4, .α2 = 0.3, .α3 = 0.3. Setting .ρ = 1 indicates that distance measures will be used as the only clustering measurement attribute. The individual opinions are presented as follows. ⎛ .

V1 =

{s−2 (0.2), s0 (0.3), s2 (0.4), s3 (0.1)} −1 (0.25), s2 (0.75)} ⎜ {s ⎝ {s−1 (0.3), s1 (0.7)} {s−1 (0.2), s0 (0.2), s2 (0.6)} {s−1 (0.5), s1 (0.5)}

( {s−3 (0.7), s−1 (0.2), s0 (0.1)} .

{s−2 (0.4), s0 (0.3), s3 (0.3)} {s−3 (0.1), s−1 (0.1), s0 (0.1), s2 (0.5), s3 (0.2)} {s0 (0.3), s2 (0.7)} {s−2 (0.4), s1 (0.5), s3 (0.1)}

V2 =

( {s−3 (0.3), s−1 (0.5), s3 (0.2)} .

{s−2 (0.3), s0 (0.7)} {s0 (0.4), s1 (0.6)} {s−2 (0.2), s−1 (0.3), s1 (0.5)} {s−2 (0.3), s1 (0.7)} {s−2 (0.2), s0 (0.3), s3 (0.5)}

{s−3 (0.1), s−2 (0.1), s0 (0.5), s1 (0.3)} {s−3 (0.6), s0 (0.3), s3 (0.1)} {s−3 (0.1), s2 (0.7), s0 (0.2)} {s−3 (0.3), s0 (0.5), s2 (0.2)}

V = 3



{s−1 (0.3), s2 (0.4), s3 (0.3)} {s−2 (0.25), s−1 (0.25), s1 (0.5)} ⎟ {s−3 (0.4), s−1 (0.3), s1 (0.3)} ⎠ . {s−1 (0.6), s1 (0.4)} {s2 (0.3), s3 (0.7)}

{s−3 (0.3), s−1 (0.6), s2 (0.1)} {s0 (0.3), s2 (0.3), s3 (0.4)} {s−3 (0.3), s−1 (0.3), s0 (0.3), s3 (0.1)} {s−3 (0.4), s−1 (0.6)} {s−1 (0.3), s2 (0.3), s3 (0.4)}

{s−1 (0.6), s2 (0.3), s3 (0.1)} {s−1 (0.7), s2 (0.3)} {s0 (0.2), s2 (0.5), s3 (0.3)} {s−3 (0.15), s−2 (0.3), s−1 (0.55)} {s−3 (0.3), s0 (0.3), s1 (0.4)}

{s−3 (0.3), s2 (0.7)} {s−2 (0.4), s0 (0.3), s2 (0.3)} {s−1 (0.2), s1 (0.3), s2 (0.5)} {s−1 (0.3), s1 (0.5), s2 (0.2)} {s−1 (0.2), s2 (0.8)}

{s0 (0.2), s1 (0.5), s2 (0.3)} {s−3 (0.5), s−2 (0.3), s3 (0.2)} {s−2 (0.2), s−1 (0.1), s1 (0.5), s2 (0.2)} {s−2 (0.6), s2 (0.4)} {s0 (0.1), s1 (0.3), s3 (0.6)}

)

)



{s−3 (0.1), s−2 (0.1), s−1 (0.1), ⎜ s (0.1), s (0.1), s (0.1), 1 2 ⎜ 0 ⎜ s (0.4)} ⎜ 3 ⎜ .V 4 = ⎜ {s0 (0.5), s1 (0.5)} ⎜ ⎜ {s−2 (0.3), s3 (0.7)} ⎜ ⎝ {s−1 (0.3), s1 (0.2), s2 (0.5)} {s−3 (0.3), s−2 (0.2), s1 (0.5)}

V5 = ⎝

{s−2 (0.2), s1 (0.4), s3 (0.4)} {s−2 (0.2), s−1 (0.2), s0 (0.2), s1 (0.3), s2 (0.1)} {s−3 (0.5), s−2 (0.3), s1 (0.4)} {s−3 (0.5), s−1 (0.3), s0 (0.2)} {s−3 (0.5), s−2 (0.3), s1 (0.2)}

{s−2 (0.1), s−1 (0.1), s0 (0.5), s2 (0.3)} {s−1 (0.2), s0 (0.5), s1 (0.3), s2 (0.1)} {s2 (1)} {s−3 (0.1), s−2 (0.1), s−1 (0.1), s0 (0.1), s1 (0.1), s2 (0.1), s3 (0.4)}

⎛ .

V6 =

{s−2 (0.4), s0 (0.3), s1 (0.3)} −3 (0.3), s−2 (0.3), s0 (0.4)} ⎜ {s ⎝ {s−1 (0.5), s1 (0.25), s2 (0.25)} {s−1 (0.2), s0 (0.2), s2 (0.6)} {s−3 (0.1), s−1 (0.2), s1 (0.7)}

⎟ ⎟ {s−3 (0.3), s−2 (0.2), s−1 (0.2), s1 (0.3)} ⎟ ⎟ ⎟ {s−3 (0.3), s−2 (0.1), s0 (0.2), s−1 (0.4)} ⎟ . ⎟ ⎟ {s0 (0.25), s2 (0.75)} ⎟ ⎠ {s−3 (0.3), s−2 (0.5), s0 (0.2)} {s−3 (0.3), s−2 (0.2), s−1 (0.3), s2 (0.2)}

{s−1 (0.2), s1 (0.3), s3 (0.5)} {s−2 (0.25), s0 (0.55), s3 (0.2)} {s0 (1)} {s−1 (0.8), s2 (0.2)}

{s−2 (0.4), s−1 (0.2), s0 (0.1), s1 (0.3)} {s1 (1)} {s−1 (0.3), s1 (0.7)} {s0 (0.3), s1 (0.3), s2 (0.4)}

{s−1 (0.3), s1 (0.7)}

{s2 (1)}

{s−2 (0.3), s−1 (0.4), s2 (0.3)} {s0 (0.4), s1 (0.6)} {s−1 (0.4), s1 (0.3), s2 (0.3)} {s−3 (0.2), s−2 (0.6), s0 (0.2)} {s−3 (0.2), s−2 (0.6), s0 (0.2)}



.

V =

{s−2 (0.3), s1 (0.2), s3 (0.5)} {s−2 (0.4), s−1 (0.3), s1 (0.3)} {s−1 (0.35), s1 (0.65)} {s−1 (0.2), s0 (0.2), s2 (0.2), s3 (0.4)}

{s2 (1)} {s0 (0.7), s3 (0.3)} {s0 (0.1), s2 (0.1), s3 (0.8)} {s0 (0.5), s3 (0.5)}

⎞ ⎠.

{s−3 (0.3), s1 (0.7)} {s−2 (0.3), s−1 (0.2), s0 (0.5)} ⎟ {s−2 (0.2), s1 (0.6), s3 (0.2)} ⎠ . {s−2 (0.3), s0 (0.5), s2 (0.2)} {s−1 (0.2), s3 (0.8)}

( {s−3 (0.2), s0 (0.2), s2 (0.2), s3 (0.4)} {s−3 (0.1), s−1 (0.4), s1 (0.5)} {s−2 (0.2), s0 (0.7), s1 (0.1)} ) 7

.



⎛ {s−3 (0.2), s−1 (0.4), s3 (0.4)} .

.

{s0 (0.4), s3 (0.6)} {s−3 (0.1), s−2 (0.4), s0 (0.5)} {s−2 (0.1), s0 (0.9)} {s0 (0.4), s2 (0.4), s3 (0.2)}

.


⎛ .

{s−1 (0.3), s1 (0.7)} {s−1 (0.1), s2 (0.9)} ⎜ {s ⎝ −1 (0.2), s1 (0.2), s2 (0.6)} {s−1 (0.7), s2 (0.3)} {s0 (0.3), s1 (0.5), s2 (0.2)}

V8 =

⎛ {s−3 (0.3), s1 (0.7)} .

⎜ V9 = ⎝

{s−3 (0.1), s−2 (0.1), s−1 (0.1), s0 (0.1), s1 (0.1), s2 (0.1), s3 (0.4)} {s−2 (0.3), s3 (0.7)} {s−1 (0.3), s1 (0.2), s2 (0.5)} {s−3 (0.3), s−2 (0.7)}

{s−1 (0.2), s1 (0.8)} {s−2 (0.3), s0 (0.5), s2 (0.2)} {s0 (1)} {s−1 (0.2), s1 (0.6), s2 (0.2)} {s−1 (0.3), s1 (0.3), s2 (0.4)}

V 10 =

.

V

11

⎜ ⎜ =⎜ ⎜ ⎝

s3 (0.5)} {s−3 (0.1), s−2 (0.1), s−1 (0.1), s0 (0.1), s1 (0.1), s2 (0.1), s3 (0.4)} {s−2 (0.3), s3 (0.7)} {s−1 (0.3), s1 (0.2), s2 (0.5)} {s−3 (0.3), s−2 (0.7)}

{s−2 (0.2), s−1 (0.2), s0 (0.2), s1 (0.2), s2 (0.2)} {s−3 (0.3), s−2 (0.3), s1 (0.4)} {s−3 (0.5), s−1 (0.3), s0 (0.2)} {s−3 (0.2), s−2 (0.6), s1 (0.2)}

{s−2 (0.1), s−1 (0.1), s0 (0.4), s1 (0.4)} {s0 (0.5), s2 (0.5)} {s−3 (0.3), s0 (0.3), s2 (0.4)} {s−2 (0.4), s−1 (0.3), s0 (0.3)}

{s−3 (0.5), s−2 (0.2), s0 (0.2)} {s−3 (0.2), s−2 (0.2), s1 (0.2), s3 (0.4)}

⎛ .

V 13 =

{s−2 (0.8), s−1 (0.2)} 0 (0.3), s2 (0.7)} ⎜ {s ⎝ {s−3 (0.1), s−1 (0.2), s1 (0.7)} {s0 (0.2), s2 (0.8)} {s−2 (0.2), s0 (0.5), s2 (0.3)}

⎛ .

V 14 =





{s−3 (0.3), s−2 (0.5), s−1 (0.2)}

⎟ ⎟ ⎟. ⎟ ⎠

{s−1 (0.15), s0 (0.35), s2 (0.5)} −2 (0.7), s−1 (0.1), s0 (0.1)} ⎜ {s ⎝ {s−1 (0.47), s3 (0.53)} {s2 (1)} {s0 (0.6), s3 (0.4)}

{s−3 (0.3), s0 (0.3), s2 (0.4)} {s−2 (0.3), s−1 (0.3), s0 (0.3), s3 (0.1)}



{s−2 (0.2), s0 (0.8)} {s−2 (0.2), s0 (0.23), s2 (0.57)} {s1 (0.2), s3 (0.8)} {s−1 (0.4), s0 (0.45), s3 (0.15)} ⎟ {s0 (1)} {s−1 (0.83), s1 (0.17)} {s0 (0.2), s2 (0.8)} {s−2 (0.2), s0 (0.2), s1 (0.6)} {s0 (0.51), s3 (0.49)} {s0 (0.33), s2 (0.67)}

{s1 (1)} {s0 (0.2), s3 (0.8)} {s0 (0.3), s3 (0.7)} {s1 (0.3), s3 (0.7)} {s−2 (0.2), s0 (0.5), s1 (0.3)}

{s−2 (0.3), s0 (0.4), s1 (0.3)} {s−1 (0.5), s2 (0.5)} {s−3 (0.1), s0 (0.23), s2 (0.67)} {s−3 (0.76), s0 (0.24)} {s−3 (0.4), s−1 (0.2), s0 (0.4)}

⎟ ⎠.

{s−2 (0.2), s−1 (0.3), s1 (0.5)} {s−1 (0.9), s0 (0.1)} ⎟ {s−1 (0.7), s1 (0.3)} ⎠. {s−1 (1)} {s−2 (0.8), s1 (0.2)}

{s−2 (0.2), s−1 (0.2), s0 (0.2), s1 (0.2), s2 (0.2)} {s−1 (0.2), s0 (0.4), s1 (0.4)} {s−3 (0.3), s−2 (0.3), s1 (0.3), s3 (0.1)} {s0 (0.5), s2 (0.5)}

−1 (0.5), s1 (0.5)} {s3 (1)} ⎜ {s−1 (0.6), s1 (0.4)} ⎜ {s (0.1), s (0.2), s (0.1), ⎜ −3 −2 −1 ⎝ s0 (0.1), s1 (0.1), s2 (0.1), s3 (0.3)} {s−1 (0.5), s1 (0.4), s3 (0.1)}

V 12 =

{s−1 (0.26), s1 (0.74)} {s−1 (0.2), s1 (0.6), s2 (0.2)} {s0 (0.6), s2 (0.4)} {s0 (0.7), s2 (0.3)} {s−1 (0.2), s1 (0.8)}

{s−2 (0.2), s1 (0.6), s3 (0.2)}

⎛ {s .



{s−3 (0.3), s−2 (0.5), s−1 (0.2)}

{s−1 (0.3), s2 (0.6), s3 (0.1)} −1 (0.1), s0 (0.4), s1 (0.4), s2 (0.1)} ⎜ {s ⎝ {s−1 (0.5), s1 (0.5)} {s−1 (0.1), s2 (0.9)} {s−3 (0.7), s1 (0.3)}

⎛ {s−3 (0.3), s1 (0.2),



{s−1 (0.19), s0 (0.1), s1 (0.71)} {s−1 (0.4), s1 (0.6)} ⎟ {s−3 (0.5), s−1 (0.2), s1 (0.3)} ⎠ . {s0 (0.3), s1 (0.3), s2 (0.4)} {s2 (1)}

{s−2 (0.2), s1 (0.6), s3 (0.2)}

⎛ .


⎟ ⎟. ⎠



{s−2 (1)} {s0 (1)} ⎟ {s3 (1)} ⎠. {s0 (0.8), s2 (0.2)} {s0 (0.6), s2 (0.3), s3 (0.1)}



{s0 (0.1), s2 (0.9)} {s0 (0.3), s2 (0.7)} ⎟ {s−1 (0.8), s1 (0.2)} ⎠. {s−2 (0.86), s1 (0.14)} {s0 (0.25), s1 (0.5), s2 (0.25)}


⎛ .

{s−2 (0.6), s−1 (0.2), s2 (0.2)} 0 (0.1), s2 (0.2), s3 (0.7)} ⎜ {s ⎝ {s−1 (0.7), s3 (0.3)} {s−2 (0.3), s0 (0.5), s2 (0.2)} {s0 (0.6), s3 (0.4)}

V 15 =

{s−2 (0.3), s1 (0.7)} {s−1 (0.15), s2 (0.85)} {s−3 (0.1), s0 (0.2), s2 (0.7)} {s−3 (0.2), s0 (0.2), s3 (0.6)} {s−3 (0.6), s0 (0.4)}

⎛ .

V 16 =

{s−3 (0.2), s0 (0.2), s2 (0.6)} {s−2 (0.3), s1 (0.2), s3 (0.5)} ⎜ {s ⎝ −2 (0.4), s−1 (0.3), s1 (0.3)} {s−1 (0.35), s1 (0.65)} {s−1 (0.2), s0 (0.6), s2 (0.2)}

⎛ .

V 17 =

{s−3 (0.2), s−1 (0.2), s0 (0.6)} {s0 (0.3), s3 (0.7)} ⎜ {s ⎝ 0 (0.4), s2 (0.4), s3 (0.2)} {s0 (0.3), s2 (0.7)} {s−2 (0.4), s1 (0.6)}

{s−3 (0.1), s−1 (0.4), s1 (0.5)} {s2 (1)} {s0 (0.7), s3 (0.3)} {s0 (0.1), s2 (0.1), s3 (0.8)} {s3 (1)}

{s−3 (0.3), s−1 (0.6), s1 (0.1)} {s0 (0.4), s2 (0.4), s3 (0.2)} {s−3 (0.4), s−1 (0.3), s0 (0.3)} {s−3 (0.4), s−1 (0.6)} {s2 (0.6), s3 (0.4)}

⎛ .

V 18 =

{s−2 (0.8), s−1 (0.2)} 0 (1)} ⎜ {s ⎝ {s−3 (0.1), s−1 (0.2), s1 (0.7)} {s0 (1)} {s−2 (0.2), s2 (0.8)}

⎛ .

V 19 =

{s−3 (0.3), s−1 (0.1), s1 (0.6)} 3 (1)} ⎜ {s ⎝ {s−1 (1)} {s0 (0.9), s2 (0.1)} {s2 (1)}

⎛ {s−3 (0.1), s−1 (0.2), s1 (0.1), .

V 20 = ⎝

s2 (0.4), s3 (0.2)} {s−1 (0.5), s2 (0.5)} {s3 (1)} {s−2 (1)} {s−2 (0.45), s0 (0.55)}



{s0 (1)} {s0 (1)} ⎟ {s−1 (1)} ⎠. {s−2 (0.25), s1 (0.75)} {s−2 (0.2), s0 (0.3), s1 (0.2), s3 (0.3)}

{s1 (1)} {s0 (0.2), s3 (0.8)} {s0 (0.3), s3 (0.7)} {s1 (1)} {s2 (0.2), s1 (0.8)}

{s−1 (0.5), s1 (0.5)} {s0 (1)} {s−1 (1)} {s1 (1)} {s0 (0.2), s2 (0.8)}

{s−1 (0.25), s1 (0.75)} {s−1 (0.2), s1 (0.8)} {s−2 (0.8), s0 (0.1), s1 (0.1)} {s−1 (0.5), s1 (0.5)} {s−1 (0.3), s1 (0.3), s3 (0.6)}



{s−2 (0.2), s0 (0.7), s1 (0.1)} {s0 (0.4), s3 (0.6)} ⎟ {s−3 (0.1), s−2 (0.4), s0 (0.5)} ⎠ . {s−2 (0.1), s0 (0.9)} {s0 (0.4), s2 (0.4), s3 (0.2)}



{s−3 (0.3), s1 (0.7)} {s−2 (0.4), s0 (0.3), s2 (0.3)} ⎟ {s−1 (0.6), s1 (0.3), s2 (0.1)} ⎠ . {s−1 (0.3), s1 (0.5), s2 (0.2)} {s−1 (1)}



{s−2 (1)} {s0 (1)} ⎟ {s3 (1)} ⎠. {s0 (0.8), s2 (0.2)} {s0 (0.6), s2 (0.3), s3 (0.1)}



{s−2 (0.1), s0 (0.2), s2 (0.7)} {s−1 (1)} ⎟ {s0 (1)} ⎠. {s−1 (1)} {s0 (0.7), s2 (0.3)}

{s−2 (0.1), s1 (0.9)} {s−3 (0.2), s−1 (0.4), s1 (0.4)} {s−2 (0.1), s0 (0.7), s1 (0.2)} {s−1 (0.4), s1 (0.2), s3 (0.4)} {s−3 (0.1), s−2 (0.2), s0 (0.2), s2 (0.5)}

⎞ ⎠.
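With .ρ = 1, Algorithm 3 clusters the experts purely by the distance between their PLTS opinion matrices. The exact compatibility distance is defined in Chap. 5; the sketch below uses a simpler stand-in — the mean absolute difference of expected linguistic subscripts — only to illustrate the mechanics (the helper names and toy data are ours, not the book's):

```python
# Illustrative stand-in for a PLTS distance measure (NOT the book's exact
# compatibility distance): compare the expected subscript of each entry.

def plts_score(plts):
    """Expected subscript of a PLTS given as {subscript: probability}."""
    total = sum(plts.values())
    return sum(r * p for r, p in plts.items()) / total

def matrix_distance(V_a, V_b):
    """Mean absolute score difference over corresponding entries."""
    n, m = len(V_a), len(V_a[0])
    diff = sum(abs(plts_score(V_a[i][j]) - plts_score(V_b[i][j]))
               for i in range(n) for j in range(m))
    return diff / (n * m)

# Toy 1x2 opinion matrices (hypothetical PLTSs, not from the case data):
Va = [[{-2: 0.2, 0: 0.3, 2: 0.5}, {1: 1.0}]]
Vb = [[{0: 0.5, 2: 0.5}, {-1: 0.4, 1: 0.6}]]
print(matrix_distance(Va, Vb))
```

Experts whose pairwise distance is small enough (compatibility at least .C T = 0.77 in the case study) end up in the same cluster.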

Step 1: Normalize the individual opinions as .R^l (l = 1, 2, . . . , 20).
Step 2: Use Algorithm 3 to classify the large group into six clusters by setting the clustering threshold as .C T = 0.77. Table 8.16 shows the clustering results. The opinion of cluster .C^1 is presented as follows.
.H^1(1, 1) = {s−3 (0.28), s−2 (0.04), s−1 (0.08), s0 (0.2), s1 (0.12), s2 (0.08), s3 (0.2)},
.H^1(1, 2) = {s−3 (0.12), s−2 (0.1), s−1 (0.28), s0 (0.14), s1 (0.18), s2 (0.04), s3 (0.14)},
.H^1(1, 3) = {s−3 (0.18), s−2 (0.18), s−1 (0.14), s0 (0.02), s1 (0.34), s2 (0.08), s3 (0.06)},
.H^1(2, 1) = {s−3 (0.02), s−2 (0.12), s−1 (0.09), s0 (0.24), s1 (0.02), s2 (0.23), s3 (0.28)},
.H^1(2, 2) = {s−3 (0), s−2 (0.09), s−1 (0.04), s0 (0.37), s1 (0.16), s2 (0.24), s3 (0.1)},
.H^1(2, 3) = {s−3 (0), s−2 (0.21), s−1 (0.09), s0 (0.2), s1 (0.38), s2 (0.12), s3 (0)},
.H^1(3, 1) = {s−3 (0.02), s−2 (0.06), s−1 (0.12), s0 (0.2), s1 (0.2), s2 (0.18), s3 (0.22)},
.H^1(3, 2) = {s−3 (0.2), s−2 (0.1), s−1 (0.18), s0 (0.32), s1 (0.16), s2 (0), s3 (0.04)},
.H^1(3, 3) = {s−3 (0.08), s−2 (0), s−1 (0.28), s0 (0.1), s1 (0.32), s2 (0.22), s3 (0)},
.H^1(4, 1) = {s−3 (0), s−2 (0), s−1 (0.1), s0 (0.16), s1 (0.04), s2 (0.7), s3 (0)},

Table 8.16 Clustering results when setting .q = 2, .q = 5 and .C T = 0.77

k   | .C^k | .q_k | .e_l                      | .z^k   | .CLIn^k | .CLEx^k
1st | .C^1 | 5    | .e11, .e17, .e1, .e2, .e5 | 0.8091 | 0.8735  | 0.2089
2nd | .C^2 | 4    | .e15, .e6, .e10, .e9      | 0.783  | 0.8712  | 0.1891
3rd | .C^3 | 3    | .e7, .e16, .e12           | 0.8384 | 0.8383  | 0.17
4th | .C^4 | 2    | .e8, .e20                 | 0.7887 | 0.8506  | 0.1426
5th | .C^5 | 2    | .e3, .e4                  | 0.784  | 0.8468  | 0.1421
6th | .C^6 | 2    | .e13, .e18                | 0.9254 | 0.8061  | 0.1473

.H^1(4, 2) = {s−3 (0.26), s−2 (0.06), s−1 (0.46), s0 (0.04), s1 (0.14), s2 (0.04), s3 (0)},
.H^1(4, 3) = {s−3 (0.06), s−2 (0), s−1 (0.24), s0 (0.12), s1 (0.34), s2 (0.24), s3 (0)},
.H^1(5, 1) = {s−3 (0.08), s−2 (0.32), s−1 (0.12), s0 (0.02), s1 (0.34), s2 (0.02), s3 (0.1)},
.H^1(5, 2) = {s−3 (0.04), s−2 (0.08), s−1 (0.12), s0 (0.06), s1 (0.18), s2 (0.18), s3 (0.34)},
.H^1(5, 3) = {s−3 (0), s−2 (0.06), s−1 (0.3), s0 (0.06), s1 (0), s2 (0.05), s3 (0.08)}.

The opinion of cluster .C^2 is presented as follows.
.H^2(1, 1) = {s−3 (0.075), s−2 (0.25), s−1 (0.125), s0 (0.075), s1 (0.25), s2 (0.2), s3 (0.025)},
.H^2(1, 2) = {s−3 (0), s−2 (0.2), s−1 (0.165), s0 (0), s1 (0.51), s2 (0.075), s3 (0.05)},
.H^2(1, 3) = {s−3 (0.15), s−2 (0.175), s−1 (0.125), s0 (0.25), s1 (0.3), s2 (0), s3 (0)},
.H^2(2, 1) = {s−3 (0.1), s−2 (0.1), s−1 (0.15), s0 (0.15), s1 (0.125), s2 (0.1), s3 (0.275)},
.H^2(2, 2) = {s−3 (0), s−2 (0.05), s−1 (0.1375), s0 (0.15), s1 (0.35), s2 (0.3125), s3 (0)},
.H^2(2, 3) = {s−3 (0), s−2 (0.1), s−1 (0.3), s0 (0.5), s1 (0.1), s2 (0), s3 (0)},
.H^2(3, 1) = {s−3 (0), s−2 (0.075), s−1 (0.425), s0 (0), s1 (0.1875), s2 (0.0625), s3 (0.25)},
.H^2(3, 2) = {s−3 (0.1), s−2 (0.075), s−1 (0.1), s0 (0.2), s1 (0.175), s2 (0.35), s3 (0)},
.H^2(3, 3) = {s−3 (0), s−2 (0.05), s−1 (0.425), s0 (0.125), s1 (0.225), s2 (0.125), s3 (0.05)},
.H^2(4, 1) = {s−3 (0), s−2 (0.075), s−1 (0.15), s0 (0.175), s1 (0.05), s2 (0.55), s3 (0)},
.H^2(4, 2) = {s−3 (0.225), s−2 (0.15), s−1 (0.075), s0 (0.325), s1 (0), s2 (0.075), s3 (0.15)},
.H^2(4, 3) = {s−3 (0.075), s−2 (0.1375), s−1 (0.25), s0 (0.2), s1 (0.1875), s2 (0.15), s3 (0)},
.H^2(5, 1) = {s−3 (0.275), s−2 (0.175), s−1 (0.05), s0 (0.15), s1 (0.25), s2 (0), s3 (0.1)},
.H^2(5, 2) = {s−3 (0.25), s−2 (0.3), s−1 (0.1), s0 (0.1), s1 (0.25), s2 (0), s3 (0)},
.H^2(5, 3) = {s−3 (0), s−2 (0.35), s−1 (0.125), s0 (0.15), s1 (0.1), s2 (0), s3 (0.275)}.

The opinion of cluster .C^3 is presented as follows.
.H^3(1, 1) = {s−3 (0.1333), s−2 (0), s−1 (0.1667), s0 (0.1333), s1 (0.1667), s2 (0.2667), s3 (0.1333)},
.H^3(1, 2) = {s−3 (0.0667), s−2 (0.0667), s−1 (0.2667), s0 (0.2667), s1 (0.3333), s2 (0), s3 (0)},
.H^3(1, 3) = {s−3 (0), s−2 (0.2), s−1 (0), s0 (0.5433), s1 (0.0667), s2 (0.19), s3 (0)},
.H^3(2, 1) = {s−3 (0), s−2 (0.2), s−1 (0), s0 (0), s1 (0.1333), s2 (0), s3 (0.6667)},
.H^3(2, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0), s1 (0.0667), s2 (0.0667), s3 (0.2667)},
.H^3(2, 3) = {s−3 (0), s−2 (0), s−1 (0.3333), s0 (0.4167), s1 (0), s2 (0), s3 (0.45)},
.H^3(3, 1) = {s−3 (0), s−2 (0.3667), s−1 (0.4), s0 (0), s1 (0.3333), s2 (0), s3 (0)},
.H^3(3, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.8), s1 (0), s2 (0), s3 (0.2)},
.H^3(3, 3) = {s−3 (0.0667), s−2 (0.2667), s−1 (0.2767), s0 (0.3333), s1 (0.0567), s2 (0), s3 (0)},


.H^3(4, 1) = {s−3 (0.0333), s−2 (0.0667), s−1 (0.2667), s0 (0.0333), s1 (0.4667), s2 (0.0333), s3 (0.1)},
.H^3(4, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.1333), s1 (0), s2 (0.3333), s3 (0.5333)},
.H^3(4, 3) = {s−3 (0), s−2 (0.1333), s−1 (0), s0 (0.6667), s1 (0.2), s2 (0), s3 (0)},
.H^3(5, 1) = {s−3 (0), s−2 (0), s−1 (0.3), s0 (0.2667), s1 (0.1333), s2 (0.1333), s3 (0.1667)},
.H^3(5, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.3667), s1 (0), s2 (0), s3 (0.6333)},
.H^3(5, 3) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.3767), s1 (0), s2 (0.49), s3 (0.1333)}.

The opinion of cluster .C^4 is presented as follows.
.H^4(1, 1) = {s−3 (0.05), s−2 (0), s−1 (0.25), s0 (0), s1 (0.4), s2 (0.2), s3 (0.1)},
.H^4(1, 2) = {s−3 (0), s−2 (0), s−1 (0.225), s0 (0), s1 (0.775), s2 (0), s3 (0)},
.H^4(1, 3) = {s−3 (0), s−2 (0.05), s−1 (0.095), s0 (0.05), s1 (0.805), s2 (0), s3 (0)},
.H^4(2, 1) = {s−3 (0), s−2 (0), s−1 (0.3), s0 (0), s1 (0), s2 (0.7), s3 (0)},
.H^4(2, 2) = {s−3 (0), s−2 (0.15), s−1 (0.1), s0 (0.25), s1 (0.4), s2 (0.1), s3 (0)},
.H^4(2, 3) = {s−3 (0.1), s−2 (0), s−1 (0.4), s0 (0), s1 (0.5), s2 (0), s3 (0)},
.H^4(3, 1) = {s−3 (0), s−2 (0), s−1 (0.1), s0 (0), s1 (0.1), s2 (0.3), s3 (0.5)},
.H^4(3, 2) = {s−3 (0), s−2 (0.4), s−1 (0), s0 (0.55), s1 (0.05), s2 (0), s3 (0)},
.H^4(3, 3) = {s−3 (0.25), s−2 (0.05), s−1 (0.1), s0 (0.35), s1 (0.25), s2 (0), s3 (0)},
.H^4(4, 1) = {s−3 (0), s−2 (0.5), s−1 (0.35), s0 (0), s1 (0), s2 (0.15), s3 (0)},
.H^4(4, 2) = {s−3 (0), s−2 (0), s−1 (0.35), s0 (0), s1 (0.55), s2 (0.1), s3 (0)},
.H^4(4, 3) = {s−3 (0), s−2 (0), s−1 (0.2), s0 (0.15), s1 (0.25), s2 (0.2), s3 (0.2)},
.H^4(5, 1) = {s−3 (0), s−2 (0.225), s−1 (0), s0 (0.425), s1 (0.25), s2 (0.1), s3 (0)},
.H^4(5, 2) = {s−3 (0), s−2 (0), s−1 (0.3), s0 (0), s1 (0.3), s2 (0.2), s3 (0.3)},
.H^4(5, 3) = {s−3 (0.05), s−2 (0.1), s−1 (0), s0 (0.1), s1 (0), s2 (0.75), s3 (0)}.

The opinion of cluster .C^5 is presented as follows.
.H^5(1, 1) = {s−3 (0.2), s−2 (0.05), s−1 (0.3), s0 (0.05), s1 (0.05), s2 (0.05), s3 (0.3)},
.H^5(1, 2) = {s−3 (0), s−2 (0.1), s−1 (0.3), s0 (0), s1 (0.2), s2 (0.15), s3 (0.25)},
.H^5(1, 3) = {s−3 (0.15), s−2 (0.1), s−1 (0.1), s0 (0.1), s1 (0.4), s2 (0.15), s3 (0)},
.H^5(2, 1) = {s−3 (0.05), s−2 (0.05), s−1 (0), s0 (0.5), s1 (0.4), s2 (0), s3 (0)},
.H^5(2, 2) = {s−3 (0), s−2 (0.1), s−1 (0.45), s0 (0.1), s1 (0.15), s2 (0.2), s3 (0)},
.H^5(2, 3) = {s−3 (0.4), s−2 (0.2), s−1 (0), s0 (0.1), s1 (0.2), s2 (0), s3 (0.1)},
.H^5(3, 1) = {s−3 (0.3), s−2 (0.15), s−1 (0), s0 (0.15), s1 (0), s2 (0), s3 (0.4)},
.H^5(3, 2) = {s−3 (0.15), s−2 (0.15), s−1 (0), s0 (0.1), s1 (0.2), s2 (0.25), s3 (0.15)},
.H^5(3, 3) = {s−3 (0), s−2 (0.1), s−1 (0.05), s0 (0.125), s1 (0.25), s2 (0.475), s3 (0)},
.H^5(4, 1) = {s−3 (0.05), s−2 (0.35), s−1 (0.15), s0 (0.1), s1 (0.1), s2 (0.25), s3 (0)},
.H^5(4, 2) = {s−3 (0.325), s−2 (0.15), s−1 (0.425), s0 (0.1), s1 (0), s2 (0), s3 (0)},
.H^5(4, 3) = {s−3 (0.15), s−2 (0.55), s−1 (0), s0 (0.1), s1 (0), s2 (0.2), s3 (0)},
.H^5(5, 1) = {s−3 (0), s−2 (0.1), s−1 (0), s0 (0.25), s1 (0.25), s2 (0.1), s3 (0)},
.H^5(5, 2) = {s−3 (0.4), s−2 (0.15), s−1 (0), s0 (0.15), s1 (0.3), s2 (0), s3 (0)},
.H^5(5, 3) = {s−3 (0.15), s−2 (0.1), s−1 (0.15), s0 (0.05), s1 (0.15), s2 (0.1), s3 (0.3)}.

The opinion of cluster .C^6 is presented as follows.
.H^6(1, 1) = {s−3 (0), s−2 (0.8), s−1 (0.2), s0 (0), s1 (0), s2 (0), s3 (0)},
.H^6(1, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0), s1 (1), s2 (0), s3 (0)},
.H^6(1, 3) = {s−3 (0), s−2 (1), s−1 (0), s0 (0), s1 (0), s2 (0), s3 (0)},
.H^6(2, 1) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.65), s1 (0), s2 (0.35), s3 (0)},
.H^6(2, 2) = {s−3 (0), s−2 (0), s−1 (0.2), s0 (0), s1 (0), s2 (0), s3 (0.8)},
.H^6(2, 3) = {s−3 (0), s−2 (0), s−1 (0), s0 (1), s1 (0), s2 (0), s3 (0)},
.H^6(3, 1) = {s−3 (0.1), s−2 (0), s−1 (0.2), s0 (0), s1 (0.3), s2 (0), s3 (0.4)},
.H^6(3, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.3), s1 (0), s2 (0), s3 (0.7)},
.H^6(3, 3) = {s−3 (0), s−2 (0), s−1 (0), s0 (0), s1 (0), s2 (0), s3 (1)},
.H^6(4, 1) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.6), s1 (0), s2 (0.4), s3 (0)},
.H^6(4, 2) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.1), s1 (0.65), s2 (0), s3 (0.35)},
.H^6(4, 3) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.8), s1 (0), s2 (0.2), s3 (0)},
.H^6(5, 1) = {s−3 (0), s−2 (0.2), s−1 (0), s0 (0.25), s1 (0), s2 (0.55), s3 (0)},
.H^6(5, 2) = {s−3 (0), s−2 (0.2), s−1 (0), s0 (0.25), s1 (0.55), s2 (0), s3 (0)},
.H^6(5, 3) = {s−3 (0), s−2 (0), s−1 (0), s0 (0.6), s1 (0), s2 (0.3), s3 (0.1)}.

Step 3: Aggregate the individual opinions to obtain the final group opinion by using the PLWA operator (i.e., Eq. (5.3)).
.H^G(1, 1) = {s−3 (0.1358), s−2 (0.1576), s−1 (0.1755), s0 (0.0889), s1 (0.1709), s2 (0.1405), s3 (0.1307)},
.H^G(1, 2) = {s−3 (0.0364), s−2 (0.0843), s−1 (0.2098), s0 (0.0746), s1 (0.4769), s2 (0.0439), s3 (0.0742)},
.H^G(1, 3) = {s−3 (0.0911), s−2 (0.2417), s−1 (0.0842), s0 (0.1724), s1 (0.3242), s2 (0.0734), s3 (0.0131)},
.H^G(2, 1) = {s−3 (0.0304), s−2 (0.0858), s−1 (0.0907), s0 (0.2421), s1 (0.1082), s2 (0.2173), s3 (0.2256)},
.H^G(2, 2) = {s−3 (0), s−2 (0.068), s−1 (0.1199), s0 (0.184), s1 (0.2015), s2 (0.2824), s3 (0.1442)},
.H^G(2, 3) = {s−3 (0.0711), s−2 (0.0912), s−1 (0.1552), s0 (0.3686), s1 (0.198), s2 (0.0251), s3 (0.0907)},
.H^G(3, 1) = {s−3 (0.0608), s−2 (0.0953), s−1 (0.2175), s0 (0.0644), s1 (0.2421), s2 (0.0941), s3 (0.2259)},
.H^G(3, 2) = {s−3 (0.0861), s−2 (0.1191), s−1 (0.0594), s0 (0.3814), s1 (0.1072), s2 (0.1068), s3 (0.14)},
.H^G(3, 3) = {s−3 (0.0693), s−2 (0.0828), s−1 (0.2255), s0 (0.1837), s1 (0.2069), s2 (0.1492), s3 (0.0826)},
.H^G(4, 1) = {s−3 (0.0128), s−2 (0.1466), s−1 (0.1658), s0 (0.1747), s1 (0.1114), s2 (0.3717), s3 (0.017)},
.H^G(4, 2) = {s−3 (0.1456), s−2 (0.0633), s−1 (0.2245), s0 (0.1086), s1 (0.1955), s2 (0.0951), s3 (0.1674)},
.H^G(4, 3) = {s−3 (0.048), s−2 (0.1268), s−1 (0.1259), s0 (0.3296), s1 (0.1762), s2 (0.1649), s3 (0.0285)},
.H^G(5, 1) = {s−3 (0.1135), s−2 (0.1752), s−1 (0.0872), s0 (0.21), s1 (0.2163), s2 (0.1281), s3 (0.0695)},
.H^G(5, 2) = {s−3 (0.1154), s−2 (0.1222), s−1 (0.089), s0 (0.1441), s1 (0.2435), s2 (0.0678), s3 (0.2325)},
.H^G(5, 3) = {s−3 (0.0284), s−2 (0.1072), s−1 (0.1076), s0 (0.2146), s1 (0.0402), s2 (0.3531), s3 (0.1487)}.

Step 4: Calculate the overall attribute values Z(xi) (i = 1, 2, ..., 5) based on Eq. (5.3).

Z(x1) = {s−3(0.097), s−2(0.1524), s−1(0.1675), s0(0.1013), s1(0.2934), s2(0.0981), s3(0.0903)},
Z(x2) = {s−3(0.0294), s−2(0.0815), s−1(0.1123), s0(0.25), s1(0.1542), s2(0.1984), s3(0.1742)},
Z(x3) = {s−3(0.0701), s−2(0.0999), s−1(0.1717), s0(0.1834), s1(0.1946), s2(0.1089), s3(0.1715)},
Z(x4) = {s−3(0.0597), s−2(0.1176), s−1(0.1755), s0(0.1859), s1(0.1496), s2(0.2474), s3(0.0644)},
Z(x5) = {s−3(0.0971), s−2(0.1457), s−1(0.0918), s0(0.1912), s1(0.1893), s2(0.155), s3(0.1342)}.

Step 5: Based on the score function of PLTSs proposed in [4], the scores of the overall attribute values are calculated as: E(Z(x1)) = s−0.0031, E(Z(x2)) = s0.7099, E(Z(x3)) = s0.3451, E(Z(x4)) = s0.2478, and E(Z(x5)) = s0.2276. Therefore, the ranking is obtained as x2 ≻ x3 ≻ x4 ≻ x5 ≻ x1; that is, x2 is the best alternative.
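Step 5 can be reproduced directly from the Step 4 values. A minimal sketch, assuming the score function of [4], E(H) = s_α with α = Σ r·p(r) / Σ p(r); the tiny gaps versus the printed scores (e.g., −0.0028 here vs. s−0.0031 above) come from rounding in the printed Z values.

```python
# Score function of a PLTS (per [4]): E(H) = s_alpha with
# alpha = sum(r * p(r)) / sum(p(r)); each PLTS is {term_index: probability}.
Z = {
    "x1": {-3: 0.097, -2: 0.1524, -1: 0.1675, 0: 0.1013, 1: 0.2934, 2: 0.0981, 3: 0.0903},
    "x2": {-3: 0.0294, -2: 0.0815, -1: 0.1123, 0: 0.25,   1: 0.1542, 2: 0.1984, 3: 0.1742},
    "x3": {-3: 0.0701, -2: 0.0999, -1: 0.1717, 0: 0.1834, 1: 0.1946, 2: 0.1089, 3: 0.1715},
    "x4": {-3: 0.0597, -2: 0.1176, -1: 0.1755, 0: 0.1859, 1: 0.1496, 2: 0.2474, 3: 0.0644},
    "x5": {-3: 0.0971, -2: 0.1457, -1: 0.0918, 0: 0.1912, 1: 0.1893, 2: 0.155,  3: 0.1342},
}

def score(pltss):
    """Expected subscript of the linguistic terms under their probabilities."""
    total = sum(pltss.values())
    return sum(r * p for r, p in pltss.items()) / total

ranking = sorted(Z, key=lambda x: score(Z[x]), reverse=True)
# ranking == ['x2', 'x3', 'x4', 'x5', 'x1'], matching x2 ≻ x3 ≻ x4 ≻ x5 ≻ x1
```

Sorting alternatives by this score recovers the ranking stated above, with x2 the best alternative.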


8 Practical Applications

References

1. Chen, J. Q., Ma, L. T., Wang, C., Zhang, H., & Ha, M. H. (2014). Comprehensive evaluation model for coal mine safety based on uncertain random variables. Safety Science, 68, 146–152.
2. Gonzalez, T. F. (1985). Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38, 293–306.
3. Liu, Y. J., Mao, S. J., Li, M., & Yao, J. M. (2007). Study of a comprehensive assessment method for coal mine safety based on a hierarchical grey analysis. Journal of China University of Mining & Technology, 17, 6–10.
4. Pang, Q., Wang, H., & Xu, Z. (2016). Probabilistic linguistic term sets in multi-attribute group decision making. Information Sciences, 369, 128–143.
5. Xu, Z., & Da, Q. (2005). A least deviation method to obtain a priority vector of a fuzzy preference relation. European Journal of Operational Research, 164(1), 206–216.

Chapter 9

Conclusions and Future Research Directions

Abstract This chapter summarizes the main findings and contributions of the latest research on SNLSDM presented by the authors. The limitations and future research directions of this topic are provided.

Keywords Social network large-scale decision-making (SNLSDM) · Findings and contributions · Limitations · Application scenarios

9.1 Findings and Conclusions

SNLSDM techniques are capable of modeling most current real-world decision-making scenarios and have therefore become an important research topic in economics, management, computer science, and other fields. This book focuses on three critical challenges of SNLSDM problems: structure-heterogeneous information fusion, clustering analysis with multiple measurement attributes, and a consensus-reaching process that considers social relationships.

First of all, we propose the definition of SNLSDM, describe its main characteristics, and present the problem configuration. SNLSDM integrates the main features of LSDM and SNGDM. We argue that social relationships between DMs are as important as opinion/preference similarity in clustering and the CRP; this statement is the hypothetical basis of the book's proposals.

To address the aggregation and distance measurement of structure-heterogeneous evaluation information, this book proposes a fusion method based on trust and behavior analysis. First, we analyze DMs' selection behaviors regarding alternatives and attributes and classify them into three categories: empty, positive, and negative. Then, the distance measures between heterogeneous information belonging to different categories of selection behaviors are defined, guided by the social relationships among DMs. Eventually, a complementary method is developed to fill in the unassigned positions.

Clustering is an effective tool to reduce the dimensionality of a large group of decision-makers and address the scalability challenges of SNLSDM. We provide two novel

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Z. Du and S. Yu, Social Network Large-Scale Decision-Making, Uncertainty and Operations Research, https://doi.org/10.1007/978-981-99-7794-9_9




clustering methods, both of which emphasize that opinion similarity and social relationships are important for clustering analysis.

• A trust Cop-Kmeans algorithm is proposed, which is a semi-supervised clustering technique utilizing prior knowledge of the social relationships among DMs. Opinion/preference similarity is the primary measurement attribute, while social relationships act as a constraint that determines whether any two DMs can eventually be assigned to the same cluster.
• A compatibility distance oriented off-center clustering method is designed that integrates opinion distance and social relationships. Its most important characteristic is that it combines the two measurement attributes and specifies upper and lower limits on the number of DMs in a cluster.

In addition, we put forward a weight-determining method for clusters that integrates three indices: the size, cohesion, and overall trust degree of a cluster.

The third topic is consensus building in SNLSDM. This book explores the impact of trust loss originating from social relationships on the CRP and proposes two consensus-reaching models.

• An improved MCC model that takes voluntary trust loss into account is proposed. The model builds on the traditional MCC architecture, quantifies the feedback coefficient, and visualizes the adjustment direction by introducing explicit adjustment paths. In this way, opinion/preference over-adjustment can be avoided, and a high-trust cluster can voluntarily sacrifice some of its trust in exchange for a reduction in its adjustment cost.
• A punishment-driven consensus-reaching model for SNLSDM problems is developed. According to the differences in consensus degrees and trust degrees, four types of consensus scenarios are distinguished and the associated adjustment strategies are generated. A cluster in the low-consensus-and-high-trust case can voluntarily sacrifice some of its trust degree in exchange for a lower adjustment cost.

It is also worthwhile to describe the application of the proposed methods and models to a variety of real-world decision scenarios, such as coal mine safety assessment, social capital selection, and car-sharing provider selection.
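The trust-constraint idea behind the trust Cop-Kmeans method summarized above can be sketched as follows. This is only the membership check, not the full algorithm, and the function name, threshold, and trust values are hypothetical, not taken from the book.

```python
# Illustrative sketch of the trust constraint in semi-supervised
# (Cop-Kmeans-style) clustering: a DM may join a cluster only if its
# trust with every current member reaches a threshold. Opinion
# similarity would drive the actual assignment; trust only constrains it.

def can_join(dm, cluster, trust, threshold=0.5):
    """Missing trust pairs are treated as zero trust."""
    return all(trust.get((dm, member), 0.0) >= threshold
               for member in cluster)

trust = {("d1", "d2"): 0.8, ("d1", "d3"): 0.3}
print(can_join("d1", ["d2"], trust))        # high mutual trust: allowed
print(can_join("d1", ["d2", "d3"], trust))  # low trust with d3: blocked
```

In a full implementation, a DM whose nearest cluster fails this check would be assigned to the nearest cluster that passes it, mirroring how cannot-link constraints are handled in Cop-Kmeans.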

9.2 Future Research Directions

As an extension of LSDM to social networks, SNLSDM is a novel and practical research topic. This book focuses on three basic issues in SNLSDM, namely structure-heterogeneous information fusion, clustering analysis, and consensus building, and provides corresponding solutions. We consider the main research paths and directions for SNLSDM in the future to be as follows.



• Seeking a third type of measurement attribute for clustering analysis in SNLSDM and proposing clustering algorithms with multiple measurement attributes. Opinion similarity and social relationships are considered two important measurement attributes for clustering analysis in SNLSDM. In fact, there may be a third type of measurement attribute in an SNLSDM problem, such as affiliation between decision-makers (which may refer to one decision-maker deferring to another). Therefore, it is urgent and practically valuable to propose clustering algorithms that incorporate multiple measurement attributes.
• Focusing on the evaluation of trust loss performance. Consensus costs in the CRP include individual adjustment costs, group adjustment costs, and individual trust loss costs. The research results show that individual trust loss reduces the corresponding individual adjustment cost but is likely to increase the group adjustment cost. Therefore, an evaluation criterion should be designed to measure trust loss performance.
• Testing the efficiency and effectiveness of the proposed clustering algorithms in real decision scenarios involving large amounts of decision data. The numerical experiments and case studies in this book involve twenty decision-makers, whereas real-world decision scenarios may contain hundreds or even thousands. In such scenarios, are the proposed clustering algorithms still effective, and how efficiently do they process the data? Continuous improvement of the clustering algorithms in dealing with real data is also an important research direction.
In addition, we would like to expand the applications of the proposed clustering algorithms and consensus models to areas such as route selection in tourism management, the identification and early warning of financial fraud, public-participation decision-making represented by infrastructure construction, and physician-patient group detection in online healthcare communities.