Decision Support System: Tools and Techniques (ISBN 9781032309927, 9781032310220, 9781003307655)



English, 394 pages, 2023


Table of contents:
Cover
Half Title
Title Page
Copyright Page
Dedication
Table of Contents
About the Author
Preface
Chapter 1 Introduction
1.1 Introduction
1.2 Classification of Decision Support System
1.3 Decision Support Tools
1.4 Overall Method of Decision-Making
1.5 Brief Introduction to Each Chapter in This Book
Chapter 2 Decision Tree
2.1 Basic Concept
2.2 Algorithms for Construction of Decision Tree
2.2.1 ID3
2.2.1.1 Temperature
2.2.1.2 Humidity
2.2.1.3 Distance
2.2.1.4 Expense
2.2.1.5 Outlook
2.2.1.6 Temperature
2.2.1.7 Humidity
2.2.1.8 Distance
2.2.1.9 Expense
2.2.1.10 Outlook
2.2.2 C4.5
2.3 Time Complexity of Decision Tree
2.4 Various Applications of Decision Tree
2.5 Conclusion
Reference
Chapter 3 Decision Table
3.1 Basic Concept
3.1.1 Limited Entry Decision Table
3.1.2 Extended Entry Decision Table
3.1.3 Mixed Entry Decision Table
3.2 Approaches to Handle Inconsistency for Decision Tables
3.3 Decision Table Languages
3.3.1 Base Language
3.3.2 Rule Selection
3.3.3 Outer Language
3.4 Different Modifications of Decision Table and Latest Trend
3.5 Applications of Different Techniques on Decision Tables
3.6 Conclusion
References
Chapter 4 Predicate Logic
4.1 Introduction
4.2 Latest Research Studies on Predicate and Propositional Logic
4.3 Conclusion
References
Chapter 5 Fuzzy Theory and Fuzzy Logic
5.1 Basic Concepts
5.2 Fuzzification and Defuzzification
5.3 Some Advanced Fuzzy Sets
5.4 Conclusion
References
Chapter 6 Network Tools
6.1 Basic Concepts
6.2 Gantt Chart
6.3 Milestone Chart
6.4 Graphical Evaluation and Review Technique
6.5 Modifications of Traditional Tools
6.6 Conclusion
References
Chapter 7 Petri Net
7.1 Introduction
7.2 Different Types of Petri Nets
7.2.1 Autonomous Petri Net
7.2.2 State Graph
7.2.3 Event Graph
7.2.4 Conflict-Free Petri Net
7.2.5 Free-Choice Petri Net
7.2.6 Simple Petri Net
7.2.7 Pure Petri Net
7.2.8 Generalized Petri Net
7.2.9 Capacitated Petri Nets
7.2.10 Bounded Petri Net
7.2.11 Safe Petri Net
7.2.12 Colored Petri Net
7.2.13 Deadlock
7.3 Continuous and Hybrid Petri Nets
7.4 Basic Modeling Construct of Petri Net
7.5 Modifications of Different Types of Petri Nets and Latest Research Trends
7.6 Conclusion
References
Chapter 8 Markov Chain
8.1 Introduction
8.2 Transition Probability
8.2.1 Calculation of Transition Probability from the Current State
8.2.2 Calculation of Transition Probability from the Current State and the Previous State
8.2.3 Calculation of Multi-Step Transition Probability
8.3 Classification of Markov Chain
8.4 Some Other Miscellaneous Aspects
8.4.1 Canonical Form of Transition Matrix
8.4.2 Steady-State Probabilities for a Regular Markov Chain
8.5 Variations and Modifications of Markov Chains
8.6 Markov Chain Monte Carlo
8.6.1 Gibb's Sampling
8.7 Applications of Markov Chain
8.8 Conclusion
Reference
Chapter 9 Case-Based Reasoning
9.1 Introduction
9.2 Basic Elements and Basic Method
9.2.1 Similarity and Retrieval
9.2.2 CBR Tools
9.2.3 Case Presentation
9.3 Advanced Methods of CBR
9.4 Applications of Case-Based Reasoning and Latest Research
9.5 Conclusion
References
Chapter 10 Multi-Criteria Decision Analysis Techniques
10.1 Basic Concept
10.2 Benchmark MCDA Techniques
10.2.1 TOPSIS
10.2.2 PROMETHEE
10.2.3 AHP
10.2.4 ANP
10.2.5 MAUT
10.2.6 MACBETH
10.2.7 MOORA
10.2.8 COPRAS
10.2.9 WASPAS
10.2.10 MABAC
10.3 Comparison Among MCDA Techniques
10.3.1 Theoretical Comparison
10.3.2 Rank Correlation Methods
10.3.3 A Newly Proposed Method
10.4 Modification of MCDA Techniques
10.5 Conclusion
References
Chapter 11 Some Other Tools
11.1 Introduction
11.2 Linear Programming
11.2.1 Simplex Method
11.2.1.1 Iteration 1
11.2.1.2 Iteration 2
11.2.1.3 Iteration 3
11.2.2 Two-Phase Method
11.2.2.1 Phase – I
11.2.2.2 First Iteration
11.2.2.3 Iteration 2
11.2.3 Big-M Method
11.2.3.1 First Iteration
11.2.3.2 Second Iteration
11.2.3.3 LPP with Unbounded Solution
11.2.4 Dual Simplex Method
11.2.5 Linear Fractional Programming
11.3 Simulation
11.3.1 Linear Congruential Generator (LCG)
11.3.2 Multiplicative Congruential Generator (MCG)
11.4 Big Data Analytics
11.5 Internet of Things
11.6 Conclusion
References
Chapter 12 Spatial Decision Support System
12.1 Introduction
12.2 Components of SDSS
12.3 SDSS Software
12.4 GRASS GIS Software
12.5 Conclusion
References
Chapter 13 Data Warehousing and Data Mining
13.1 Introduction
13.2 Data Warehouse
13.3 Data Mining
13.3.1 Process of Data Mining
13.3.2 Predictive Modeling
13.3.2.1 Linear Regression
13.3.3 Multiple Linear Regression
13.3.3.1 Assumptions for MLR
13.3.3.2 Linearity
13.3.3.3 Homoscedasticity
13.3.3.4 Uncorrelated Error Terms
13.3.3.5 Estimation of Model Parameters β
13.3.4 Quadratic Trend
13.3.4.1 Logarithmic Trend
13.3.4.2 Association Rules
13.3.4.3 Basic Concept
13.3.4.4 Support and Confidence
13.3.4.5 Association Rule Mining
13.3.4.6 Lift Measure
13.3.4.7 Sequence Rules
13.3.4.8 Segmentation
13.3.4.9 K-Means Clustering
13.3.4.10 Self-Organizing Maps
13.3.4.11 Database Segmentation
13.3.4.12 Clustering for Database Segmentation
13.3.4.13 Cluster Analysis: A Process Model (Figure 13.18)
13.4 Conclusion
References
Chapter 14 Intelligent Decision Support System
14.1 Introduction
14.2 Enterprise Information System
14.3 Knowledge Management
14.3.1 Concept Map
14.3.2 Semantic Network
14.4 Artificial Intelligence
14.4.1 Propositional Logic
14.4.2 Nature-Based Optimization Techniques
14.4.2.1 Genetic Algorithm
14.4.2.2 Particle Swarm Optimization
14.4.2.3 Ant Colony Optimization (ACO)
14.4.2.4 Artificial Immune Algorithm (AIA)
14.4.2.5 Differential Evolution (DE)
14.4.2.6 Simulated Annealing
14.4.2.7 Tabu Search
14.4.2.8 Gene Expression Programming
14.4.2.9 Frog Leaping Algorithm
14.4.2.10 Honey Bee Mating Algorithm (HBMA)
14.4.2.11 Bacteria Foraging Algorithm (BFA)
14.4.2.12 Cultural Algorithm (CA)
14.4.2.13 Firefly Algorithm (FA)
14.4.2.14 Cuckoo Search (CS)
14.4.2.15 Gravitational Search Algorithm (GSA)
14.4.2.16 Charged System Search
14.4.2.17 Intelligent Water Drops Algorithm
14.4.2.18 Bat Algorithm (BA)
14.4.2.19 Black Hole Algorithm (BHA)
14.4.2.20 Black Widow Optimization (BWO) Algorithm
14.4.2.21 Butterfly Optimization Algorithm (BOA)
14.4.2.22 Crow Search Algorithm (CSA)
14.4.2.23 Deer Hunting Optimization (DHO) Algorithm
14.4.2.24 Dragonfly Algorithm (DA)
14.4.2.25 Emperor Penguin Optimization (EPO)
14.4.2.26 Flower Pollination Algorithm (FPA)
14.4.2.27 Glowworm Swarm Based Optimization
14.4.2.28 Grasshopper Optimization Algorithm (GOA)
14.4.2.29 Grey Wolf Optimization (GWO)
14.4.2.30 Krill Herd Algorithm (KHA)
14.4.2.31 Lion Optimization Algorithm
14.4.2.32 Migratory Birds Optimization (MBO)
14.4.2.33 Moth-Flame Optimization Algorithm
14.4.2.34 Mouth-Brooding Fish Algorithm
14.4.2.35 Polar Bear Optimization Algorithm
14.4.2.36 Whale Optimization Algorithm (WOA)
14.4.2.37 Sea Lion Optimization Algorithm (SLOA)
14.4.2.38 Tarantula Mating-Based Strategy (TMS)
14.4.3 Some Latest Tools for Recent Applications of Artificial Intelligence
14.4.3.1 Cloud Computing
14.4.3.2 Big Data
14.5 Conclusion
References
Chapter 15 DSS Software
15.1 Introduction
15.2 Software Overview for DT
15.2.1 KNIME
15.3 Software Overview for Networking Techniques
15.4 Software Overview for Markov Process and Markov Chain
15.5 Software Overview for Regression
15.6 Software Overview for LP
15.7 Software Overview for Simulation
15.7.1 Create
15.7.2 Process
15.7.3 Decide
15.7.4 Dispose
15.7.5 Assign
15.8 Software Overview for Data Warehouse
15.9 Software Overview for Other Common Software
15.9.1 Matlab
15.9.2 C#.net
15.10 Conclusion
Reference
Chapter 16 Future of Decision Support System
16.1 Introduction
16.2 Conclusion
References
Index

Decision Support System

This book presents different tools and techniques used for Decision Support Systems (DSS), including the decision tree and decision table and their modifications, multi-criteria decision analysis techniques, network tools of decision support, and various case-based reasoning methods, supported by examples and case studies. The latest developments for each of the techniques are discussed separately, and possible future research areas, such as intelligent and spatial DSS, are duly identified.

Features:
• Discusses all the major tools and techniques for Decision Support Systems, supported by examples.
• Explains techniques considering their deterministic and stochastic aspects.
• Covers network tools including GERT and Q-GERT.
• Explains the application of both probability and fuzzy orientation in the pertinent techniques.
• Includes a number of relevant case studies along with a dedicated chapter on software.

This book is aimed at researchers and graduate students in information systems, data analytics, and operations research, including management and computer science areas.

Decision Support System: Tools and Techniques

Susmita Bandyopadhyay

First edition published 2023
by CRC Press, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
and by CRC Press, 4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

CRC Press is an imprint of Taylor & Francis Group, LLC

© 2023 Susmita Bandyopadhyay

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected]

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.

ISBN: 978-1-032-30992-7 (hbk)
ISBN: 978-1-032-31022-0 (pbk)
ISBN: 978-1-003-30765-5 (ebk)

DOI: 10.1201/9781003307655

Typeset in Times by codeMantra

This book is dedicated to my mother and my teacher


About the Author

Susmita Bandyopadhyay is currently associated with the Department of Business Administration of The University of Burdwan, West Bengal, India. She holds a Ph.D. in Engineering, an M.Tech. in Industrial Engineering and Management, an MBA in Systems Management, an MCA, and PG diplomas. She is the author of three other books, published by CRC Press and IGI Global. She has also authored three book chapters published in international book series, 23 research papers in international and a few national journals of repute, and 26 research papers published in refereed international conference proceedings. She is a reviewer for a total of 27 international journals from publishers such as Elsevier and Springer, and an editorial assistant of an international journal. She has won best paper awards at a few refereed international conferences. She is a life member of the Operational Research Society of India and a senior member of IEEE.


Preface

A Decision Support System (DSS) is intended to assist managers in decision-making, and it is applicable to almost all levels of management. In practice, DSS is applied through different types of tools rather than by taking decisions manually or in any other way. Practical problems may be simple, complex, or both, and without proper tools and techniques it is very difficult to take decisions in complex situations. Therein lies the relevance of emphasizing the tools and techniques for DSS in particular. This book aims at describing different tools and techniques in order to assist decision-making at all levels of management.

There are various types of problems and, therefore, a variety of decisions to be taken, for example, allocation problems, classification problems, prediction problems, problems with huge amounts of data (especially for the present and future), network-related problems, complex problems with uncertainty, and so on. These problems can be solved using different techniques. For example, for allocation problems, Linear Programming (LP) algorithms can be applied; for prediction problems, regression of different types, depending on the problem, can be applied; for classification problems, decision trees or some other techniques are applicable; problems with huge amounts of data can be handled by data warehousing techniques; and for network-related problems, networking methods or methods like the Petri net can be applied. Thus, different kinds of problems demand different kinds of analytical techniques.

This book focuses on some very essential tools and techniques that can be applied in order to assist the decision-making process. Some of the techniques are illustrated briefly, and some others are illustrated in detail. Each technique is explained with a sufficient example or case study. Readers are expected to gain a good understanding of the tools and techniques presented in this book.


Chapter 1  Introduction

1.1  INTRODUCTION

A system is a collection of inter-related components and may be composed of different subsystems. If these inter-related components work together to collect, process, store, and use information to support decision-making, coordination, control, analysis, and visualization, then it is called an information system. The information system is the focus of the decision support system (DSS). Decision support is a necessity for the following reasons:

• Globalization: Managers need to take decisions in a global marketplace because of globalization. Competition is the main issue that makes decision-making difficult, but decisions must be made in order to gain competitive advantage.
• Transformation: Knowledge and information are strengths in today's world, and productivity depends on them. Competition is also based on time, since a product that is in demand now may not remain in demand after some time. Therefore, organizations need to transform their processes and the components of their systems in order to gain competitive advantage and survive.

Application of DSS, therefore, depends on the type of organization and on the system or problem for which the decision is to be taken. The following section, therefore, focuses on the classification of DSS.

1.2  CLASSIFICATION OF DECISION SUPPORT SYSTEM

DSS can be classified based on what drives the DSS. On this basis, DSS can be classified as follows.

• Communication-Driven DSS: This type of DSS is based on communication among the decision-makers through different meetings. The most common technologies to assist this kind of DSS are the web and client–server technology. Examples include meetings through e-conferencing, chat messaging, video chatting, and online collaboration.
• Data-Driven DSS: Data-driven DSS is concerned with a database or data warehouse. It can pose specific queries to the database or data warehouse in order to receive a response. In many cases, it is deployed through a centralized system, a client–server link, or the web.
• Document-Driven DSS: This is a very common form of DSS. It may be applied to search the web for specific information or a specific page and to find documents matching a specific search key, item, or text. It can also be set up using web or client–server technology.
• Knowledge-Driven DSS: This type of DSS, or "knowledgebase", is used especially by users inside an organization, but it can also include other users (for example, customers) connected to the organization in some way. It is basically used to provide advice to managers or to help choose products or services.
• Model-Driven DSS: This type of DSS is based on some developed model. These are complex systems that are especially used for analyzing decisions or choosing among different alternatives.


These models may be used by managers, staff members, the business itself, or people who interact with the business.

However, DSS systems can be categorized in different ways. Different authors suggest different classifications, such as:

• Classification of DSS systems based on user level,
• Classification of DSS systems based on conceptual level,
• Classification of DSS systems based on system level,
• Classification of DSS systems based on way of realization,
• Classification of DSS systems based on concrete realization.

According to the way of realization, DSS systems split into the following:

• EDSS (Expert Decision Support System),
• KB-DSS (Knowledge-Based DSS) or IDSS (Intelligent DSS),
• GDSS (Group Decision Support System),
• MADSS (Multi-Attribute Decision Support System),
• MCDSS (Multi-Criteria Decision Support System),
• MDSS (Multi-Participant Decision Support System),
• NSS (Negotiation Support System),
• ODSS (Organizational Decision Support System),
• PDSS (Planning Decision Support System),
• TDSS (Team Decision Support System),
• WB-DSS (Web-Based Decision Support System).

Concretely realized DSS systems, according to the kind of services provided for a wider geographical area, include, for example:

• CDSS (Consumer Decision Support System),
• EDSS (Environmental Decision Support System),
• GSDSS (Group Spatial Decision Support System),
• IEDSS (Intelligent Environmental Decision Support System),
• ADSS (Land Allocation Decision Support System),
• MC-DSS (Multi-Criteria Decision Support System),
• SDSS (Spatial Decision Support System),
• TDSS (Tactical Decision Support System),
• WebSDDS (Web-Based Spatial Decision Support System), and so on.

Having discussed different types of decision support systems, the following section discusses different types of decision support tools.

1.3  DECISION SUPPORT TOOLS

In reality, if a tool supports decision-making, then that tool proves to be a decision support tool. Therefore, a large variety of tools can be regarded as decision support tools. Some examples of decision support tools that have not been discussed in this book are enlisted below:

• Brainstorming
• Customer surveys and interviews
• Quality Function Deployment (QFD)

• Statistical Process Control (SPC)
• Porter's Generic Competitive Strategy
• SWOT analysis
• Capability and maturity analysis
• Boston Plot
• Balanced Scorecard
• Robustness Analysis
• Cost-Benefit Analysis
• Gap analysis
• Scatter plots
• Affinity charting
• Total Quality Management tools
• Pareto Principle
• Histogram
• Bar charts

And some more. However, this book presents a significant number of benchmark tools and techniques that are regarded as decision support tools. These are enlisted below.

• Decision Tree and its varieties
• Decision Table
• Predicate Logic
• Fuzzy Logic
• Networking-related tools and techniques
• Petri Net
• Markov Chains
• Case-Based Reasoning
• Multi-Criteria Decision Analysis techniques
• Spatial Decision Support System
• Data warehousing
• Data mining
• Linear Programming
• Simulation
• Big Data Analytics
• Internet of Things

1.4  OVERALL METHOD OF DECISION-MAKING

The overall decision-making process is depicted in Figure 1.1. The method shown is applicable to any project at hand. Figure 1.1 does not show any feedback, although feedback is present from every step to any of the previous steps; these feedback loops are omitted to keep the figure uncluttered.

Determine criteria

Determine requirements

Select a method

FIGURE 1.1  Method of decision-making.

Establish Objectives

Evaluate alternatives

Generate alternatives

Validate solutions

Implement the solution

4

Decision Support System

are present from every step to any of the previous steps. These feedbacks are not shown in order to avoid clumsiness. The first step is to define the problem for which the decision is to be taken. Next step determines the requirements for solving the problem and in order to take decision. The requirements may clarify and/or modify the objective, and this is the third step. Then, a number of alternate ways to solve the problem is to be generated (fourth step). The criteria based on which the decision is to be taken for the alternative are also defined next (fifth step). A method is then selected so that each of the alternatives can be analyzed and evaluated (sixth step) followed by the evaluation of each objective (seventh step). An alternative is then chosen and is verified so as to determine whether the choice of the alternative is right (eighth step). Finally, the solution is implemented. Although these are the simple steps, but at any step, problems can be encountered because of errors in any of the previous steps. If that is the issue, then the step in which the defect is detected is executed again. Decision-making in the real world involves uncertainty in different variables since practical problems are full of uncertainty. These uncertainties are handled commonly, by means of either probabilities or Fuzzy Theory. There are other methods as well such as Rough Theory, Possibilistic Theory, and Fuzzy Probabilistic Theory. These theories are not being discussed in details since each of these techniques needs separate larger span for expressing themselves, although a separate chapter on Fuzzy Theory has been dedicated in this book since Fuzzy Theory is the most frequently practiced and applied technique to deal with uncertainty, as evident from the existing literature. The following section provides a brief introduction to each of the chapters in this book.

1.5  BRIEF INTRODUCTION TO EACH CHAPTER IN THIS BOOK Chapter 2 discusses various aspects of decision tree as decision tree is a very common decision support tool. The basic decision tree is defined through some examples. In addition to the ordinary decision tree, different algorithms have been mentioned that can be applied to draw decision tree from practical set of data in which one of the variables is regarded as target or output variable. Among these algorithms, ID3 and C4.5 have been depicted with suitable numerical examples and case. Chapter 3 introduces the idea of decision table. In addition to the description of the basic structure of decision table, different types of decision tables have been defined with proper example for each. Next, the approaches to handle inconsistency in decision table have been discussed followed by different decision table languages. Ultimately, the latest research trend for decision table has been presented based on the existing literature. Chapter 4 presents another tool for decision support system – Predicate Logic. Brief discussion on the various aspects of Predicate Logic has been discussed in this chapter. Chapter 5 is a very brief introduction to Fuzzy concept. Fuzzy concept could be elaborated much more but the chapter has not been extended much since the purpose of this chapter is to present a glimpse of Fuzzy Theory to the readers. Thus, only the basic concepts and Fuzzificationdefuzzification have been discussed in this chapter. Chapter 6 focuses on network tools. In this chapter, Program Evaluation and Review Technique (PERT) and Critical Path Method (CPM) have been discussed briefly numerical examples. In addition to PERT/CPM, Gantt chart, Milestone chart, Graphical Evaluation and Review Technique (GERT), and its variations have been discussed in details with sufficient practical numerical examples and cases. Chapter 7 describes the concept of Petri net. Although Petri net is not widely applied but Petri net is still a very effective tool for network simulation, as evident from the existing literature. In addition to the basic introduction to Petri net, different types of Petri nets have also been discussed. Continuous and hybrid Petri nets have also been discussed in this chapter. Additionally, basic programming and modeling constructs or Petri net have been described. Each of the types and concepts of Petri net has been explained with sufficient example. Chapter 8 explains the concept of Markov Chain (MC). After simple introduction to MC, transition probability and different methods for calculating transition probability have been

Introduction

5

presented. Then, various classifications of MC have been introduced with examples followed by some other aspects. Chapter 9 discusses Case-Based Reasoning (CBR). After an introduction and historical background, the basic elements and basic method of CBR are discussed. At last, various advanced methods for CBR have been elaborated. Chapter 10 introduces different Multi-Criteria Decision Analysis (MCDA) techniques. Each method has been explained based on a case study as presented at the beginning of the chapter. Different methods for comparison among these techniques have also been presented based on the existing literature. Chapter 11 presents some other techniques for DSS, such as Linear Programming, Simulation, Big Data Analytics, and Internet of Things since these are very important tools and techniques based on the latest trends in the business world. Under Linear Programming, the methods depicted are Simplex method, Two-Phase method, Big-M method, some exceptional conditions for Linear Programming, Dual Simplex method, and Linear Fractional Programming. Chapter 12 is a chapter on SDSS. After the basic introduction, this chapter introduces the components of SDSS, different SDSS software, and GRASS GIS software. Chapter 13 briefs the concept of data warehousing and data mining. Different types of data warehouses have been presented followed by different methods under data mining. Chapter 14 is an elaborate chapter on Intelligent Decision Support System (IDSS). Although all the aspects of IDSS could not be discussed, but many effective and practiced methods are discussed under the heading of Enterprise Information System, Knowledge Management, and Artificial Intelligence. Chapter 15 introduces different software. Some of the software has been described in brief with a significant number of required screenshots. Chapter 16 is a very brief chapter that shows the probable future of decision support system.

Chapter 2  Decision Tree

2.1  BASIC CONCEPT

The term "decision tree" derives from the fact that it is a tree with branches representing various decisions to be taken along with the respective payoffs or expected monetary values (EMV). Based on either the EMVs or the costs of the branches, the branch with the highest EMV or the lowest cost is chosen as the optimum decision branch. This indicates that a decision tree is basically a graphical representation of various alternatives (branches), events, and decisions (represented by the leaves of the tree). Because of its visual representation, a medium-complex situation can be visualized and decision-making becomes easier. The various symbols used in decision trees are given in Figure 2.1. The square symbol represents decision points, and the circle symbol represents events, which can be seen as a representation of uncertainty since any of the events may happen. For example, if a student decides to appear for a particular competitive examination (decision point), then the result of taking the examination may be either pass or fail; the result is uncertain and is represented by an event. The branches of a decision tree represent various alternatives. For example, the student can decide to take any of three selected competitive examinations. These three competitive examinations represent three alternatives, and therefore three branches of a decision tree can be drawn. The general structure of a decision tree is provided in Figure 2.2. The basic characteristics of a decision tree can be delineated through the following points.

1. Decision tree is a graphical tool.
2. Decision tree is also an optimization tool.
3. Decision points in a decision tree are represented by squares, and events are represented by circles.
4. A decision tree can be formed by applying various induction algorithms to a dataset. Each algorithm has some stopping criterion.
5. Decision tree can be used for classification purposes.

If the probabilities of the various events are known, then the probabilities can be written alongside the corresponding events. The various advantages of decision trees are delineated through the following points; the disadvantages will be enlisted at the end of the next section.

FIGURE 2.1  Various symbols as used in decision tree.


FIGURE 2.2  General structure of a decision tree.

Some examples can clarify the construction of a decision tree. Two separate examples of decision tree construction are provided below.

1. Both numeric and nominal data can be handled by decision trees.
2. Datasets with errors can also be handled by decision trees.
3. Datasets with missing values can also be handled by decision trees.
4. Decision trees are especially applicable when the cost of classification is very high.

Example 1
A businessman wants to extend his business and wants to open a showroom at any one of three places – in a shopping mall, in a famous market, or in a highly populated locality. The target is to reach as many customers as possible; however, the cost incurred is also a major factor to consider. If the showroom is opened in a mall, the number of estimated potential customers reached will be approximately 20,000, but the investment will be approximately 80 lakhs. For a showroom in a popular market, the expected number of customers reached will be approximately 50,000 and the cost will be around 40 lakhs. For a separate showroom in a populated locality, the expected number of customers reached will be approximately 5000 and the cost is 36 lakhs. The chances of success for these three options are 60% for the shopping complex, 75% for the market, and 40% for the populated locality. If the venture fails, the expected customers reached will be 1000, 10,000, and 500 for the shopping complex, market, and populated locality, respectively. Draw a decision tree for this scenario.

Answer: The respective decision tree is given in Figure 2.3. The expected benefit obtained can be calculated either in terms of the number of customers or in terms of cost.


FIGURE 2.3  Decision tree for Example 1.

For the number of customers reached, the following calculations apply. They indicate that the highest expected number of customers reached is for 'market'; however, a calculation in terms of cost may indicate a different alternative as the best one.

For 'shopping complex': expected number of customers reached = 20000 × 60% + 1000 × 40% = 12,400
For 'market': expected number of customers reached = 50000 × 75% + 10000 × 25% = 40,000
For 'populated locality': expected number of customers reached = 5000 × 40% + 500 × 60% = 2,300

However, decision trees can also be constructed from real-life datasets, as shown in the next section.
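A quick sketch of the calculation above, written in Python, is given below. It is not part of the original example; the dictionary keys and variable names are illustrative only, and the expected value is computed exactly as in the three expressions shown.

# Expected customers = success% x customers on success + failure% x customers on failure
options = {
    "shopping complex":   {"p_success": 0.60, "success": 20000, "failure": 1000},
    "market":             {"p_success": 0.75, "success": 50000, "failure": 10000},
    "populated locality": {"p_success": 0.40, "success": 5000,  "failure": 500},
}

def expected_customers(o):
    return o["p_success"] * o["success"] + (1 - o["p_success"]) * o["failure"]

for name, o in options.items():
    print(f"{name}: expected customers = {expected_customers(o):.0f}")
# shopping complex: 12400, market: 40000, populated locality: 2300

best = max(options, key=lambda n: expected_customers(options[n]))
print("Best alternative by expected customers:", best)   # market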

2.2  ALGORITHMS FOR CONSTRUCTION OF DECISION TREE
There are many algorithms that can be used to construct decision trees; these are known as induction algorithms. A significant number of induction algorithms can be found in the existing literature. Some of them are:

• ID3
• C4.5
• CART
• CHAID
• QUEST
• CAL5
• FACT
• LMDT
• PUBLIC
• MARS


These algorithms apply various splitting criteria by which a particular attribute is split over its set of values, and they terminate by applying different stopping or termination criteria. Some of the splitting criteria are listed below:

1. Impurity-based criteria
2. Information gain
3. Gini index
4. Likelihood-ratio chi-squared statistics
5. DKM criteria
6. Normalized impurity-based criteria
7. Gain ratio
8. Distance measure
9. Binary criterion
10. Twoing criterion
11. Orthogonal criterion
12. Kolmogorov–Smirnov criterion
13. AUC splitting criteria

The following part of this section explains the application of two famous induction algorithms, ID3 and C4.5. Consider the following case study. A local travel agency plans some local travelling packages for reviving its business after a major economic setback. The agency identifies a total of 12 different local travelling spots to visit during different months of the year. The dataset shows that the attribute 'Journey' depends on the other five attributes – Temperature, Humidity, Distance, Expense, and Outlook. The attribute 'Journey' is the dependent attribute, and thus the entropy, or information content, is calculated for this dependent attribute first; the global entropy is always based on the dependent attribute. The total number of samples observed in the dataset is D = 12. The required dataset is given in Table 2.1. All of these attributes are nominal, although the construction algorithms are applicable to both nominal and numeric attributes. The following two subsections apply the two algorithms, ID3 and C4.5, to the dataset provided in Table 2.1. Both are based on the information content (entropy) in the data: ID3 constructs the decision tree based on the information gain for each of the attributes, whereas C4.5 constructs the decision tree based on the gain ratio.

TABLE 2.1
Dataset for the Travel Agency

Month  Temperature  Humidity   Distance   Expense   Outlook    Journey
1      Cold         Dry        Near       Low       Sunny      Yes
2      Mild         Dry        Near       Low       Sunny      Yes
3      Mild         Moderate   Near       Low       Sunny      Yes
4      Hot          Moderate   Moderate   Low       Sunny      Yes
5      Hot          Humid      Moderate   High      Sunny      No
6      Hot          Humid      Far        Low       Rainy      No
7      Mild         Humid      Moderate   Low       Rainy      No
8      Mild         Humid      Moderate   Low       Rainy      No
9      Mild         Moderate   Far        High      Overcast   No
10     Mild         Moderate   Far        Low       Sunny      Yes
11     Cold         Dry        Far        Low       Overcast   Yes
12     Cold         Dry        Far        Moderate  Sunny      Yes


Both methods are depicted in the next two subsections through a numerical example based on the dataset provided in Table 2.1.

2.2.1  ID3
ID3, at first, calculates the entropy of the entire dataset. Then, the individual entropy for each of the attributes is calculated. The final gain of an attribute A is calculated by expression (2.1).

Gain(A) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)   (2.1)

Expression (2.1) is presented in a lucid style for easy readability, without using any mathematical symbol except the summation symbol. To calculate the entropy of the entire dataset, look at the values of the dependent attribute 'Journey':

Values(Journey) = Yes, No   (2.2)



Here, the number of times ‘Yes’ appears is 7, and the number of times ‘No’ appears is 5 [7+,5−]. The entropy of the entire dataset is:



Entropy(Dataset) = −Σ (proportion of each value) × log2(proportion of that value)   (2.3)
= −(7/12) log2(7/12) − (5/12) log2(5/12) = 0.9798   (2.4)
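A short Python sketch of this entropy calculation is given below; it is an illustration only (the function name is arbitrary) and reproduces the value in expression (2.4) up to rounding.

from math import log2

def entropy(counts):
    # Entropy of a class distribution given as a list of counts.
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# 'Journey' has 7 Yes and 5 No in Table 2.1:
print(round(entropy([7, 5]), 4))   # 0.9799, i.e. about 0.9798 as in (2.4)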

Next, the entropies for each of the attributes are calculated below.

2.2.1.1 Temperature
Values of attribute 'Temperature' are: Values(Temperature) = Cold, Mild, Hot

For value, ‘Cold’, the number of times ‘Cold’ appears is 3 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for Journey does not appear, that is, ‘No’ appears 0 times, denoted by [3+,0 −]. Thus, the entropy for ‘Cold’ is:

Entropy(Cold) = −(3/3) log2(3/3) − (0/3) log2(0/3) = 0   (2.5)

For value, ‘Mild’, the number of times ‘Mild’ appears is 6 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for ‘Journey’ appears 3 times, denoted by [3+,3−]. Thus, the entropy for ‘Mild’ is:

Entropy(Mild) = −(3/6) log2(3/6) − (3/6) log2(3/6) = 1   (2.6)

For value, ‘Hot’, the number of times ‘Hot’ appears is 3 times among which ‘Yes’ for ‘Journey’ appears 1 time and ‘No’ for ‘Journey’ appears 2 times, denoted by [1+,2−]. Thus, the entropy for ‘Hot’ is:


Entropy(Hot) = −(1/3) log2(1/3) − (2/3) log2(2/3) = 0.9149   (2.7)

Thus, the information gain for attribute 'Temperature' is:
Gain(Dataset, Temperature) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.9798 − (3/12) × 0 − (6/12) × 1 − (3/12) × 0.9149 = 0.2511   (2.8)

2.2.1.2 Humidity
Values of attribute 'Humidity' are: Values(Humidity) = Dry, Moderate, Humid

For value, ‘Dry’, the number of times ‘Dry’ appears is 4 times among which ‘Yes’ for ‘Journey’ appears 4 times and ‘No’ for Journey does not appear, that is, ‘No’ appears 0 times, denoted by [4 +,0 −]. Thus, the entropy for ‘Dry’ is:

Entropy(Dry) = −(4/4) log2(4/4) − (0/4) log2(0/4) = 0   (2.9)

For value, ‘Moderate’, the number of times ‘Moderate’ appears is 4 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for ‘Journey’ appears once, denoted by [3+,1−]. Thus, the entropy for ‘Moderate’ is:

Entropy(Moderate) = −(3/4) log2(3/4) − (1/4) log2(1/4) = 0.8113   (2.10)

For value, ‘Humid’, the number of times ‘Humid’ appears is 4 times among which ‘Yes’ for ‘Journey’ appears 0 time and ‘No’ for ‘Journey’ appears 4 times, denoted by [0 +,4 −]. Thus, the entropy for ‘Humid’ is:

Entropy(Humid) = −(0/4) log2(0/4) − (4/4) log2(4/4) = 0   (2.11)

Thus, the information gain for attribute 'Humidity' is:
Gain(Dataset, Humidity) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.9798 − (4/12) × 0 − (4/12) × 0.8113 − (4/12) × 0 = 0.7094   (2.12)

2.2.1.3 Distance
Values of attribute 'Distance' are: Values(Distance) = Near, Moderate, Far

For value, ‘Near’, the number of times ‘Near’ appears is 3 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for Journey does not appear, that is, ‘No’ appears 0 times, denoted by [3+,0 −]. Thus, the entropy for ‘Near’ is:

Entropy(Near) = −(3/3) log2(3/3) − (0/3) log2(0/3) = 0   (2.13)


For value, ‘Moderate’, the number of times ‘Moderate’ appears is 4 times among which ‘Yes’ for ‘Journey’ appears once and ‘No’ for ‘Journey’ appears 3 times, denoted by [1+,3−]. Thus, the entropy for ‘Moderate’ is:

Entropy(Moderate) = −(1/4) log2(1/4) − (3/4) log2(3/4) = 0.8113   (2.14)

For value, ‘Far’, the number of times ‘Far’ appears is 5 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for ‘Journey’ appears 2 times, denoted by [3+,2−]. Thus, the entropy for ‘Far’ is:

Entropy(Far) = −(3/5) log2(3/5) − (2/5) log2(2/5) = 0.971   (2.15)

Thus, the information gain for attribute 'Distance' is:
Gain(Dataset, Distance) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.9798 − (3/12) × 0 − (4/12) × 0.8113 − (5/12) × 0.971 = 0.3048   (2.16)

2.2.1.4 Expense
Values of attribute 'Expense' are: Values(Expense) = Low, High, Moderate

For value, ‘Low’, the number of times ‘Low’ appears is 9 times among which ‘Yes’ for ‘Journey’ appears 6 times and ‘No’ for Journey appears 3 times, denoted by [6+,3−]. Thus, the entropy for ‘Low’ is:

Entropy(Low) = −(6/9) log2(6/9) − (3/9) log2(3/9) = 0.9149   (2.17)

For value, ‘High’, the number of times ‘High’ appears is 2 times among which ‘Yes’ for ‘Journey’ does not appear and ‘No’ for ‘Journey’ appears 2 times, denoted by [0 +,2−]. Thus, the entropy for ‘High’ is:

Entropy(High) = −(0/2) log2(0/2) − (2/2) log2(2/2) = 0   (2.18)

For value, ‘Moderate’, the number of times ‘Moderate’ appears is 1 time among which ‘Yes’ for ‘Journey’ appears once and ‘No’ for ‘Journey’ does not appear, denoted by [1+,0 −]. Thus, the entropy for ‘Moderate’ is:

Entropy(Moderate) = −(1/1) log2(1/1) − (0/1) log2(0/1) = 0   (2.19)

Thus, the information gain for attribute 'Expense' is:
Gain(Dataset, Expense) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.9798 − (9/12) × 0.9149 − (2/12) × 0 − (1/12) × 0 = 0.2936   (2.20)


2.2.1.5 Outlook
Values of attribute 'Outlook' are: Values(Outlook) = Sunny, Rainy, Overcast

For value, ‘Sunny’, the number of times ‘Sunny’ appears is 7 times among which ‘Yes’ for ‘Journey’ appears 6 times and ‘No’ for Journey appears once, denoted by [6+,1−]. Thus, the entropy for ‘Sunny’ is:

Entropy(Sunny) = −(6/7) log2(6/7) − (1/7) log2(1/7) = 0.5845   (2.21)

For value, ‘Rainy’, the number of times ‘Rainy’ appears is 3 times among which ‘Yes’ for ‘Journey’ does not appear and ‘No’ for ‘Journey’ appears 3 times, denoted by [0 +,3−]. Thus, the entropy for ‘Rainy’ is:

Entropy(Rainy) = −(0/3) log2(0/3) − (3/3) log2(3/3) = 0   (2.22)

For value, ‘Overcast’, the number of times ‘Overcast’ appears is 2 times among which ‘Yes’ for ‘Journey’ appears once and ‘No’ for ‘Journey’ appears once, denoted by [1+,1−]. Thus, the entropy for ‘Overcast’ is:

Entropy(Overcast) = −(1/2) log2(1/2) − (1/2) log2(1/2) = 1   (2.23)

Thus, the information gain for attribute 'Outlook' is:
Gain(Dataset, Outlook) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.9798 − (7/12) × 0.5845 − (3/12) × 0 − (2/12) × 1 = 0.4721   (2.24)

In gist, the attributes and their respective gains are shown in Table 2.2. The maximum gain in Table 2.2 is 0.7094, for the attribute Humidity; thus, the root node will be 'Humidity'. The values of Humidity are Dry, Moderate, and Humid: 'Dry' appears 4 times [4+,0−], 'Moderate' appears 4 times [3+,1−], and 'Humid' appears 4 times [0+,4−]. The value of 'Journey' is 'Yes' for all 4 occurrences of 'Dry', whereas it is 'No' for all 4 occurrences of 'Humid'. Thus, the leaf nodes for the branches 'Dry' and 'Humid' will be 'Yes' and 'No', respectively. The resultant decision tree is shown in Figure 2.4, where the bracketed symbols represent the month numbers. Figure 2.4 shows that the decision tree is incomplete because of the unresolved branch for 'Moderate': for the 'Moderate' months M3, M4, M9, and M10, neither 'Yes' nor 'No' can be the leaf. For these months only, the data from Table 2.1 are reproduced in Table 2.3. To find the next member of this branch, the same procedure as before is repeated on the data in Table 2.3. Based on the values of 'Journey' [3+,1−] for these 4 months, the entropy of the dataset in Table 2.3 is:

Entropy(Dataset) = −(3/4) log2(3/4) − (1/4) log2(1/4) = 0.8113   (2.25)

Next, the entropies for each of the attributes are calculated below.


TABLE 2.2
Gains of Attributes

Attribute      Gain
Temperature    0.2511
Humidity       0.7094
Distance       0.3048
Expense        0.2936
Outlook        0.4721
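The gains in Table 2.2 can be recomputed with a small Python sketch such as the one below. This is not from the book; the data tuples simply transcribe Table 2.1, the function names are illustrative, and the printed values agree with Tables 2.2 and 2.4 up to the rounding used in the text.

from math import log2
from collections import Counter

# Table 2.1, one tuple per month: (Temperature, Humidity, Distance, Expense, Outlook, Journey)
DATA = [
    ("Cold", "Dry",      "Near",     "Low",      "Sunny",    "Yes"),
    ("Mild", "Dry",      "Near",     "Low",      "Sunny",    "Yes"),
    ("Mild", "Moderate", "Near",     "Low",      "Sunny",    "Yes"),
    ("Hot",  "Moderate", "Moderate", "Low",      "Sunny",    "Yes"),
    ("Hot",  "Humid",    "Moderate", "High",     "Sunny",    "No"),
    ("Hot",  "Humid",    "Far",      "Low",      "Rainy",    "No"),
    ("Mild", "Humid",    "Moderate", "Low",      "Rainy",    "No"),
    ("Mild", "Humid",    "Moderate", "Low",      "Rainy",    "No"),
    ("Mild", "Moderate", "Far",      "High",     "Overcast", "No"),
    ("Mild", "Moderate", "Far",      "Low",      "Sunny",    "Yes"),
    ("Cold", "Dry",      "Far",      "Low",      "Overcast", "Yes"),
    ("Cold", "Dry",      "Far",      "Moderate", "Sunny",    "Yes"),
]
ATTRS = ["Temperature", "Humidity", "Distance", "Expense", "Outlook"]  # column order

def entropy(rows):
    counts = Counter(r[-1] for r in rows)        # class attribute 'Journey'
    n = len(rows)
    return -sum(c / n * log2(c / n) for c in counts.values())

def gain(rows, attr_index):
    base, n = entropy(rows), len(rows)
    values = set(r[attr_index] for r in rows)
    return base - sum(
        len(sub) / n * entropy(sub)
        for v in values
        for sub in [[r for r in rows if r[attr_index] == v]]
    )

print({a: round(gain(DATA, i), 4) for i, a in enumerate(ATTRS)})
# Humidity has the largest gain (about 0.709), so it becomes the root, as in Table 2.2.

moderate = [r for r in DATA if r[1] == "Moderate"]   # the unresolved 'Moderate' branch
print({a: round(gain(moderate, i), 4) for i, a in enumerate(ATTRS)})
# Expense and Outlook tie at about 0.811, as in Table 2.4 later in this section.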

FIGURE 2.4  Incomplete decision tree.

TABLE 2.3
Dataset for 'Moderate' Value of 'Humidity'

Month  Temperature  Humidity   Distance   Expense  Outlook    Journey
3      Mild         Moderate   Near       Low      Sunny      Yes
4      Hot          Moderate   Moderate   Low      Sunny      Yes
9      Mild         Moderate   Far        High     Overcast   No
10     Mild         Moderate   Far        Low      Sunny      Yes

2.2.1.6 Temperature
Values of attribute 'Temperature' are: Values(Temperature) = Mild, Hot

For value, ‘Mild’, the number of times ‘Mild’ appears is 3 times among which ‘Yes’ for ‘Journey’ appears 2 times and ‘No’ for ‘Journey’ appears once, denoted by [2+,1−]. Thus, the entropy for ‘Mild’ is:

Entropy(Mild) = −(2/3) log2(2/3) − (1/3) log2(1/3) = 0.9149   (2.26)

For value, ‘Hot’, the number of times ‘Hot’ appears is once in which ‘Yes’ for ‘Journey’ appears once and ‘No’ for ‘Journey’ does not appear, denoted by [1+,0 −]. Thus, the entropy for ‘Hot’ is:


Entropy(Hot) = −(1/1) log2(1/1) − (0/1) log2(0/1) = 0   (2.27)

Thus, the information gain for attribute 'Temperature' is:
Gain(Dataset, Temperature) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.8113 − (3/4) × 0.9149 − (1/4) × 0 = 0.1251   (2.28)

2.2.1.7 Humidity
Values of attribute 'Humidity' are: Values(Humidity) = Moderate



For value, ‘Moderate’, the number of times ‘Moderate’ appears is 4 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for Journey appears once, denoted by [3+,1−]. Thus, the entropy for ‘Moderate’ is:

Entropy(Moderate) = −(3/4) log2(3/4) − (1/4) log2(1/4) = 0.8113   (2.29)

Thus, the information gain for attribute 'Humidity' is:
Gain(Dataset, Humidity) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.8113 − (4/4) × 0.8113 = 0   (2.30)

2.2.1.8 Distance
Values of attribute 'Distance' are: Values(Distance) = Near, Moderate, Far

For value, ‘Near’, the number of times ‘Near’ appears is 1 time among which ‘Yes’ for ‘Journey’ appears once and ‘No’ for Journey does not appear, that is, ‘No’ appears 0 times, denoted by [1+,0 −]. Thus, the entropy for ‘Near’ is:

Entropy(Near) = −(1/1) log2(1/1) − (0/1) log2(0/1) = 0   (2.31)

For value, ‘Moderate’, the number of times ‘Moderate’ appears is 1 time among which ‘Yes’ for ‘Journey’ appears once and ‘No’ for ‘Journey’ does not appear, that is, ‘No’ appears 0 times, denoted by [1+,0 −]. Thus, the entropy for ‘Moderate’ is:

Entropy(Moderate) = −(1/1) log2(1/1) − (0/1) log2(0/1) = 0   (2.32)


For value, ‘Far’, the number of times ‘Far’ appears is 2 times among which ‘Yes’ for ‘Journey’ appears once and ‘No’ for ‘Journey’ appears once, denoted by [1+,1−]. Thus, the entropy for ‘Far’ is:

Entropy(Far) = −(1/2) log2(1/2) − (1/2) log2(1/2) = 1   (2.33)

Thus, the information gain for attribute 'Distance' is:
Gain(Dataset, Distance) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.8113 − (1/4) × 0 − (1/4) × 0 − (2/4) × 1 = 0.3113   (2.34)

2.2.1.9 Expense
Values of attribute 'Expense' are: Values(Expense) = Low, High

For value, ‘Low’, the number of times ‘Low’ appears is 3 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for Journey appears 0 times, denoted by [3+,0 −]. Thus, the entropy for ‘Low’ is:

Entropy(Low) = −(3/3) log2(3/3) − (0/3) log2(0/3) = 0   (2.35)

For value, ‘High’, the number of times ‘High’ appears is once in which ‘Yes’ for ‘Journey’ does not appear and ‘No’ for ‘Journey’ appears once, denoted by [0 +,1−]. Thus, the entropy for ‘High’ is:

Entropy(High) = −(0/1) log2(0/1) − (1/1) log2(1/1) = 0   (2.36)

Thus, the information gain for attribute 'Expense' is:
Gain(Dataset, Expense) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.8113 − (3/4) × 0 − (1/4) × 0 = 0.8113   (2.37)

2.2.1.10 Outlook
Values of attribute 'Outlook' are: Values(Outlook) = Sunny, Overcast

For value, ‘Sunny’, the number of times ‘Sunny’ appears is 3 times among which ‘Yes’ for ‘Journey’ appears 3 times and ‘No’ for Journey appears 0 time, denoted by [3+,0 −]. Thus, the entropy for ‘Sunny’ is:

Entropy(Sunny) = −(3/3) log2(3/3) − (0/3) log2(0/3) = 0   (2.38)


For value, ‘Overcast’, the number of times ‘Overcast’ appears is 1 time among which ‘Yes’ for ‘Journey’ does not appear and ‘No’ for ‘Journey’ appears once, denoted by [0 +,1−]. Thus, the entropy for ‘Overcast’ is:

Entropy(Overcast) = −(0/1) log2(0/1) − (1/1) log2(1/1) = 0   (2.39)

Thus, the information gain for attribute 'Outlook' is:
Gain(Dataset, Outlook) = Entropy(Dataset) − Σ (proportion of each value of A × entropy of that value)
= 0.8113 − (3/4) × 0 − (1/4) × 0 = 0.8113   (2.40)

In gist, the attributes and their respective gains are shown in Table 2.4. The maximum gain in Table 2.4 is 0.8113, for the attributes Expense and Outlook. If Expense is taken as the next node in the branch in order to break the tie, then the decision tree will look as shown in Figure 2.5.

TABLE 2.4
Attribute – Gain Final Values

Attribute      Gain
Temperature    0.1251
Humidity       0
Distance       0.3113
Expense        0.8113
Outlook        0.8113

FIGURE 2.5  Final decision tree.
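One way (an illustrative encoding, not from the book) to store the final tree of Figure 2.5 in code and use it to classify a new month is sketched below in Python; the nested-dictionary layout and function name are assumptions for illustration only.

tree = {
    "Humidity": {
        "Dry": "Yes",
        "Humid": "No",
        "Moderate": {"Expense": {"Low": "Yes", "High": "No"}},
    }
}

def classify(tree, sample):
    # Walk the nested dictionary until a leaf label ('Yes'/'No') is reached.
    node = tree
    while isinstance(node, dict):
        attribute = next(iter(node))                # e.g. 'Humidity'
        node = node[attribute][sample[attribute]]   # follow the branch for this sample
    return node

print(classify(tree, {"Humidity": "Moderate", "Expense": "Low"}))  # Yes
print(classify(tree, {"Humidity": "Humid"}))                       # No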


2.2.2 C4.5
This section applies C4.5 to the same dataset shown in Table 2.1. The gains for the attributes were shown in Table 2.2. Here, the SplitInfo of each attribute has to be calculated by expression (2.40):

SplitInfo(Attribute) = −Σ (proportion of each value of the attribute) × log2(proportion of each value of the attribute)   (2.40)

The values of SplitInfo for different attributes are given below.

SplitInfo(Temperature) = −(3/12) log2(3/12) − (6/12) log2(6/12) − (3/12) log2(3/12) = 1.5   (2.41)

[Since, for the attribute ‘Temperature’, the values are: Cold, Mild, and Hot. ‘Cold’ appears 3 times; ‘Mild’ appears 6 times; ‘Hot’ appears 3 times]

SplitInfo(Humidity) = −(4/12) log2(4/12) − (4/12) log2(4/12) − (4/12) log2(4/12) = 1.5830   (2.42)

[Since, for the attribute ‘Humidity’, the values are: Dry, Moderate, Humid. ‘Dry’ appears 4 times; ‘Moderate’ appears 4 times; ‘Humid’ appears 4 times]

SplitInfo(Distance) = −(3/12) log2(3/12) − (4/12) log2(4/12) − (5/12) log2(5/12) = 1.554   (2.43)

[Since, for the attribute ‘Distance’, the values are: Near, Moderate, Far. ‘Near’ appears 3 times; ‘Moderate’ appears 4 times; ‘Far’ appears 5 times]

SplitInfo(Expense) = −(9/12) log2(9/12) − (2/12) log2(2/12) − (1/12) log2(1/12) = 1.0406   (2.44)

[Since, for the attribute ‘Expense’, the values are: Low, Moderate, High. ‘Low’ appears 9 times; ‘High’ appears 2 times; ‘Moderate’ appears once.]

SplitInfo(Outlook) = −(7/12) log2(7/12) − (3/12) log2(3/12) − (2/12) log2(2/12) = 1.3858   (2.45)

[Since, for the attribute 'Outlook', the values are Sunny, Overcast, and Rainy: 'Sunny' appears 7 times; 'Rainy' appears 3 times; 'Overcast' appears 2 times.] The gain ratio for each of these attributes is given below. The gains for the attributes Temperature, Humidity, Distance, Expense, and Outlook are given in expressions (2.8), (2.12), (2.16), (2.20), and (2.24), respectively.

GR(Temperature) = Gain / SplitInfo(Temperature) = 0.2511 / 1.5 = 0.1674   (2.46)



GR(Humidity) = Gain / SplitInfo(Humidity) = 0.7094 / 1.5830 = 0.4481   (2.47)



GR(Distance) = Gain / SplitInfo(Distance) = 0.3048 / 1.554 = 0.1961   (2.48)




GR(Expense) = Gain / SplitInfo(Expense) = 0.2936 / 1.0406 = 0.2821   (2.49)



GR(Outlook) = Gain / SplitInfo(Outlook) = 0.4721 / 1.3848 = 0.3409   (2.50)

Among these gain ratios, the highest is 0.4481, for the attribute 'Humidity'; thus, 'Humidity' is taken as the root node. The values of Humidity are Dry, Moderate, and Humid: 'Dry' appears 4 times [4+,0−], 'Moderate' appears 4 times [3+,1−], and 'Humid' appears 4 times [0+,4−]. The value of 'Journey' is 'Yes' for all 4 occurrences of 'Dry' and 'No' for all 4 occurrences of 'Humid'; thus, the leaf nodes for the branches 'Dry' and 'Humid' will be 'Yes' and 'No', respectively. The resultant decision tree is the one shown in Figure 2.4, which is incomplete because of the unresolved branch for 'Moderate' (the bracketed symbols represent the month numbers). For the 'Moderate' months M3, M4, M9, and M10, neither 'Yes' nor 'No' can be the leaf, so for these months only the data from Table 2.1 are considered, as shown in Table 2.3. The remaining execution of the algorithm is similar, except that the gain ratio is calculated instead of the gain alone. The final decision tree is the same as in Figure 2.5. The next section discusses the time complexity of decision trees.
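A brief Python sketch of the gain-ratio step is given below. It is illustrative only: the value counts are taken from Table 2.1, the gains from Table 2.2, and the printed numbers agree with expressions (2.41)–(2.50) up to small rounding differences.

from math import log2

def split_info(value_counts):
    total = sum(value_counts)
    return -sum(c / total * log2(c / total) for c in value_counts)

attributes = {
    #              value counts, gain (Table 2.2)
    "Temperature": ([3, 6, 3],  0.2511),
    "Humidity":    ([4, 4, 4],  0.7094),
    "Distance":    ([3, 4, 5],  0.3048),
    "Expense":     ([9, 2, 1],  0.2936),
    "Outlook":     ([7, 3, 2],  0.4721),
}
for name, (counts, g) in attributes.items():
    si = split_info(counts)
    print(f"{name}: SplitInfo = {si:.4f}, GainRatio = {g / si:.4f}")
# Humidity again has the largest gain ratio (about 0.45), so C4.5 also picks it as the root.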

2.3  TIME COMPLEXITY OF DECISION TREE
Decision tree is one of the most powerful learning techniques, as evident from the existing literature, and with the increasing amount of data in the practical world its application has become more and more relevant. Calculating the complexity of decision trees has therefore become a major issue, although very few works have been reported on the complexity analysis of decision trees. In this chapter, two decision tree algorithms, ID3 and C4.5, have been discussed; there are many other algorithms, as mentioned before. These algorithms are called induction algorithms, and the most famous of them are ID3, C4.5, and CART.

The complexity relevant to this section is of two types – sample complexity and computational complexity. Theoretical estimation of the resources used is essential for computational complexity, which can be expressed in terms of runtime analysis. Runtime analysis depends on the machine configuration, so, in order to avoid machine dependence, asymptotic notation is applied. Asymptotic analysis depends on the size of the input to the algorithm and takes a pessimistic approach in the sense that it considers the worst-case scenario. The C4.5 algorithm takes a recursive, divide-and-conquer approach and basically depends on the computation of entropy-based gain ratios for the attributes and the entropy of the input variables. The estimated computational complexity of calculating the probability distribution is O(n), where n is the number of instances in the dataset; the complexity of the computation for each input attribute is O(n log2 n); and for all m attributes, the total complexity is O(mn log2 n). Combining these three together, the total complexity is (Sani et al., 2018):

Complexity(C4.5) = O(n) + O(n log2 n) + O(mn log2 n)   (2.51)

The computational complexities of ID3 and CART are also similar since the difference only lies in splitting criteria – splitting criterion for ID3 is gains of attributes; splitting criterion for C4.5 is gain ratio; and splitting criterion for CART is Gini index. For detailed understanding of the analysis and computation procedures of various complexities, the work of Martin and Hirschberg (1995) may be consulted.


TABLE 2.5
Examples of Applications of Decision Tree

Article                         Application
Al-Obeidat et al. (2015)        Applied an improved fuzzy decision tree as a classification approach for satellite and airborne images used in Geographical Information Systems (GIS); the algorithm is basically based on the C4.5 decision tree algorithm.
Zhang et al. (2018)             Combined Multi-Layer Perceptron Neural Network, Support Vector Machine, and decision tree approaches for soil classification in China.
Mantas and Abellán (2014)       Applied a decision tree approach to get rid of noise in noisy data based on the classification approach of decision trees.
Liu et al. (2017)               Applied decision tree as a data mining approach to analyze loss of grains in storage.
Nowak and Nowaka (2013)         Combined Multi-Criteria Decision Making with decision tree for a project planning application.
De Caigny et al. (2018)         Applied decision tree in customer churn prediction.
Ebtehaj et al. (2018)           Combined decision tree with Artificial Neural Network for prediction of sediment transport in clean pipes.
Nourani and Molajou (2017)      Applied a hybrid decision tree for drought monitoring.
Li et al. (2013)                Combined Artificial Neural Network with decision tree to predict petroleum production possibility.
Beucher et al. (2019)           Combined Artificial Neural Network with decision tree to predict soil drainage classes in Denmark.
Bamber and Evans (2016)         Performed decision tree analysis for anesthetic care in obstetrics.
Schetinin et al. (2018)         Applied a decision tree model to measure uncertainty in trauma severity.
Banerjee and Chowdhury (2015)   Combined Case-Based Reasoning (CBR) with decision tree and applied the hybrid algorithm to detect retinal abnormalities.
Zhou and Wang (2012)            Applied decision tree in pavement maintenance and rehabilitation.

2.4  VARIOUS APPLICATIONS OF DECISION TREE
Decision tree has been applied in a wide variety of applications as a decision analysis, classification, or learning technique. For example, Kamadi et al. (2016) combined a SLIQ decision tree approach with principal component analysis for the diagnosis of diabetic patients; Gao et al. (2018) applied the C4.5 decision tree algorithm to oil refinery scheduling; and Smith and Metaxas (2018) used a decision tree approach to investigate connectivity issues in a marine protected area network. Table 2.5 shows some other applications of decision trees and illustrates the significant variety of application fields.

2.5 CONCLUSION
This chapter has shown various aspects of decision tree. Starting with the basic concepts, the chapter shows how to construct a decision tree. Various algorithms for constructing decision trees from real data are mentioned, and two of them, ID3 and C4.5, are depicted with a detailed numerical example based on a case study. After this, a very brief outline of the complexity issue for decision trees is given, followed by various applications of decision trees in different fields of study. The readers are expected to get an overview of the various aspects of decision tree from this chapter.

REFERENCES
Al-Obeidat, F., Al-Taani, A. T., Belacel, N., Feltrin, L., Banerjee, N. (2015). A fuzzy decision tree for processing satellite images and landsat data. Procedia Computer Science, 52, 1192–1197.


Bamber, J. H., Evans, S. A. (2016). The value of decision tree analysis in planning anaesthetic care in obstetrics. International Journal of Obstetric Anesthesia, 27, 55–61. Banerjee, S., Chowdhury, A. R. (2015). Case based reasoning in the detection of retinal abnormalities using decision trees. Procedia Computer Science, 46, 402–408. Beucher, A., Møller, A. B., Greve, M. H. (2019). Artificial neural networks and decision tree classification for predicting soil drainage classes in Denmark. Geoderma, 352, 351–359. De Caigny, A., Coussement, K., De Bock, K. W. (2018). A new hybrid classification algorithm for customer churn prediction based on logistic regression and decision trees. European Journal of Operational Research, 269(2), 760–772. Ebtehaj, I., Bonakdari, H., Zaji, A. H. (2018). A new hybrid decision tree method based on two artificial neural networks for predicting sediment transport in clean pipes. Alexandria Engineering Journal, 57(3), 1783–1795. Gao, X., Huang, D., Jiang, Y., Chen, T. (2018). A decision tree based decomposition method for oil refinery scheduling. Chinese Journal of Chemical Engineering, 26, 1605–1612. Kamadi, V. S. R. P. V., Allam, A. R., Thummala, S. M., Nageswara Rao, P.V. (2016). A computational intelligence technique for the effective diagnosis ofdiabetic patients using principal component analysis (PCA) andmodified fuzzy SLIQ decision tree approach. Applied Soft Computing, 49, 137–145. Li, X., Chan, C. W., Nguyen, H. H. (2013). Application of the neural decision tree approach for prediction of petroleum production. Journal of Petroleum Science and Engineering, 104, 11–16. Liu, X., Li, B., Shen, D., Cao, J., Mao, B. (2017). Analysis of grain storage loss based on decision tree algorithm. Procedia Computer Science, 122, 130–137. Mantas, C. J., Abellán, J. (2014). Analysis and extension of decision trees based on imprecise probabilities: Application on noisy data. Expert Systems with Applications, 41(5), 2514–2525. Martin, J. K., Hirschberg, D. S. (1995). The Time Complexity of Decision Tree Induction. UCI Department of Computer Science, Donald Bren School of Information & Computer Sciences. Nourani, V., Molajou, A. (2017). Application of a hybrid association rules/decision tree model for drought monitoring. Global and Planetary Change, 159, 37–45. Nowak, M., Nowaka, B. (2013). An application of the multiple criteria decision tree in project planning. Procedia Technology, 9, 826–835. Sani, H. M., Lei, C., Neagu, D. (2018). Computational complexity analysis of decision tree algorithms. In: Bramer, M., Petridis, M. (eds) Artificial Intelligence XXXV, SGAI 2018. Lecture Notes in Computer Science. Springer: Cham, vol. 11311: 191–197. Schetinin, V., Jakaite, L., Krzanowski, W. (2018). Bayesian averaging over decision tree models: An application for estimating uncertainty in trauma severity scoring. International Journal of Medical Informatics, 112, 6–14. Smith, J., Metaxas, A. (2018). A decision tree that can address connectivity in the design of Marine Protected Area Networks (MPAn). Marine Policy, 88, 269–278. Zhang, X., Liu, H., Zhang, X., Yu, S., Dou, X., Xie, Y., Wang, N. (2018). Allocate soil individuals to soil classes with topsoil spectral characteristics and decision trees. Geoderma, 320, 12–22. Zhou, G., Wang, L. (2012). Co-location decision tree for enhancing decision-making of pavement maintenance and rehabilitation. Transportation Research Part C: Emerging Technologies, 21(1), 287–305.

3

Decision Table

3.1  BASIC CONCEPT
Decision table is a less applied technique for decision-making compared with decision tree, as evident from the existing literature. Decision tables started to be used in the early 1960s. Table 3.1 shows an overview of the evolution of their use. Till December 2021, there are a total of 9038 articles on decision table in the Sciencedirect journal database, 2841 articles in the Springerlink journal database, 1535 research articles in the Taylor and Francis journal database, 51 research articles in the IEEE journal database, and so on. The interest in research on decision table stems from the fact that a decision table is a kind of visualization for conditional decision-making; the comparatively lower level of research, compared with many other decision-making tools, lies in the difficulty and internal flexibility issues of decision tables.

A decision table is divided into four parts – conditions, actions, stub, and entries, as shown in Figure 3.1. In the condition stub section, all the conditions are listed; in the condition entries section, the applicable conditions are marked; in the action stub section, all the actions to be taken for the set of conditions are listed; and in the action entries section, all the appropriate actions to take for the appropriately marked conditions are shown.

For example, consider the set of if-then-else statements shown in Figure 3.2; the respective decision table is shown in Table 3.2. Consider another example as shown in Figure 3.3; the respective decision table is shown in Table 3.3. In both Tables 3.2 and 3.3, if a particular condition is true then the symbol 'T' is put in the respective column of the condition entry section. For a set of conditions, a particular action in the action entry section is marked against the corresponding action in the action stub. For example, in Table 3.3, if the exhibition happens in Giga Hall (indicated by 'T' as the first entry in column 1 of the condition entries), the exhibition is on abstract art (indicated by 'T' as the second entry in column 1), and Monday is not a rainy day (indicated by 'T' as the third entry in column 1), then a mark is placed in column 1 of the action entry section against the action "visit exhibition on Monday" in the action stub. Likewise, the other entries in Tables 3.2 and 3.3 are marked. In this way, if-then-else logic is easily converted into a decision table structure, which is easier to read, interpret, and understand; there lies the advantage of decision tables. The basic advantages include the following:

• Numerous conditions can be visualized through a decision table
• Decision table provides a more compact presentation

TABLE 3.1
History of Decision Table

Approximate Year / Time   Event
1958                      Experimental use of decision table by General Electric Company through the programming language TABSOL
Early 1960                Use of decision table programming language by IBM; inclusion of TABSOL in a procedural language called GECOM
1962                      FORTRAN-enabled decision table programming language FORTAB
1963                      Popularity of the LOBOC programming language
1965                      Variation of DETAB/65 came into use
1966 and later            Further improvements and later advancements described later in this chapter


FIGURE 3.1  Structure of decision table.

FIGURE 3.2  A sample set of if-then-else statements.

• Decision table can be modified easily
• The fixed format of the decision table facilitates easy identification of different conditions and actions

The disadvantages of decision tables are delineated below.

• Decision tables can only represent the cause-and-effect scenario, but are unable to represent the flow of logic.
• As a result, decision tables only provide a partial solution.


TABLE 3.2
Decision Table for Problem in Figure 3.2

                                        Rule 1   Rule 2   Rule 3   Rule 4
Heavy rain                              T        F        F        F
Medium rainfall                         F        T        F        F
Little rain                             F        F        T        F
Cloudy                                  F        F        F        T
Take raincoat                           T        F        F        F
Take umbrella                           T        T        T        T
Take towel                              T        T        F        F
Try to complete work at the earliest    T        T        F        T
Wait till rain stops                    F        F        T        F
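A small sketch of how such a limited-entry table can be evaluated in code is given below. It is not from the book; the rule and action entries follow the reconstruction of Table 3.2 above, and the data structure and function name are illustrative assumptions.

CONDITIONS = ["Heavy rain", "Medium rainfall", "Little rain", "Cloudy"]
RULES = [
    # condition entries (one per condition, in order), actions marked for that rule
    (["T", "F", "F", "F"], ["Take raincoat", "Take umbrella", "Take towel",
                            "Try to complete work at the earliest"]),
    (["F", "T", "F", "F"], ["Take umbrella", "Take towel",
                            "Try to complete work at the earliest"]),
    (["F", "F", "T", "F"], ["Take umbrella", "Wait till rain stops"]),
    (["F", "F", "F", "T"], ["Take umbrella", "Try to complete work at the earliest"]),
]

def actions_for(observed):
    # observed maps each condition name to True/False
    for entries, actions in RULES:
        if all((e == "T") == observed[c] for c, e in zip(CONDITIONS, entries)):
            return actions
    return []

print(actions_for({"Heavy rain": False, "Medium rainfall": False,
                   "Little rain": True, "Cloudy": False}))
# ['Take umbrella', 'Wait till rain stops']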

FIGURE 3.3  Sample Example 2.

TABLE 3.3
Decision Table for Example 2

                                   Rules
                                   1   2   3   4   5   6   7   8   9   10  11
Exhibition in Giga Hall            T   T   T   T   T   T   T   F   F   F   F
Exhibition on abstract art         T   T   T   F   F   F   F   T   T   T   T
Monday is not rainy day            T   F   F   T   T   F   F   T   T   F   F
Tuesday is not rainy day           F   T   F   T   F   T   F   T   F   T   F
Visit exhibition on Monday         T   F   F   F   F   F   F   F   F   F   F
Visit exhibition on Tuesday        F   T   F   F   F   T   F   F   F   F   F
Visit exhibition on Sunday         F   F   T   F   F   F   T   F   F   F   F
Fix a day of tour                  F   F   F   T   T   T   T   F   F   F   F
Take tour of Charles Garden        F   F   F   T   T   T   T   F   F   F   F
Visit exhibition on day of tour    F   F   F   T   T   T   T   F   F   F   F
Do not visit exhibition            F   F   F   F   F   F   F   T   T   T   T


Because of the above reasons, decision table has limited applications in practical situations, mainly as a documentation tool. A decision table can, however, be represented in the following three basic ways, each of which is depicted below.

• Limited entry decision table
• Extended entry decision table
• Mixed entry decision table

3.1.1  Limited Entry Decision Table
Limited entry decision table is a type of decision table in which the condition stub exactly mentions the conditions and/or the values. Both Tables 3.2 and 3.3 are examples of complete decision tables in primitive form, in which all the condition entries are filled with T or F, that is, all the conditions are considered. In a complete decision table, every situation is considered and all the conditions are satisfied. Such tables can be simplified to a limited entry decision table in which only the required conditions and the respective entries are specified. The limited entry form of the example in Table 3.3 is shown in Table 3.4. All the blank entries in the condition rows indicate 'don't care', and blanks in the action entries are the 'don't perform' ones; any non-blank entry represents 'perform'. For example, if the exhibition happens in Giga Hall and the exhibition is on abstract art, then irrespective of the values for 'Monday is not a rainy day' and 'Tuesday is not a rainy day', the actions will be 'fix a day of tour', 'take a tour of Charles Garden', and 'visit exhibition on day of tour'. For this reason, columns 4, 5, 6, and 7 of Table 3.3 have been replaced by the single column 4. Similar reasoning applies to columns 8, 9, 10, and 11, which have been replaced by column 5 (corresponding to column 8 of Table 3.3) in Table 3.4. Thus, a total of 11 columns in Table 3.3 have been reduced to five columns in Table 3.4 because of the limited entry principle.

The conditions of the complete decision table are reduced in the limited entry decision table, which also reduces redundancy. For example, rules 4, 5, 6, and 7 were redundant, and thus unnecessary, in Table 3.3; this redundancy has been removed in the limited entry decision table, which is therefore complete and consistent as well. If the actions of two different rules are the same, such rules are sometimes also treated as inconsistent; in Table 3.3, rules 4 and 5 result in the same actions.

TABLE 3.4
Limited Entry Decision Table

                                   Rules
                                   1   2   3   4   5
Exhibition in Giga Hall            T   T   T   T   F
Exhibition on abstract art         T   T   T   F
Monday is not rainy day            T       F
Tuesday is not rainy day               T   F
Visit exhibition on Monday         T
Visit exhibition on Tuesday            T
Visit exhibition on Sunday                 T
Fix a day of tour                              T
Take tour of Charles Garden                    T
Visit exhibition on day of tour                T
Do not visit exhibition                            T


TABLE 3.5
Example of Decision Table for Rule Merging

     Rule1  Rule2  Rule3  Rule4  Rule5  Rule6  Rule7  Rule8  Rule9
C1   T      F      T      F      T      F      T      F      F
C2   F      F      T      F      T      T      F      T      T
C3   F      F      F      T      F      T      T      F      F
A1                               X                    X
A2   X      X
A3                 X      X             X      X             X

TABLE 3.6
Merging Applied on Table 3.5

     Rule 1'  Rule 3'  Rule 5'  Rule6  Rule7  Rule10
C1   T        T        T        F      T      F
C2   F        T        T        T      F      T
C3   F        T        F        T      T      F
A1                     X
A2   X
A3            X                 X      X      X

Rule 1 Rule 2 → Rule 1'; Rule 3 Rule 4 → Rule 3'; Rule 5 Rule 8 → Rule 5'

Merging of rules can thus happen in the case of limited entry decision tables. Consider the decision table shown in Table 3.5. In Table 3.5, Rule 1 and Rule 2 can be merged since, whatever the value of C1, if the values of C2 and C3 are 'F' then the action is A2: X. Similarly, Rule 3 and Rule 4 can be merged since, irrespective of the value of C3, if the values of C1 and C2 are 'T' and 'F', respectively, then the action is A3: X; and Rule 5 and Rule 8 can be merged since, irrespective of the value of C1, if the values of C2 and C3 are 'T' and 'F', respectively, then the action is A1: X. After these mergings, the results are shown in Table 3.6: a total of nine rules in Table 3.5 have been reduced to six rules in Table 3.6.
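A minimal Python sketch of this merging idea is given below; it is an illustration (function name and rule encoding are assumptions, not the book's notation): two limited-entry rules whose actions are identical and whose condition entries differ in exactly one position are merged, with that position becoming a don't-care ('-').

def try_merge(rule_a, rule_b):
    (cond_a, act_a), (cond_b, act_b) = rule_a, rule_b
    if act_a != act_b:
        return None
    diff = [i for i, (x, y) in enumerate(zip(cond_a, cond_b)) if x != y]
    if len(diff) != 1:
        return None
    merged = list(cond_a)
    merged[diff[0]] = "-"          # this condition no longer matters
    return (merged, act_a)

# Rule 1 and Rule 2 of Table 3.5 (conditions C1, C2, C3; both lead to action A2):
rule1 = (["T", "F", "F"], "A2")
rule2 = (["F", "F", "F"], "A2")
print(try_merge(rule1, rule2))     # (['-', 'F', 'F'], 'A2')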

3.1.2 Extended Entry Decision Table
Further simplification of the limited entry decision table is possible through the extended entry decision table, which reduces the number of rows below that of the limited entry decision table. Extended entry decision tables are closer to natural human communication and are more dependent on the natural base language. For example, the equivalent extended entry decision table for the example in Table 3.4 can be presented as in Table 3.7. Table 3.7 shows that the third and fourth conditions in Table 3.4 have been combined into the third condition of Table 3.7. Similarly, the first and second actions in the action stub of Table 3.4 have been combined into the first action in the action stub of Table 3.7, and the fourth, fifth, and sixth actions in the action stub of Table 3.4 have been combined into the fourth action in Table 3.7. As a result, the number of columns has also been reduced. Thus, the extended entry decision table is a concise form of the limited entry decision table.


TABLE 3.7
Example on Extended Entry Decision Table

                                                        1   2   3   4
Exhibition in Giga Hall                                 T   T   T   F
Exhibition on abstract art                              T   T   F
Monday or Tuesday is not rainy day                      T
Visit exhibition on Monday or Tuesday                   T
Visit exhibition on Sunday                                  T
Visit Charles Garden and exhibition on any other day            T
Do not visit exhibition                                             T

FIGURE 3.4  Statements for example on mixed entry decision table.

TABLE 3.8
Decision Table for Example in Figure 3.4

               Rule1    Rule2             Rule3              Rule4              Rule5
Amount =       ≤ 500    >500 and ≤ 1000   >1000 and ≤ 2000   >2000 and ≤ 5000   > 5000
Discount =     0%       5%                10%                20%                20%
Hand blender                                                                    X
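The logic captured by Table 3.8 (the exact statements of Figure 3.4 are not reproduced here) can be sketched in Python as below; the thresholds follow the reconstructed table above, and the function name and return convention are illustrative assumptions.

def discount_and_gift(amount):
    if amount <= 500:
        return 0.00, None
    elif amount <= 1000:
        return 0.05, None
    elif amount <= 2000:
        return 0.10, None
    elif amount <= 5000:
        return 0.20, None
    else:
        return 0.20, "Hand blender"   # above 5000: same discount plus the free gift

print(discount_and_gift(1500))   # (0.10, None)
print(discount_and_gift(7200))   # (0.20, 'Hand blender')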

3.1.3 Mixed Entry Decision Table
In a mixed entry decision table, rows of the limited entry type can be intermixed with rows of the extended entry type; however, the two types of entries cannot be intermixed in the same row. Some restrictions are always imposed in any decision table programming language in order to differentiate these two types of rows. To clarify the concept, consider the programming logic shown in Figure 3.4; the respective decision table is shown in Table 3.8. Table 3.8 uses only one row in the condition stub to specify all the conditions shown in Figure 3.4. In the action entries, only one row has been used for 'discount' and one row for the free gift (hand blender). Decision tables can be used in almost all stages of programming – the problem definition phase, analysis phase, programming phase, debugging phase, testing phase, documentation phase, and maintenance phase.


Each of the abovementioned types of decision tables has certain uses. For example, the limited entry decision table can be used for a concise representation of the logic behind a program or plan, and it can be checked for consistency, completeness, and redundancy; for this reason, an extended entry decision table can be expanded to a limited entry decision table for such checks. The basic advantage of all decision tables is that any error can be identified easily. However, decision tables are mostly suitable for the documentation phase. One of the major problems with decision tables is the identification of inconsistency and the resolution of such inconsistency issues. The following section therefore discusses the inconsistency issues for decision tables.

3.2  APPROACHES TO HANDLE INCONSISTENCY FOR DECISION TABLES
Two of the basic problems of decision tables are redundancy and inconsistency. A decision table is inconsistent when two or more decision rules have identical condition entries but different action entries. For example, consider Table 3.9, in which the condition entries for rule 1 and rule 3 are inconsistent: the condition entries for both rules are T-T-F (identical), but the action entries for rule 1 and rule 3 are different, that is, A1 and A3, respectively. There are different methods for handling inconsistent decision tables; these are delineated next. The existing literature has basically proposed data reduction methods for handling inconsistent decision tables. For example, Deng et al. (2010) handled inconsistency by the following steps; an example can then clarify the proposed method.

1. Attribute value reduction
2. Delete inconsistent entries
3. Attribute reduction

Consider the example shown in Table 3.10. In Table 3.10, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 represent the rules or individuals; P, Q, R are the condition attributes, and S is the decision attribute. The data in Table 3.10 show that the inconsistent individuals are {x1, x2, x3, x4, x5, x6, x9, x10}.

First of all, compute U/{P ∪ Q} and U/{P ∪ Q ∪ S} when R is omitted. The relevant table for this is shown in Table 3.11, from which the following results for U/{P ∪ Q} and U/{P ∪ Q ∪ S} are obtained. Observe that, for example, for each member of the set {x1, x4, x6, x9}, the values of P and Q are 0, 0, indicating inconsistency; that is how they have been grouped, and a similar grouping applies to the other elements as well. It can be inferred now, based on the method by Deng et al. (2010), that the attribute values {x1, x4, x6, x9}, {x2, x7, x10}, {x3, x5} mapping on attribute R are a possible core.

TABLE 3.9
Example of Inconsistent Decision Table

     1   2   3   4
C1   T   T   T   F
C2   T   F   T
C3   F       F   T
A1   T
A2       T
A3           T   T


TABLE 3.10
Example for Method as Proposed by Deng et al. (2010)

U     P   Q   R   S
x1    0   0   1   0
x2    1   0   0   1
x3    0   1   0   1
x4    0   0   1   2
x5    0   1   0   0
x6    0   0   1   1
x7    1   0   1   0
x8    1   1   1   1
x9    0   0   1   2
x10   1   0   0   0
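The partition of expression (3.1) and the set of inconsistent individuals quoted above can be recomputed from Table 3.10 with the short Python sketch below; it is an illustration only (the data dictionary and function name are assumptions), and an individual is flagged inconsistent when its condition values P, Q, R coincide with another individual's but the decision S differs.

from collections import defaultdict

ROWS = {  # individual: (P, Q, R, S)
    "x1": (0, 0, 1, 0), "x2": (1, 0, 0, 1), "x3": (0, 1, 0, 1),
    "x4": (0, 0, 1, 2), "x5": (0, 1, 0, 0), "x6": (0, 0, 1, 1),
    "x7": (1, 0, 1, 0), "x8": (1, 1, 1, 1), "x9": (0, 0, 1, 2),
    "x10": (1, 0, 0, 0),
}

def partition(attr_indices):
    groups = defaultdict(list)
    for name, values in ROWS.items():
        groups[tuple(values[i] for i in attr_indices)].append(name)
    return list(groups.values())

print(partition([0, 1]))   # U/{P ∪ Q}: [[x1,x4,x6,x9], [x2,x7,x10], [x3,x5], [x8]]

inconsistent = sorted(
    {name for group in partition([0, 1, 2])                 # group on P, Q, R
          for name in group
          if len({ROWS[m][3] for m in group}) > 1},          # more than one S value
    key=lambda n: int(n[1:]))
print(inconsistent)   # ['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x9', 'x10']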

TABLE 3.11
Concise Example for Omitted R

U     P   Q   S
x1    0   0   0
x2    1   0   1
x3    0   1   1
x4    0   0   2
x5    0   1   0
x6    0   0   1
x7    1   0   0
x8    1   1   1
x9    0   0   2
x10   1   0   0



U/{P ∪ Q} = {{x1, x4, x6, x9}, {x2, x7, x10}, {x3, x5}, {x8}}   (3.1)
U/{P ∪ Q ∪ S} = {{x1}, {x2}, {x3}, {x4}, {x5}, {x6}, {x7}, {x8}, {x9}, {x10}}   (3.2)

Similarly, compute U/{P ∪ R} and U/{P ∪ R ∪ S} when Q is omitted. The relevant table for this is shown in Table 3.12, from which the following results for U/{P ∪ R} and U/{P ∪ R ∪ S} are obtained. It can be inferred, based on the method by Deng et al. (2010), that the attribute values {x1, x4, x6, x9}, {x2, x10}, {x3, x5} mapping on attribute Q are a possible core.

U/{P ∪ R} = {{x1, x4, x6, x9}, {x2, x10}, {x3, x5}, {x7, x8}}   (3.3)
U/{P ∪ R ∪ S} = {{x1}, {x2}, {x3}, {x4, x9}, {x5}, {x6}, {x7}, {x8}, {x10}}   (3.4)

Similarly, compute U/{Q ∪ R} and U/{Q ∪ R ∪ S} when P is omitted. The relevant table for this is shown in Table 3.13, from which the following results for U/{Q ∪ R} and U/{Q ∪ R ∪ S} are obtained. It can be inferred, based on the method by Deng et al. (2010), that the attribute values {x1, x4, x6, x7, x9}, {x2, x10}, {x3, x5} mapping on attribute P are a possible core. The possible cores and the respective omitted attributes are summarized in Table 3.14.


TABLE 3.12
Concise Example for Omitted Q

U     P   R   S
x1    0   1   0
x2    1   0   1
x3    0   0   1
x4    0   1   2
x5    0   0   0
x6    0   1   1
x7    1   1   0
x8    1   1   1
x9    0   1   2
x10   1   0   0

TABLE 3.13
Concise Example for Omitted P

U     Q   R   S
x1    0   1   0
x2    0   0   1
x3    1   0   1
x4    0   1   2
x5    1   0   0
x6    0   1   1
x7    0   1   0
x8    1   1   1
x9    0   1   2
x10   0   0   0

U/{Q ∪ R} = {{x1, x4, x6, x7, x9}, {x2, x10}, {x3, x5}, {x8}}   (3.5)
U/{Q ∪ R ∪ S} = {{x1}, {x2}, {x3}, {x4, x9}, {x5}, {x6}, {x7}, {x8}, {x10}}   (3.6)

TABLE 3.14
Summary of Possible Cores

Possible Core                                   Omitted Attribute
{x1, x4, x6, x7, x9}, {x2, x10}, {x3, x5}       P
{x1, x4, x6, x9}, {x2, x10}, {x3, x5}           Q
{x1, x4, x6, x9}, {x2, x7, x10}, {x3, x5}       R

The resultant table after the above calculations is shown in Table 3.15. Next, the inconsistent individuals are deleted to obtain the result in Table 3.16. The existing literature has also proposed a significant number of other reduction methods, such as the discernibility matrix and methods based on information entropy. Since the focus of this book


TABLE 3.15
Resultant Table after Calculations

U     P   Q   R   S
x1    0   0   1   0
x2    1   0   0   1
x3    0   1   0   1
x4    -   -   -   2
x5    -   -   -   0
x6    -   -   -   1
x7    -   -   1   0
x8    1   1   -   1
x9    -   -   -   2
x10   -   -   -   0

TABLE 3.16
Result after Deleting the Inconsistent Individuals

U     P   Q   R   S
x7    -   -   1   0
x8    1   1   -   1

is not to detail the ideas on inconsistent decision tables, this topic is not explored further in this chapter; rather, a glimpse of the idea has been presented in this section.

3.3  DECISION TABLE LANGUAGES
Decision table programming is an important origin of decision table languages. There are two types of decision table languages based on purpose and use – documentation languages for human-to-human interaction and programming languages for human-to-machine interaction. A decision table is an element of a decision table language, and there are also languages devoted solely to decision tables; for these languages, there is an associated language called the outer language. Thus, the topics of concern are the outer language, the decision table, and the base language. The outer language defines the semantics for decision tables. Decision table syntax contains three different portions, namely the declaration, the body of the table, and markers that differentiate the rows and columns of the decision table. The declarative portion contains declarative elements such as name and caption; the body of the decision table contains rectangular arrays of character strings arranged in rows and columns; the delimiters between rows and columns are indicated through markers. The various components of a decision table language are depicted below.

3.3.1 Base Language
The base language can be represented by the basic if-then-else statements shown earlier in this chapter. For example, Figure 3.4 shows the base language from which a decision table can be drawn. However, translating the statements of the problem into the decision table format is not, on its own, enough for the proper presentation of a decision table; therefore, rule selection plays an essential role in decision table construction.


3.3.2 Rule Selection
Rule selection is a very important stage of decision table construction. All the conditions, explicitly or implicitly mentioned, must be identified first, followed by proper evaluation of those conditions. In deciding how many rules may be selected for a decision table, any of the following considerations may be adopted:

• At most one rule
• At least one rule
• Exactly one rule
• Zero or more rules

For the first and third cases, contradiction may arise; in the second and fourth cases, the decision table can be incomplete.
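A small sketch of checking the "exactly one rule" consideration in Python is given below. It is not from the book: the two-condition table, the don't-care symbol '-', and the function name are hypothetical, and the idea is simply that zero matches indicate incompleteness while two or more matches indicate a possible contradiction.

def matching_rules(rules, observed):
    # rules: list of condition-entry tuples using 'T', 'F' or '-' (don't care)
    hits = []
    for idx, entries in enumerate(rules, start=1):
        if all(e == "-" or (e == "T") == value
               for e, value in zip(entries, observed)):
            hits.append(idx)
    return hits

rules = [("T", "-"), ("F", "T"), ("F", "F")]    # hypothetical two-condition table
print(matching_rules(rules, (True, True)))      # [1]  -> exactly one rule fires
print(matching_rules(rules, (False, True)))     # [2]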

3.3.3 Outer Language
The outer language provides directive statements, such as "Go To" statements, to indicate the next action statement to be executed for a decision table. Thus, there is an interface between the decision table and the decision table language. Among the various decision table programming languages, some are table-dominant languages, in which programs are entirely composed of decision tables, while the other type is procedural languages, in which the decision table works as an element. Examples of the first type are rare, whereas examples of the second are substantial. The semantic conventions used in procedural languages are the following:

• Conditions should be Boolean expressions
• Conditions will be unordered
• Actions will be proper imperative statements
• Exactly one rule will be selected for invoking a table

Two different examples of decision table programming include:

• Update-Merge Program
• Field-Finder Program

However, different types of processors are used for different types of decision table languages, and accordingly the features of the decision table languages may vary. The features can be categorized into the following – semantics, outer language, syntax, implementation, and convenience. The features of the outer languages can be enlisted through the following points; for a detailed understanding of the various aspects of decision table languages, the work of Metzner and Barnes (1977) may be consulted.

• The outer language can contain the decision table as a linguistic element
• The decision table represents blocks of imperative code in a decision table procedural language
• Special verbs are used for control to and from these blocks
• This background language is called the host language for decision table programming
• Some examples of such languages are TAB40, BETAB-68, ALGOL 60, FORTAB, the System/360 Decision Logic Translator, and so on; ALGOL 60 is the base language for BETAB-68
• Some of the languages may not have provisions to invoke a decision table
• In languages implemented by preprocessors, the base language is augmented; the new verbs are, in that case, used only within decision tables
• In cases where tables can be either open or closed, invocation methods are applied


• Table-dominant languages generally use executable statements through a number of devices
• Examples of host languages – the host language for DETAB/65, DETAP, and DETOC is COBOL, the host language for FORTAB and TAB40 is FORTRAN, and the host language for TABSOL is GECOM

3.4  DIFFERENT MODIFICATIONS OF DECISION TABLE AND LATEST TREND
The recent literature places particular emphasis on measures of uncertainty for decision tables. Different methods have been used by different researchers; describing all of them in detail is beyond the scope of this chapter, so a glimpse of the latest trend in research on decision tables is presented in this section to give readers an overview of the current research scenario. For example, Zhang et al. (2021) dealt with uncertainty measurement for interval set decision tables using rough set theory. At first, a similarity relation is found based on a similarity degree; from there, a granular structure is defined, whose properties are investigated with respect to decision tables. The uncertainty of this granular structure is then found through interval approximation accuracy and interval approximation roughness via rough set theory, and the paper also proposes an uncertainty measure based on conditional information entropy.

The existing literature has also focused on the handling of inconsistent decision tables. For example, She et al. (2021) dealt with decision rules and multi-scale decision tables, focusing on "generalization reducts for objects, decision rules and multi-scale decision tables"; the Multi-Scale Decision Table (MSDT) had been proposed by Huang et al. (2021). Thuy et al. (2019) proposed a novel attribute reduction method based on "stripped quotient sets", leading to an algorithm called "SRED"; two types of entropies have been considered – Shannon's entropy and Liang's entropy – in the variant called 'ERED-SQS'. The application of rough sets to decision tables is significantly wide, as evident from the existing literature. For example, Huang et al. (2019) investigated optimal scale selection and rule acquisition by proposing two dominance-based rough sets and applied the proposed idea to multi-scale intuitionistic fuzzy decision tables. Other significant research studies include that of Wei et al. (2015). Another method to handle inconsistent decision tables is through the discernibility matrix, and many researchers have proposed ideas in this direction; for example, Liu et al. (2018) and Ge et al. (2017) investigated attribute reduction through the discernibility matrix for inconsistent decision tables. Other research studies on attribute reduction include those of Du and Hu (2016) and Liu et al. (2016). The recent literature has also investigated incomplete decision tables; such studies include that of Zhao and Qin (2014).

3.5  APPLICATIONS OF DIFFERENT TECHNIQUES ON DECISION TABLES

The existing literature shows a variety of applications of decision tables. Alsolami et al. (2020) presented a list of theoretical applications of decision tables, which includes the following:

• Inhibitory trees
• Other decision tree applications
• Dynamic programming
• Pareto optimal solutions
• Bi-criteria optimization
• Multi-stage optimization

The practical applications of decision tables are comparatively limited, although not small in number. Some applications of decision tables are listed in Table 3.17.


TABLE 3.17 Some Applications of Decision Table

Author and Year | Application
Azad et al. (2020) | Use of decision table in knowledge representation
Li et al. (2020) | Incremental learning with swarm decision table
Hodge et al. (2006) | Use of decision table as classifier
Zhang et al. (2008) | Application of decision table for classification of point sources for database applications
Li and Du (2005) | Application of decision table for fuzzy logic control for multi-area AGC system
Yang et al. (2005) | Application of decision table for development of an expert system for fault diagnosis
Zhou et al. (1996) | Application of decision table in test sequencing and diagnosis in electronic systems
Almustafa (2021) | Prediction of chronic kidney disease

3.6 CONCLUSION

This chapter has discussed various aspects of the decision table. The purpose is not to detail the concept but to provide a brief overview of its different aspects. Section 3.1 introduced and defined the decision table and explained the various types of decision tables with suitable examples; it also presented a brief history of the decision table, its overall structure, and its advantages and disadvantages. Section 3.2 introduced the problem of inconsistent decision tables, mentioned different methods to handle them, and illustrated one of the proposed methods from the existing literature with a detailed numerical example. Section 3.3 provided an overview of decision table languages; rather than describing one or more such languages in full, it presented a glimpse of their features. Section 3.4 gave a very brief overview of the latest trends in research on decision tables and the resulting modifications. Section 3.5 summarized various theoretical applications of decision tables, since relatively few practical applications are observed in the existing literature. The entire chapter thus presents an overall scenario of the area of decision tables.

REFERENCES Almustafa, K. M. (2021). Prediction of chronic kidney disease using different classification algorithms. Informatics in Medicine Unlocked, 24, 100631. Alsolami, F., Azad, M., Chikalov, I., Moshkov, M. (2020). Decision and Inhibitory Trees and Rules for Decision Tables with Many-valued Decisions. Intelligent Systems Reference Library, Volume 156, Springer, Switzerland. Azad, M., Chikalov, I., Moshkov, M. (2020). Representation of knowledge by decision trees for decision tables with multiple decisions. Procedia Computer Science, 176, 653–659. Deng, S., Li, M., Guan, S., Chen, L. (2010). A new method of data reduction in inconsistent decision tables. In 2010 IEEE International Conference on Granular Computing, pp. 139–142. IEEE. Du, W. S., Hu, B. Q. (2016). Attribute reduction in ordered decision tables via evidence theory. Information Sciences, 364, 91–110. Ge, H., Li, L., Xu, Y., Yang, C. (2017). Quick general reduction algorithms for inconsistent decision tables. International Journal of Approximate Reasoning, 82, 56–80. Hodge, V. J., O’Keefe, S., Austin, J. (2006). A binary neural decision table classifier. Neuro Computing, 69(16–18), 1850–1859. Huang, B., Li, H., Feng, G., Zhou, X. (2019). Dominance-based rough sets in multi-scale intuitionistic fuzzy decision tables. Applied Mathematics and Computation, 38, 487–512. Huang, Z., Li, J., Dai, W., Lin, R. (2021). Generalized multi-scale decision tables with multi-scale decision attributes. International Journal of Approximate Reasoning, 15 (34), 1–15. Li, P., Du, X. (2005). Decision table looking up approach for fuzzy logic control of multi-area agc systems. IFAC Proceedings Volumes, 38(1), 279–284.


Li, T., Fong, S., Wong, K. K., Wu, Y., Yang, X. S., Li, X. (2020). Fusing wearable and remote sensing data streams by fast incremental learning with swarm decision table for human activity recognition. Information Fusion, 60, 41–64. Liu, G., Hua, Z., Zou, J. (2016). A unified reduction algorithm based on invariant matrices for decision tables. Knowledge-Based Systems, 109, 84–89. Liu, G., Hua, Z., Zou, J. (2018). Local attribute reductions for decision tables. Information sciences, 422, 204–217. Metzner, J.R., Barnes, B.H. (1977). Decision Table Languages and Systems. Academic Press, Inc., New York. She, Y.-H., Qian, Z.-H., He, X.-L., Wang, J.-T., Qian, T., Zheng, W.-L. (2021). On generalization reducts in multi-scale decision tables. Information Sciences, 555, 104–124. Thuy, N.N., Wongthanavasu, S. (2019). On reduction of attributes in inconsistent decision tables based on information entropies and stripped quotient sets. Expert Systems with Applications, 137, 308–323. Wei, W., Wang, J., Liang, J., Mi, X., Dang, C. (2015). Compacted decision tables based attribute reduction. Knowledge-Based Systems, 86, 261–277. Yang, B. S., Lim, D. S., Tan, A. C. C. (2005). VIBEX: An expert system for vibration fault diagnosis of rotating machinery using decision tree and decision table. Expert Systems with Applications, 28(4), 735–742. Zhang, Y., Zhao, Y., Gao, D. (2008). Decision table for classifying point sources based on FIRST and 2MASS databases. Advances in Space Research, 41(12), 1949–1954. Zhang, Y., Jia, X., Tang, Z. (2021). Information-theoretic measures of uncertainty for interval-set decision tables. Information Sciences, 577, 81–104. Zhao, H., Qin, K. (2014). Mixed feature selection in incomplete decision table. Knowledge-Based Systems, 57, 181–190. Zhou, H., Qu, L., Li, A. (1996). Test sequencing and diagnosis in electronic system with decision table. Microelectronics Reliability, 36(9), 1167–1175.

4

Predicate Logic

4.1 INTRODUCTION

A formal language can be thought of as a type of language that has a syntax to express meanings; this is also the difference between formal and natural languages. Predicate and propositional logic are parts of this formal language. Predicate logic, also known as first-order logic, was invented by Gottlob Frege (1848–1925). A predicate can be thought of as the property of an object or individual in a natural-language sentence, and such a property can be expressed through various logical symbols, which will be listed in this section. Consider the sentences in Table 4.1 and the respective predicate part of each sentence. Thus, a predicate is a general expression which can represent a verb, a proverb, a verb and proverb, a verb and noun, a copula and adjective, or a copula and noun, as shown in Table 4.1. There are rules in predicate logic that are used to express mathematical statements, and these rules can differentiate between valid and invalid arguments; thus, predicate logic must be learned in order to represent correct mathematical arguments. Objects in a sentence can be either singular or plural. Plural objects can be represented by either a universal or an existential quantifier, denoted by the symbols ∀ and ∃ respectively. The universal quantifier ∀ represents "for all" and the existential quantifier ∃ represents "for some". Consider the following statements, all of which use universal quantifiers, represented by the word "all" or by starting the sentence with a plural noun.

• All plants are alive.
• All cows give milk.
• All girls in the class wear skirts.
• All countries have their flags.
• Boys in this campus play cricket.

Consider the following sentences for existential quantifiers.

• Some girls like Mathematics.
• There exist some umbrellas that are big.
• Some days this week are rainy.
• Some boys in school like to play football.
• There exist some story books on adventures.

Here, "some girls like Mathematics" is the same as saying "there exist some girls who like Mathematics". In addition to supporting mathematical reasoning, predicate and propositional logic have a variety of applications in Computer Science, such as designing computer circuits and computer programs. Both predicate calculus and propositional logic are parts of Discrete Mathematics.

TABLE 4.1 Examples of Sentences and Predicate Logic Parts

Sentence | Predicate
Peter lives in England | Lives in England
Sam is a singer | Is a singer
Michael loves painting | Loves painting
Mary is a dancer | Is a dancer
Tom likes fish | Likes fish


Let the symbol "x" represent the object or entity that exists. Thus, "some girls like Mathematics" may be represented by the expression ∃xP(x), where P(x) represents "girl x likes Mathematics". Conjunctive symbols can also be used, such as the AND operator represented by the symbol ∧. For example, consider the statement "girls like Mathematics and boys like football" and the predicates P(x): girl x likes Mathematics, and Q(y): boy y likes football. Then the statement can be represented by ∃x, y (P(x) ∧ Q(y)). The symbols that are used in predicate logic are listed below:

∧: AND operator (conjunctive symbol)
∨: OR operator (disjunctive symbol)
∀: for all (universal quantifier)
∃: there exists or for some (existential quantifier)
¬: negation operator
→: implication operator
⇔: equivalence operator
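To make the quantifiers concrete, the following short Python sketch (not from the book) models ∃ and ∀ over a finite domain with `any()` and `all()`; the domain, the dictionary, and the function name `P` are invented for illustration only.

```python
# Illustrative sketch: existential and universal quantification over a
# hypothetical finite domain of girls.
girls = ["Rita", "Mira", "Jo"]                       # invented domain
likes_mathematics = {"Rita": True, "Mira": False, "Jo": True}

def P(x):
    """P(x): 'girl x likes Mathematics'."""
    return likes_mathematics[x]

exists_P = any(P(x) for x in girls)                  # ∃x P(x)
forall_P = all(P(x) for x in girls)                  # ∀x P(x)
print(exists_P, forall_P)                            # True False
```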

In order to understand predicate and propositional logic, it is required to define proposition and propositional logic. A proposition is a statement that may be either true or false (Rosen, 1998). For example, consider the following statements. Observe that statements or propositions 1–3 and 5 are true whereas statements or propositions 4 and 6 are false.

1. 1 + 9 = 10
2. 6 – 2 = 4
3. Delhi is the capital of India.
4. Tokyo is the capital of China.
5. The taste of a mango is sweet.
6. Apple pie is sour.

Some statements are not propositions. Consider the following statements. Here, statement 1 is an instruction, statement 2 is a question, and no conclusion can be drawn for statements 3 and 4 since the values of x, Y, and z are not known.

1. Drop the ball.
2. Where do you live?
3. Y + 2 = 3
4. x/7 = z

However, the above-mentioned operators can be applied to propositions. Consider the following examples:

1. It is not the case that the ball is white.
2. Sam is tall and Jack is short.
3. Tomorrow will be either rainy or cloudy.


The above statements can be expressed through symbols. The symbolic form of statement 1 is ¬P(x), where P(x): The ball is white. The symbolic form of statement 2 is P(x) ∧ Q(x), where P(x): Sam is tall and Q(x): Jack is short. The symbolic form of statement 3 is P(x) ∨ Q(x), where P(x): Tomorrow is rainy and Q(x): Tomorrow is cloudy. Thus, the negation of any statement can be formed; for example, the negation of the statement "P(x): Tomorrow is rainy" can be expressed as "¬P(x): Tomorrow is not rainy". The logical values of a proposition can be expressed by a truth table, which lists the truth value of the proposition for every combination of input truth values. The truth tables for the basic operators are shown in Tables 4.2–4.7.

TABLE 4.2 Truth Table for AND Operator
X   Y   X ∧ Y
T   T   T
T   F   F
F   T   F
F   F   F

TABLE 4.3 Truth Table for OR Operator
X   Y   X ∨ Y
T   T   T
T   F   T
F   T   T
F   F   F

TABLE 4.4 Truth Table for Negation Operator
X   ¬X
T   F
F   T

TABLE 4.5 Truth Table for XOR Operator
X   Y   X ⊕ Y
T   T   F
T   F   T
F   T   T
F   F   F

TABLE 4.6 Truth Table for Implication
X   Y   X → Y
T   T   T
T   F   F
F   T   T
F   F   T

TABLE 4.7 Truth Table for Equivalence
X   Y   X ⇔ Y
T   T   T
T   F   F
F   T   F
F   F   T
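The binary tables above can be regenerated mechanically by enumerating all input combinations. The following minimal Python sketch (our helper names, not the book's) does this for the binary connectives of Tables 4.2, 4.3, and 4.5–4.7; the unary negation of Table 4.4 is simply Python's `not`.

```python
# Brute-force generation of the truth tables for the binary connectives.
from itertools import product

connectives = {
    "X AND Y": lambda x, y: x and y,
    "X OR Y":  lambda x, y: x or y,
    "X XOR Y": lambda x, y: x != y,
    "X -> Y":  lambda x, y: (not x) or y,   # implication
    "X <-> Y": lambda x, y: x == y,         # equivalence
}

for name, op in connectives.items():
    print(name)
    for x, y in product([True, False], repeat=2):
        print(f"  {x!s:5} {y!s:5} -> {op(x, y)}")
```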

Tables 4.2–4.7 show the truth tables for the AND, OR, negation, XOR, implication, and equivalence operators respectively. Table 4.2 shows that the result of the AND operation is true only when both inputs (X and Y) are true; Table 4.3 shows that the result of the OR operation is false only when both inputs are false; Table 4.4 shows that the truth value of the output is the opposite of that of the input; Table 4.5 shows that the output is true only for dissimilar inputs; Table 4.6 shows that the output is false only when the first input is true and the second input is false; Table 4.7 shows that the output is false for dissimilar inputs. These logical operators are called connectives, and the values true and false are known as Boolean values. Examples of each of these operators will clarify the concepts. Consider the following examples for the AND operator; observe that each of them contains the connective "and":

• Today is Monday and it is raining today.
• Garda is moody and Tom is whimsical.
• The capital of India is Delhi and the capital of Japan is Tokyo.
• This gun is a toy and Charles has a toy gun.
• Bob has a mouth organ and Ray has a drum.

These examples can be represented by the AND operator. For example, if P(x) represents "today is Monday" and Q(y) represents "it is raining today", then P(x) ∧ Q(y) represents "today is Monday and it is raining today". Some examples of the OR operator are given below:

• Either Monday will be rainy or Tuesday will be rainy.
• Charles will take either a backpack or a trolley bag.
• Star Movies today will show either Fast and Furious – Part 5 or Castaway.
• Either Tom plays tennis or plays basketball.
• Chan will purchase either a black or a white dress.

These examples can be represented by the OR operator. For example, if P(x) represents "Monday will be rainy" and Q(y) represents "Tuesday will be rainy", then P(x) ∨ Q(y) represents "either Monday will be rainy or Tuesday will be rainy".


Some examples of the negation operator are given below:

• Today is not Friday.
• Tintin is not a comic character.
• Titanic was not a ship.
• My name is not Susmita.
• The color of mustard seed is not red.

These examples can be represented by the negation operator. For example, if P(x) represents "today is Friday", then ¬P(x) represents "today is not Friday". The exclusive OR (XOR) operator can be expressed by the following logical identity:

P ⊕ Q = (P ∧ ¬Q) ∨ (¬P ∧ Q) = ¬(P ∧ Q) ∧ (P ∨ Q)    (4.1)

Some examples of the implication operator are given below. The implication "if x then y" can be read in several equivalent ways:

• "x implies y"
• "x is sufficient for y"
• "y if x"
• "if x, y"
• "y whenever x"
• "x only if y"
• "y is necessary for x"

Consider the following examples:

• Cloudy weather today indicates that it will rain today.
• The cricket pitch today indicates that the players can throw fastballs.
• The election will be held in March, implying that we may get a new Chief Minister.
• The smell of cooking gas indicates a gas leakage.
• Sam loves Operations Research, indicating that he likes Mathematics.

These examples can be represented by the implication operator. For example, if P(x) represents "the weather is cloudy today" and Q(y) represents "it will rain today", then P(x) → Q(y) represents "cloudy weather today indicates that it will rain today". The equivalence P ⇔ Q is a bi-conditional: it is true only when P and Q have the same truth value (both true or both false). It can be interpreted as "P if and only if Q". Some examples of the equivalence operator are given below:

• Beans will come if and only if Nita comes.
• Jacky will receive grade A if and only if Jacky receives grade A in Mathematics and Logic.
• Rita will make noodles on Sunday if and only if you do not have any digestive problems.
• The class will be taken today if and only if the number of students is more than ten.
• I will purchase another cell phone if and only if my current phone becomes inactive.

The proposition P → Q has certain related propositions:

• The proposition Q → P is called the converse of P → Q.
• The proposition ¬Q → ¬P is called the contrapositive of P → Q.

For each logical expression, a truth table can be formed. Consider the following seven expressions; the respective truth tables for expressions 1–7 are provided in Tables 4.8–4.14, respectively.


Expression 1: (P → Q) → (Q → P)
Expression 2: (¬P ⇔ ¬Q) ⇔ (P ⇔ Q)
Expression 3: (P ⊕ Q) ∨ (P ⊕ ¬Q)
Expression 4: (P → Q) ∧ (¬P → R)
Expression 5: ((P ∧ ¬Q) ∨ (¬P ∨ ¬Q ∧ R)) → (¬P ∧ Q)
Expression 6: (¬P ⇔ ¬Q) ⇔ (Q ⇔ R)
Expression 7: ((P → Q) → R) → S

TABLE 4.8 Truth Table for Expression (P → Q) → (Q → P)
P   Q   (P → Q)   (Q → P)   (P → Q) → (Q → P)
T   T   T         T         T
T   F   F         T         T
F   T   T         F         F
F   F   T         T         T

TABLE 4.9 Truth Table for Expression (¬P ⇔ ¬Q) ⇔ (P ⇔ Q)
P   Q   ¬P   ¬Q   (¬P ⇔ ¬Q)   (P ⇔ Q)   (¬P ⇔ ¬Q) ⇔ (P ⇔ Q)
T   T   F    F    T           T         T
T   F   F    T    F           F         T
F   T   T    F    F           F         T
F   F   T    T    T           T         T

TABLE 4.10 Truth Table for Expression (P ⊕ Q) ∨ (P ⊕ ¬Q)
P   Q   (P ⊕ Q)   ¬Q   (P ⊕ ¬Q)   (P ⊕ Q) ∨ (P ⊕ ¬Q)
T   T   F         F    T          T
T   F   T         T    F          T
F   T   T         F    F          T
F   F   F         T    T          T

TABLE 4.11 Truth Table for Expression (P → Q) ∧ (¬P → R)
P   Q   R   (P → Q)   ¬P   (¬P → R)   (P → Q) ∧ (¬P → R)
T   T   T   T         F    T          T
T   T   F   T         F    T          T
T   F   T   F         F    T          F
T   F   F   F         F    T          F
F   T   T   T         T    T          T
F   T   F   T         T    F          F
F   F   T   T         T    T          T
F   F   F   T         T    F          F


TABLE 4.12 Truth Table for Expression ((P ∧ ¬Q) ∨ (¬P ∨ Q ∧ R)) → (¬P ∧ Q)
P   Q   R   ¬Q   ¬P   (¬P ∨ Q ∧ R)   (P ∧ ¬Q) ∨ (¬P ∨ Q ∧ R)   (¬P ∧ Q)   ((P ∧ ¬Q) ∨ (¬P ∨ Q ∧ R)) → (¬P ∧ Q)
T   T   T   F    F    F              F                          F          T
T   T   F   F    F    F              F                          F          T
T   F   T   T    F    T              T                          F          F
T   F   F   T    F    F              T                          F          F
F   T   T   F    T    T              T                          T          T
F   T   F   F    T    F              F                          T          T
F   F   T   T    T    T              T                          F          F
F   F   F   T    T    F              F                          F          T

TABLE 4.13 Truth Table for Expression (¬P ⇔ ¬Q) ⇔ (Q ⇔ R)
P   Q   R   ¬P   ¬Q   (¬P ⇔ ¬Q)   (Q ⇔ R)   (¬P ⇔ ¬Q) ⇔ (Q ⇔ R)
T   T   T   F    F    T           T         T
T   T   F   F    F    T           F         F
T   F   T   F    T    F           F         T
T   F   F   F    T    F           T         F
F   T   T   T    F    F           T         F
F   T   F   T    F    F           F         T
F   F   T   T    T    T           F         F
F   F   F   T    T    T           T         T

TABLE 4.14 Truth Table for Expression ((P → Q) → R) → S
P   Q   R   S   (P → Q)   ((P → Q) → R)   ((P → Q) → R) → S
T   T   T   T   T         T               T
T   T   T   F   T         T               F
T   T   F   T   T         F               T
T   T   F   F   T         F               T
T   F   T   T   F         T               T
T   F   T   F   F         T               F
T   F   F   T   F         T               T
T   F   F   F   F         T               F
F   T   T   T   T         T               T
F   T   T   F   T         T               F
F   T   F   T   T         F               T
F   T   F   F   T         F               T
F   F   T   T   T         T               T
F   F   T   F   T         T               F
F   F   F   T   T         F               T
F   F   F   F   T         F               T


Among these truth tables, the results in Tables 4.9 and 4.10 are all true. Thus, the respective logical expressions, (¬P ⇔ ¬Q) ⇔ (P ⇔ Q) (Table 4.9) and (P ⊕ Q) ∨ (P ⊕ ¬Q) (Table 4.10), are called tautologies: if the results of a logical expression are all true, then that expression is a Tautology. On the contrary, if the results of a logical expression are all false, then that expression is a Contradiction. The expressions for Tables 4.8 and 4.11–4.14 are therefore neither tautologies nor contradictions. In general, a proposition can have various natures; in addition to Tautology, these are:

• Tautology
• Contradiction
• Contingency
• Valid
• Invalid
• Falsifiable
• Unfalsifiable
• Satisfiable
• Unsatisfiable

Among these, Tautology has already been defined with examples. A Contradiction is a compound proposition that always gives false as output for any values of input; an example is shown in Table 4.15. A compound proposition is known as a Contingency if it is neither a Tautology nor a Contradiction; thus, Table 4.11 shows an example of a Contingency. A compound proposition is valid only if it is a Tautology, whereas it is invalid if it is not a Tautology. A compound proposition is falsifiable if it can be made false for some values of its propositional variables, and unfalsifiable if it can never be made false for any values of its propositional variables. A compound proposition is satisfiable if it can be made true for some values of its propositional variables, and unsatisfiable if it cannot be made true for any values of its propositional variables. Therefore, the following points should be remembered:

• All contradictions are invalid.
• All contradictions are falsifiable.
• All tautologies are valid.
• All tautologies are unfalsifiable.
• All contingencies are invalid.
• All contingencies are satisfiable.
• All contradictions are unsatisfiable.
• All contingencies are falsifiable.

The above can be proved for any logical expression either by a truth table, as shown above, or algebraically. Consider the expression (P ∧ Q) ∧ ¬(P ∨ Q). This expression is a contradiction, as shown in Table 4.15, since the results are all false ("F").

TABLE 4.15 Truth Table for Expression (P ∧ Q) ∧ ¬(P ∨ Q)
P   Q   (P ∧ Q)   (P ∨ Q)   ¬(P ∨ Q)   (P ∧ Q) ∧ ¬(P ∨ Q)
T   T   T         T         F          F
T   F   F         T         F          F
F   T   F         T         F          F
F   F   F         F         T          F
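A brute-force check over all truth assignments classifies any such expression automatically. The sketch below (the function name `classify` is ours, not the book's) applies this idea to the expression of Table 4.15.

```python
# Hedged sketch: classify a propositional formula as a tautology,
# contradiction, or contingency by evaluating it on every assignment.
from itertools import product

def classify(formula, n_vars):
    values = [formula(*assignment)
              for assignment in product([True, False], repeat=n_vars)]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingency"

expr = lambda p, q: (p and q) and not (p or q)   # (P ∧ Q) ∧ ¬(P ∨ Q)
print(classify(expr, 2))                         # contradiction
```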


However, before showing how to prove whether a logical expression is a tautology or a contradiction, the concept of propositional equivalence must be made clear; understanding tautologies, contradictions, and contingencies is part of this concept. Two expressions are said to be logically equivalent if they produce the same results for every assignment of truth values. Consider the following pairs of expressions; the respective truth tables are shown in Tables 4.16–4.18.

Expression 1: (P ⇔ Q) and (P ∧ Q) ∨ (¬P ∧ ¬Q)
Expression 2: ¬(P ⊕ Q) and (P ⇔ Q)
Expression 3: ¬(P ⇔ Q) and (¬P ⇔ Q)

Table 4.16 shows that the results of (P ⇔ Q) and (P ∧ Q) ∨ (¬P ∧ ¬Q) are the same, and similar conclusions can be drawn from Tables 4.17 and 4.18; therefore, all three tables show logical equivalences. Some important equivalences are listed in Table 4.19 (Prospesel, 1998). In addition, it can be proved that P → Q and ¬P ∨ Q are equivalent. Using this equivalence and the logical equivalences listed in Table 4.19, it can be proved algebraically that an expression is a tautology or a contradiction; the expressions and their respective algebraic proofs are given below.

TABLE 4.16 Truth Table for Expressions (P ⇔ Q) and (P ∧ Q) ∨ (¬P ∧ ¬Q)
P   Q   (P ⇔ Q)   ¬P   ¬Q   (¬P ∧ ¬Q)   (P ∧ Q)   (P ∧ Q) ∨ (¬P ∧ ¬Q)
T   T   T         F    F    F           T         T
T   F   F         F    T    F           F         F
F   T   F         T    F    F           F         F
F   F   T         T    T    T           F         T

TABLE 4.17 Truth Table for Expressions ¬(P ⊕ Q) and (P ⇔ Q)
P   Q   (P ⊕ Q)   ¬(P ⊕ Q)   (P ⇔ Q)
T   T   F         T          T
T   F   T         F          F
F   T   T         F          F
F   F   F         T          T

TABLE 4.18 Truth Table for Expressions ¬(P ⇔ Q) and (¬P ⇔ Q)
P   Q   (P ⇔ Q)   ¬(P ⇔ Q)   ¬P   (¬P ⇔ Q)
T   T   T         F          F    F
T   F   F         T          F    T
F   T   F         T          T    T
F   F   T         F          T    F


TABLE 4.19 Important Logical Equivalences
Identity law: P ∧ T ⇔ P;  P ∨ F ⇔ P
Law of domination: P ∨ T ⇔ T;  P ∧ F ⇔ F
Idempotent law: P ∨ P ⇔ P;  P ∧ P ⇔ P
Double negation law: ¬(¬P) ⇔ P
Commutative law: P ∨ Q ⇔ Q ∨ P;  P ∧ Q ⇔ Q ∧ P
Associative law: (P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R);  (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R)
Distributive law: P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R);  P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R)
De Morgan's law: ¬(P ∧ Q) ⇔ ¬P ∨ ¬Q;  ¬(P ∨ Q) ⇔ ¬P ∧ ¬Q
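Any row of Table 4.19 can be spot-checked by brute force: two formulas are logically equivalent when they agree on every assignment. The short sketch below (the helper `equivalent` is ours) checks one De Morgan law and the equivalence of P → Q with ¬P ∨ Q used in the proofs that follow.

```python
# Sketch: truth-table test of logical equivalence for two-variable formulas.
from itertools import product

def equivalent(f, g, n_vars=2):
    return all(f(*a) == g(*a)
               for a in product([True, False], repeat=n_vars))

# De Morgan: ¬(P ∧ Q) ⇔ ¬P ∨ ¬Q
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q)))       # True

# P → Q ⇔ ¬P ∨ Q
implies = lambda p, q: q if p else True
print(equivalent(implies, lambda p, q: (not p) or q))    # True
```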

Example 1: Prove that the expression (P ∧ Q) ∧ ¬(P ∨ Q) is a contradiction. The respective proof is provided below.

(P ∧ Q) ∧ ¬(P ∨ Q)
= (P ∧ Q) ∧ (¬P ∧ ¬Q)    (by De Morgan's law)
= P ∧ Q ∧ ¬P ∧ ¬Q
= (P ∧ ¬P) ∧ (Q ∧ ¬Q)
= F ∧ F    (since P ∧ ¬P ⇔ F)
= F    (by the idempotent law)

Hence the expression is a contradiction.

Example 2: Prove that the expression (P ∧ Q) → (P ∨ Q) is a tautology. The respective proof is provided below.

(P ∧ Q) → (P ∨ Q)
= ¬(P ∧ Q) ∨ (P ∨ Q)    (since P → Q and ¬P ∨ Q are equivalent, as mentioned before)
= ¬P ∨ ¬Q ∨ P ∨ Q    (by De Morgan's law)
= (P ∨ ¬P) ∨ (Q ∨ ¬Q)
= T ∨ T    (since P ∨ ¬P ⇔ T)
= T    (by the idempotent law)

Hence the expression is a tautology.

Example 3: Prove that the expression [¬P ∧ (P ∨ Q)] → Q is a tautology. The respective proof is provided below.

[¬P ∧ (P ∨ Q)] → Q
= [(¬P ∧ P) ∨ (¬P ∧ Q)] → Q    (by the distributive law)
= [F ∨ (¬P ∧ Q)] → Q    (since ¬P ∧ P ⇔ F)
= (¬P ∧ Q) → Q    (by the identity law)
= ¬(¬P ∧ Q) ∨ Q    (since P → Q and ¬P ∨ Q are equivalent, as mentioned before)
= ¬(¬P) ∨ ¬Q ∨ Q    (by De Morgan's law)
= P ∨ (Q ∨ ¬Q)    (by the double negation law)
= P ∨ T    (since Q ∨ ¬Q ⇔ T)
= T    (by the law of domination)

Hence the expression is a tautology. These logical equivalences and predicate logic in general can be used to express or translate English sentences as logical expressions. Consider the following examples.

Example 4: Some students like Operations Research. If they like Operations Research, then they like Mathematics.

Answer: Suppose, P(x): Student likes Operations Research; Q(x): Student likes Mathematics. Thus, the above example can be expressed by the following logical expression, which uses an existential quantifier:

∃xP(x) ∧ [P(x) → Q(x)]

Example 5:

All students have scored above 70% marks. Some of them do not like Statistics.

Answer: Suppose, P( x ) : Student has scored above 70% marks. Q( x ): Student likes Statistics.

Thus, the above example can be expressed by the following logical expression, which uses both universal and existential quantifiers:

∀xP(x) ∧ ∃x(¬Q(x))

Example 6:

Tomorrow will be either rainy or sunny. If it is rainy, then Tom will go to a nearby cinema hall, and if it is sunny, then Tom will travel to a nearby park.

Answer: Suppose, P(x): Tomorrow is rainy; Q(x): Tomorrow is sunny; R(y): Tom will go to a nearby cinema hall; S(y): Tom will travel to a nearby park. Thus, the above example can be expressed by the following logical expression:

(P(x) ∨ Q(x)) ∧ [(P(x) → R(y)) ∧ (Q(x) → S(y))]

Example 7: Harry will have beef if his stomach is all right. Otherwise, Harry will have rice.

Answer: Suppose, P(x): Harry will have beef; Q(y): Harry's stomach is all right; R(x): Harry will have rice. Thus, the above example can be expressed by the following logical expression:

(Q(y) → P(x)) ∧ (¬Q(y) → R(x))

Example 8: Shannon will play Basketball and Mira will play Tennis if Tom does not play Basketball and Ray does not play Tennis.

Answer: Suppose, P(u): u will play Basketball and Q(u): u will play Tennis, where x represents Shannon, y represents Mira, z represents Tom, and m represents Ray. Thus, the above example can be expressed as:

(¬P(z) ∧ ¬Q(m)) → (P(x) ∧ Q(y))

If negation is applied to a universal or existential quantifier, the result is as follows: ¬∃xP(x) is equivalent to ∀x¬P(x), and ¬∀xP(x) is equivalent to ∃x¬P(x). Conceptually, saying "there is no x for which P(x) is true" is the same as saying "P(x) is false for every x", and saying "P(x) is not true for all x" is the same as saying "there is some x for which P(x) is false". Mathematically, this can be expressed as:

¬∃xP(x) ⇔ ∀x¬P(x)
¬∀xP(x) ⇔ ∃x¬P(x)
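Over a finite domain, these two equivalences can be checked directly with Python's `any()` and `all()`. The sketch below uses an arbitrary, invented domain and predicate purely for illustration.

```python
# Tiny check of ¬∃x P(x) ⇔ ∀x ¬P(x) and ¬∀x P(x) ⇔ ∃x ¬P(x)
# over an invented finite domain.
domain = range(10)
P = lambda x: x % 3 == 0

assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)
assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)
print("both quantifier-negation equivalences hold on this domain")
```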


However, Keisler and Robbin (1996) differentiated between pure predicate logic and full predicate logic. The most important issue here is to understand and apply predicate and propositional logic and calculus; this chapter does not emphasize pure and full predicate logic since they are not the focus of this chapter or of the book as a whole. It is also necessary to verify whether a logical expression is a well-formed formula. A formula must contain a connective such as ∧, ∨, →, ⇔, or ¬ (Goldrei, 2005). The easiest way to verify whether a logical expression is a formula is to construct a tree diagram. Consider the following expressions, whose tree diagrams are shown in Figures 4.1–4.3 respectively. The figures and expressions show clearly that the brackets in an expression are important in order to separate its parts and to identify its main connective.

Expression 1: ¬(¬P ∨ Q)
Expression 2: ((P ∧ Q) → (¬P ⇔ Q))
Expression 3: ¬((¬P ∧ (P ∨ ¬Q)) ⇔ ¬R)

FIGURE 4.1  Tree diagram for expression ¬(¬P ∨ Q).

FIGURE 4.2  Tree diagram for expression ((P ∧ Q) → (¬P ⇔ Q)).

FIGURE 4.3  Tree diagram for expression ¬((¬P ∧ (P ∨ ¬Q)) ⇔ ¬R).


Each branch in these tree diagrams represents a subformula. Thus, the subformulas for Figure 4.3 are ¬((¬P ∧ (P ∨ ¬Q)) ⇔ ¬R), (¬P ∧ (P ∨ ¬Q)), ¬P, P, (P ∨ ¬Q), ¬Q, Q, ¬R, and R. By the usual rule of mathematics, the number of left brackets must equal the number of right brackets in order to make a valid formula. A formula may be short or long; its length can be measured by the number of symbols used, by the length of the longest branch of its tree diagram, or, most commonly, by the number of connectives it contains. In terms of the number of connectives, the lengths of the formulas in Figures 4.1–4.3 are 3 (connectives used: ¬, ¬, and ∨), 4 (connectives used: ∧, →, ¬, and ⇔), and 7 (connectives used: ¬, ¬, ∧, ∨, ¬, ⇔, and ¬) respectively. Predicate calculus is also known by other names, such as predicate logic, first-order logic, elementary logic, restricted predicate calculus, restricted functional calculus, relational calculus, the theory of quantification, and the theory of quantification with equality. It has made a significant contribution to the field of Computer Science. Predicate calculus can be thought of as a generalization of propositional calculus, and it can be used, for example, to check the correctness of a program fragment. The following section surveys various advancements in the field of predicate and propositional logic.
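Both checks mentioned above (bracket balance and connective count) are easy to mechanize. The following rough sketch treats the formula as a plain string; it is not a full parser, and the function names are ours.

```python
# Sketch: bracket balance and formula length measured in connectives.
CONNECTIVES = set("¬∧∨→⇔⊕")

def balanced(formula: str) -> bool:
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # a ')' appeared before its '('
                return False
    return depth == 0

def length(formula: str) -> int:
    return sum(1 for ch in formula if ch in CONNECTIVES)

expr = "¬(¬P ∨ Q)"                 # Expression 1
print(balanced(expr), length(expr))   # True 3
```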

4.2 LATEST RESEARCH STUDIES ON PREDICATE AND PROPOSITIONAL LOGIC

Console et al. (2022) applied predicate and propositional logic to deal with incomplete information in relational databases; in their paper, different pieces of partial knowledge are represented by different propositions. Wang et al. (2009) applied first-order predicate logic to develop a process plan for complex parts in a manufacturing process; simulation experiments were performed to verify the effectiveness of the proposed approach, which proved to be effective. Ikram and Qamar (2015) applied predicate logic to predict earthquakes based on historical data; the basic target was to overcome uncertainties in the prediction of source initiation, the rupture phenomenon, and the accuracy of earthquake prediction, and the authors developed an expert system for the purpose. Treur (2009) applied temporal predicate logic to develop a restricted, executable modeling language format for temporal specifications along with a simulation. Yang et al. (2004) compared three knowledge representation techniques, namely first-order predicate logic, fuzzy logic, and nonmonotonic logic; the comparison was based on five aspects of knowledge representation (conceptualization, transfer, modification, integration, and decomposition) and considered the correctness, complexity, and completeness of the three methods. Hájek (2009) performed a detailed survey on the complexity of fuzzy predicate logic. Groote and Tveretina (2003) proposed a modified version of the binary decision diagram for representing formulas in first-order predicate logic. Lunze and Schiller (1993) applied predicate logic to fault diagnosis problems; the proposed method was based on a qualitative model of the dynamic system, and the recorded causes of previous faults were used, together with predicate logic, to handle future fault diagnosis problems. Some other significant research studies are listed in Table 4.20, which also shows various applications of predicate logic in different fields of study.

4.3 CONCLUSION

This is a brief chapter on Predicate Logic. For this reason, the Introduction section is substantial and covers almost all of the basic concepts. The introduction is followed by the latest trends in this area of study.


TABLE 4.20 Some Research Studies on Predicate Logic

Kannapan and Marshek (1990): Applied algebraic and predicate logic for machine design; the proposed method was applied to design verification, selection, and synthesis.
Bhushan et al. (2021): Applied first-order predicate logic for classifying various software product line redundancies; the ontological first-order logic was applied in a topological form, and the proposed method is a two-step method.
Ott et al. (2018): Investigated the effect of multiple representations on various participants; a total of 146 different participants were taken for the questionnaire.
Creignou et al. (2014): Investigated belief revision within fragments of propositional logic.
Quispe-Cruz et al. (2016): Noting that traditional proofs in propositional logic may be quite large, proposed a strong normalization theory for proofs of propositional logic and the use of proof graphs.
Hartmann and Link (2008): Investigated Boolean and multivalued dependencies in relational databases; the proposed approach helped to characterize data dependencies in nested databases.
Li and Zhou (2012): Proposed a quantitative method based on Gödel's logic system reasoning theory for propositional logic.
Cravo (2010): Applied propositional logic to identify the natural flow in a workflow; the basic purpose was to investigate the structure of workflows, and necessary and sufficient conditions for the termination of workflows were also proposed.
Efstathiou and Hunter (2011): A logic-based argument has the syntax (φ, α), where φ is the minimal subset of knowledge and α is the claim; the authors investigated different ways of formalizing arguments and counterarguments.
Bedregal and Cruz (2006): Proposed a fuzzy orientation of propositional logic and investigated fuzzy interpretations of various connectives.
Mateus and Sernadas (2006): Proposed an axiomatization for exogenous quantum propositional logic.
Savinov (1993): Proposed, for the first time, fuzzy propositional logic, which considers fuzzy propositional variables and explains interpretations and inferences in a completely different way from ordinary propositional logic.

REFERENCES Bedregal, B. R. C., Cruz, A. P. (2006). Propositional logic as a propositional fuzzy logic. Electronic Notes in Theoretical Computer Science, 143, 5–12. Bhushan, M., Duarte, J. Á. G., Samant, P., Kumar, A., Negi, A. (2021). Classifying and resolving software product line redundancies using an ontological first-order logic rule based method. Expert Systems with Applications, 168, 114167. Cravo, G. (2010). Applications of propositional logic to workflow analysis. Applied mathematics letters, 23(3), 272–276. Creignou, N., Papini, O., Pichler, R., Woltran, S. (2014). Belief revision within fragments of propositional logic. Journal of Computer and System Sciences, 80(2), 427–449. Console, M., Guagliardo, P., Libkin, L. (2022). Propositional and predicate logics of incomplete information. Artificial Intelligence, 302, 103603. Efstathiou, V., Hunter, A. (2011). Algorithms for generating arguments and counterarguments in propositional logic. International Journal of Approximate Reasoning, 52(6), 672–704. Goldrei, D. (2005). Propositional and Predicate Calculus: A Model of Argument. Springer, USA.


Groote, J. F., Tveretina, O. (2003). Binary decision diagrams for first-order predicate logic. The Journal of Logic and Algebraic Programming, 57(1–2), 1–22. Hájek, P. (2009). Arithmetical complexity of fuzzy predicate logics—a survey II. Annals of Pure and Applied Logic, 161(2), 212–219. Hartmann, S., Link, S. (2008). Characterising nested database dependencies by fragments of propositional logic. Annals of Pure and Applied Logic, 152(1–3), 84–106. Ikram, A., Qamar, U. (2015). Developing an expert system based on association rules and predicate logic for earthquake prediction. Knowledge-Based Systems, 75, 87–103. Kannapan, S. M., Marshek, K. M. (1990). An algebraic and predicate logic approach to representation and reasoning in machine design. Mechanism and Machine Theory, 25(3), 335–353. Keisler, H. J., Robbin, J. (1996). Mathematical Logic and Computability (International Series in Pure & Applied Mathematics). Mc-Graw Hill Education (ISE edition), India. Li, J., Zhou, Y. (2012). A Quantitative method of n-valued Gödel propositional logic. Procedia Environmental Sciences, 12, 583–589. Lunze, J., Schiller, F. (1993). Fault diagnosis by means of a predicate logic description of the dynamical process. IFAC Proceedings Volumes, 26(2), 569–572. Mateus, P., Sernadas, A. (2006). Weakly complete axiomatization of exogenous quantum propositional logic. Information and Computation, 204(5), 771–794. Ott, N., Brünken, R., Vogel, M., Malone, S. (2018). Multiple symbolic representations: The combination of formula and text supports problem solving in the mathematical field of propositional logic. Learning and Instruction, 58, 88–105. Prospesel, H. (1998). Introduction to Logic: Propositional Logic, 3rd Edition. Prentice Hall, NJ. Quispe-Cruz, M., Haeusler, E., Gordeev, L. (2016). On strong normalization in proof-graphs for propositional logic. Electronic Notes in Theoretical Computer Science, 323, 181–196. Rosen, K. H. (1998). Discrete Mathematics and Its Applications. McGraw Hill, China. Savinov, A. A. (1993). Fuzzy propositional logic. Fuzzy Sets and Systems, 60(1), 9–17. Treur, J. (2009). Past–future separation and normal forms in temporal predicate logic specifications. Journal of Algorithms, 64(2–3), 106–124. Wang, Z., Du, P., Yu, Y. (2009). An intelligent modeling and analysis method of manufacturing process using the first-order predicate logic. Computers & Industrial Engineering, 56(4), 1559–1565. Yang, K. H., Olson, D., Kim, J. (2004). Comparison of first order predicate logic, fuzzy logic and non-monotonic logic as knowledge representation methodology. Expert Systems with Applications, 27(4), 501–519.

5

Fuzzy Theory and Fuzzy Logic

5.1  BASIC CONCEPTS

Fuzzy theory is a widely applied concept for dealing with uncertainty. The very first method to deal with uncertainty was undoubtedly the theory of probability; the concept of fuzzy theory appeared much later. Over several decades, the fuzzy orientation has become very popular and applicable in various research studies and practical applications. While discussing fuzzy theory, the first concept that should be discussed is classical set theory. Basically, a set is a collection of similar types of objects or entities. A very simple example of a set is given below.

A = {1,2,3,4,5,6,7,8,9,10}    (5.1)



A member of a set is said to belong to that set. For example, elements 1, 2, …, 10 all belong to set A and are represented by, 6 ∈ A, for instance. If an element does not belong to a set, then that is represented by, 11 ∉ A, meaning that element “11” does not belong to set A. A subset is a part of an entire set and is represented by B ⊂ A. The subset can be of two types – normal and proper subsets. The difference between these two is that a subset of a set can be equal to that set, but a proper (or strict) subset of a set can never be equal to that set. Normal and proper subsets can be represented by B ⊆ A and B ⊂ A, respectively. For example, consider sets B, C, and D as given by (expressions 5.2–5.4), respectively. In this case, it can be inferred, based on the definitions – B ⊂ A, C ⊆ A and D ⊇ A. The symbol D ⊇ A means that the set D is a superset of A.

B = {4,5,6,7} (5.2)



C = {1,2,3,4,5,6,7,8,9,10} (5.3)



D = {−1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,20} (5.4)

These sets are called crisp sets in which the vagueness in the languages to describe some characteristics of entities cannot be represented. Various set operations can be applied on a crisp set, such as union, intersection, and complement. These are explained by the examples given below. Consider two sets P and Q as shown below. The union of sets P and Q consists of all elements from both sets. The intersection of sets P and Q consists of only the common elements of these two sets. A complement is found on the basis of a universal set. Consider the sets as shown in (expressions 5.1 and 5.2). The complement of B as given in (expression 5.2) is shown in (expression 5.9). The union and intersection of the sets as shown in (expressions 5.5 and 5.6) are shown in (expressions 5.7 and 5.8), respectively.

P = {Simba, Tom, Susmita, Ray, Henrieta, Harry, John, Sam, Chang} (5.5)



Q = {Susmita, Sam, Tom, Demy, Julia, Monalisa} (5.6)



R = P ∪ Q = {Simba, Tom, Susmita, Ray, Henrieta, Harry, John, Sam, Chang, Demy, Julia, Monalisa}    (5.7)




R = P ∩ Q = {Susmita, Sam, Tom} (5.8)



B′ = {1, 2, 3, 8, 9, 10}    (5.9)

A = {x | 0 < x ≤ 10 and x is an even number} = {2, 4, 6, 8, 10}    (5.10)

χ_A : X → {0, 1}    (5.11)
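As a quick illustration (not from the book), the characteristic function χ_A of the set in expression (5.10) can be written directly in Python; the function name `chi_A` is ours.

```python
# Sketch: characteristic function of the crisp set A = {2, 4, 6, 8, 10}.
A = {x for x in range(1, 11) if x % 2 == 0}

def chi_A(x):
    """Return 1 if x belongs to A, otherwise 0."""
    return 1 if x in A else 0

print([chi_A(x) for x in range(1, 11)])   # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```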

Various properties of crisp sets are given in Table 5.1; here, U and φ represent the universal and null (empty) sets, respectively, and A′ denotes the complement of A. A set can also be represented by defining some conditions, as shown in expression (5.10), and a crisp set can be defined through a characteristic function, as shown in expression (5.11). Just as the characteristic function represents a crisp set, a membership function defines a fuzzy set. The fuzzy set was first proposed by Dr. Lotfi Zadeh in 1965, in a paper named "Fuzzy Sets" published in the journal Information and Control (Zadeh, 1965). However, it first came into practical use when Ebrahim Mamdani of the University of London applied fuzzy logic to the control of a steam engine in 1974 (Mamdani, 1974); before that, Zadeh (1973) had applied his own theory to complex decision processes. In order to understand the relevance of fuzzy theory, consider a linguistic example related to the height of men, which can be tall, short, or medium. The definitions of tall, medium height, and short differ among countries, states, and regions; thus, a particular value cannot define the linguistic term "tall". While defining such a concept, a percentage or fraction can be attached to convey how certain the linguistic term "tall" is, and such a representation can be expressed through a fuzzy set. The related theory is called fuzzy theory.

TABLE 5.1 Characteristics of Crisp Sets
Idempotent law: A ∪ A = A;  A ∩ A = A
Commutative law: A ∪ B = B ∪ A;  A ∩ B = B ∩ A
Associative law: A ∪ (B ∪ C) = (A ∪ B) ∪ C;  A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributive law: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C);  A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Law of double negation: (A′)′ = A
De Morgan's law: (A ∪ B)′ = A′ ∩ B′;  (A ∩ B)′ = A′ ∪ B′
Law of contradiction: A ∩ A′ = φ
Law of excluded middle: A ∪ A′ = U
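Two rows of Table 5.1 can be verified quickly with Python's built-in set type; the sets below are invented examples and the universal set U is chosen arbitrarily for the complements.

```python
# Quick sketch: De Morgan's laws for crisp sets on a small universe.
U = set(range(1, 11))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

complement = lambda S: U - S
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
print("De Morgan's laws hold for these crisp sets")
```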


TABLE 5.2 Degree of "Tall" and "Moderate Height"
Set               Tom   Harry   Ray   Moni   Bentinc   Suzi
Tall              0.3   0.9     0.7   0.8    0.6       0.1
Moderate height   0.8   0.2     0.6   0.8    0.4       0.9

However, the degree of tallness can vary from tall, moderately tall to very tall, as an instance, since the concept of tallness varies across different places. The degree “0” means that the person is not tall at all and the degree of “1” means that the person belongs completely to the set of “tall” people. Thus, the sets can be taken as “tall” and “moderate height”, and instances of various degrees for six different persons are shown in Table 5.2. In the crisp set, the characteristic function for each member has a value of either 0 or 1. But the membership function in fuzzy is expressed in fractions or percentages. This is the basic difference between crisp and fuzzy sets. Thus, the membership function for a fuzzy set can be defined by expression (5.12).

µ_A : X → [0, 1]    (5.12)



Here, µ A ( x ) is the degree of membership for fuzzy set A or the grade of membership for x ∈ X . The membership value indicates the degree of x belonging to the fuzzy set A (Tanaka, 1991). The difference between the characteristic and member functions can be depicted through an example. Consider the example of height as stated above. Suppose, the heights of four persons are: • • • •

• 80 inches – for Mr. X
• 73 inches – for Mr. Y
• 70 inches – for Mr. Z
• 60 inches – for Mr. K

The heights of these four persons are depicted in Figure 5.1 and Table 5.3. Here, Mr. X and Mr. Y fall in the category of "tall" persons, Mr. Z has medium height, and Mr. K is short. However, this particular decision may be dissatisfying to Mr. Z, since his category is "medium height" whereas the difference between Mr. Y, a tall person, and Mr. Z, a medium-height person, is only 3 inches. This dissatisfaction can be lessened to some extent through the consideration of a fuzzy concept. Table 5.4 and Figure 5.2 show that the grade for Mr. Y to belong to the "tall" category is 0.7 and to the "medium height" category is 0.3. Similarly, the grades for Mr. Z to belong to the "medium height" and "tall" categories are 0.6 and 0.4, respectively; similar explanations apply to the other two instances. The membership functions change to those of Figure 5.2 as a result of the fuzzy consideration. Thus, a fuzzy set can be thought of as an extension of a crisp set.

FIGURE 5.1  Crisp representation of height example.


TABLE 5.3 Values of Characteristic Function for Height of Four Persons
        Height (inches)   Short   Medium Height   Tall
Mr. X   80                0       0               1
Mr. Y   73                0       0               1
Mr. Z   70                0       1               0
Mr. K   60                1       0               0

TABLE 5.4 Values of Membership Function for Height of Four Persons
        Height (inches)   Short   Medium Height   Tall
Mr. X   80                0       0.1             0.9
Mr. Y   73                0       0.3             0.7
Mr. Z   70                0       0.6             0.4
Mr. K   60                0.8     0.2             0

FIGURE 5.2  Fuzzy representation of height example.

A fuzzy set A on universe U can be represented as shown in (expression 5.13). If U is an infinite set, then the fuzzy set A in U can be shown through expression (5.14).

A = µ_A(u₁)/u₁ + µ_A(u₂)/u₂ + ... + µ_A(uₙ)/uₙ = Σᵢ µ_A(uᵢ)/uᵢ    (5.13)

A = ∫_U µ_A(u)/u = ∫_Universe (membership function / element variable)    (5.14)

Fuzzy sets can be of various types, such as triangular, trapezoidal, and exponential fuzzy sets. Figures 5.3–5.5 represent the triangular, trapezoidal, and exponential sets respectively, and expressions (5.15)–(5.17) give corresponding examples.

A = ∫₋₃⁰ ((3 + x)/2)/x + ∫₀³ ((3 − x)/2)/x    (5.15)

FIGURE 5.3  Triangular fuzzy set.

FIGURE 5.4  Trapezoidal fuzzy set.

FIGURE 5.5  Exponential fuzzy set.

A = ∫₋₅⁻⁴ ((5 + x)/2)/x + ∫₋₄⁴ 1/x + ∫₄⁵ ((5 − x)/2)/x    (5.16)

A = ∫ₓ e^(−0.5(x − 4)²)/x    (5.17)
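Generic triangular and trapezoidal membership functions are easy to code. The sketch below is hedged: the parameters only mirror the supports used in expressions (5.15) and (5.16), not necessarily the book's exact normalisation, and the function names are ours.

```python
# Sketch: generic triangular and trapezoidal membership functions.
def triangular(x, a, b, c):
    """Membership rising on [a, b] and falling on [b, c]."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 0.0

def trapezoidal(x, a, b, c, d):
    """Membership rising on [a, b], flat on [b, c], falling on [c, d]."""
    if a < x < b:
        return (x - a) / (b - a)
    if b <= x <= c:
        return 1.0
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

print(triangular(0, -3, 0, 3), trapezoidal(0, -5, -4, 4, 5))   # 1.0 1.0
```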

A fuzzy set can be normal or convex. A fuzzy set is said to be normal if expression (5.18) is true, and convex if expression (5.19) is true; expression (5.19) means that the stated inequality holds for all x₁, x₂, and λ. The cardinality of a fuzzy set A, defined in expression (5.20), is the sum of its membership function values, and the relative cardinality is defined by expression (5.21).

max_{x∈U} µ_A(x) = 1    (5.18)

∀x₁, x₂ ∈ U and ∀λ ∈ [0, 1]:  µ_A(λx₁ + (1 − λ)x₂) ≥ min(µ_A(x₁), µ_A(x₂))    (5.19)

|A| = Σ_{x∈U} µ_A(x)    (5.20)

‖A‖ = |A| / |U|    (5.21)


Just like crisp set operations, the set operations, union, intersection, and complement operations can also be applied to fuzzy sets. Consider two fuzzy sets P and Q. The membership function for the union of these two fuzzy sets is given by expression (5.22). The intersection of these two fuzzy sets is given in expression (5.23) and the complement of the fuzzy set P is given by expression (5.24). The union and intersection of a fuzzy set can be represented graphically as shown in Figures 5.6 and 5.7 respectively.



  µP∪Q ( x ) = µP ( x ) ∨ µQ ( x ) =   



  µP∩Q ( x ) = µP ( x ) ∧ µQ ( x ) =   



µP ( x ),

µP ( x ) ≥ µQ ( x )

µQ ( x ),

µP ( x ) < µQ ( x )

µP ( x ),

µP ( x ) ≤ µQ ( x )

µQ ( x ),

µP ( x ) > µQ ( x )

(5.22)

(5.23)

µP ( x ) = 1 − µP ( x ) (5.24)

Consider that the two fuzzy sets P and Q are given by (expressions 5.25 and 5.26) respectively. The union and intersection of these two sets are shown in (expressions 5.27 and 5.28) respectively. However, fuzzy sets also have similar properties like crisp sets with very few differences. The properties of fuzzy sets are enlisted in Table 5.5. Suppose, P, Q, and R are fuzzy sets in the universe U.

P = {0,0.2,0.9,0.4,0.6} (5.25)



Q = {0.1,0.4,0.8,0.6,0.7} (5.26)



P ∪ Q = {0.1, 0.4, 0.9, 0.6, 0.7}    (5.27)

P ∩ Q = {0, 0.2, 0.8, 0.4, 0.6}    (5.28)
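A quick element-wise check of expressions (5.22)–(5.24) applied to the sets of expressions (5.25) and (5.26) can be written as follows; this is an illustrative sketch only, and floating-point results are approximate.

```python
# Sketch: fuzzy union (max), intersection (min), and complement (1 - µ).
P = [0, 0.2, 0.9, 0.4, 0.6]
Q = [0.1, 0.4, 0.8, 0.6, 0.7]

union        = [max(p, q) for p, q in zip(P, Q)]   # [0.1, 0.4, 0.9, 0.6, 0.7]
intersection = [min(p, q) for p, q in zip(P, Q)]   # [0, 0.2, 0.8, 0.4, 0.6]
complement_P = [1 - p for p in P]                  # ≈ [1, 0.8, 0.1, 0.6, 0.4]
print(union, intersection, complement_P)
```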

FIGURE 5.6  Union of two fuzzy sets P and Q.

FIGURE 5.7  Intersection of two fuzzy sets P and Q.


TABLE 5.5 Properties of Fuzzy Sets
Idempotent law: P ∪ P = P;  P ∩ P = P
Commutative law: P ∪ Q = Q ∪ P;  P ∩ Q = Q ∩ P
Associative law: P ∪ (Q ∪ R) = (P ∪ Q) ∪ R;  P ∩ (Q ∩ R) = (P ∩ Q) ∩ R
Distributive law: P ∪ (Q ∩ R) = (P ∪ Q) ∩ (P ∪ R);  P ∩ (Q ∪ R) = (P ∩ Q) ∪ (P ∩ R)
Law of double negation: (P′)′ = P
De Morgan's law: Not applicable to fuzzy sets, in general
Law of excluded middle: P ∪ P′ ≠ U
Law of contradiction: P ∩ P′ ≠ φ, where φ is the null (empty) set

Fuzzy sets P and Q are said to be equal if the membership function values are equal as shown in expression (5.29). The subset has a similarity with that of a crisp set. A fuzzy set P is a subset of another fuzzy set Q if the membership function value for P is less than or equal to that for fuzzy set Q. This is shown in expression (5.30). Another important concept related to fuzzy sets is strong α -cut and weak α -cut. Strong α -cut on a fuzzy set can be defined as those elements of P for which the membership function values are greater than α . If it is greater than or equal to α , then that is called a weak α -cut. Strong and weak α -cuts are shown in expressions (5.31 and 5.32) respectively where α ∈[0,1). For example, consider the fuzzy set P as shown in (expression 5.33). The strong cut P0.2 and the weak cut P 0.2 are given in (expressions 5.34 and 5.35) respectively. This membership function related to fuzzy set P can be decomposed into an infinite number of rectangular membership functions (α ∧ χ Aα or α ∧ χ Aα ). If these rectangular membership functions are aggregated, then the original fuzzy set P can be obtained. This is known as “Decomposition Principle”. For example, based on the fuzzy set as shown in (expression 5.33), different strong α -cuts are shown in (expression 5.36). Now, if these cuts are aggregated, then the original fuzzy set P can be obtained.

µP ( x ) = µQ ( x ) ⇔ P = Q (5.29)



µP ( x ) ≤ µQ ( x ) ⇔ P ⊂ Q (5.30)



Pα = {x | µ_P(x) > α}    (5.31)

P^α = {x | µ_P(x) ≥ α}    (5.32)



P = 0.2/3 + 0.4/4 + 0.5/6 + 0.7/8 + 0.8/10 (5.33)



P0.2 = {4,6,8,10} (5.34)
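The cuts are simple to compute programmatically. The sketch below reproduces the strong cut in expression (5.34) and, for comparison, the weak cut given next in expression (5.35); the helper names are ours.

```python
# Sketch: strong and weak α-cuts of the fuzzy set in expression (5.33).
P = {3: 0.2, 4: 0.4, 6: 0.5, 8: 0.7, 10: 0.8}

strong_cut = lambda alpha: {x for x, mu in P.items() if mu > alpha}
weak_cut   = lambda alpha: {x for x, mu in P.items() if mu >= alpha}

print(sorted(strong_cut(0.2)))   # [4, 6, 8, 10]
print(sorted(weak_cut(0.2)))     # [3, 4, 6, 8, 10]
```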


P^0.2 = {3, 4, 6, 8, 10}    (5.35)

P0.2 = {4, 6, 8, 10}
P0.3 = P0.4 = {6, 8, 10}
P0.5 = {8, 10}                  (5.36)
P0.6 = P0.7 = {10}

Based on the fuzzy set P, a fuzzy number can be defined. A fuzzy set P can be called a fuzzy number if the following conditions are satisfied:

• P is a convex set.
• µ_P(x₀) = 1 for only one element x₀.
• The membership function µ_P(x) is continuous.

There are many other concepts related to fuzzy logic and fuzzy theory. Many applications in the existing literature use the concepts of fuzzification and defuzzification; therefore, the following section describes fuzzification and defuzzification with specific examples.

5.2  FUZZIFICATION AND DEFUZZIFICATION

This section concerns the fuzzy inference system, which takes a fuzzy set as input and produces a fuzzy output. In many practical decision-making situations the output is uncertain in nature, and in those situations this concept is applicable in order to get appropriate results. A single-input inference system, which can be termed a fuzzy controller, is depicted in Figure 5.8 (Bede, 2013). There are different kinds of fuzzifiers depending on the type of fuzzy concept used; the most basic fuzzifier is canonical inclusion. The characteristic function for a singleton fuzzy set at x₀ ∈ U is defined in expression (5.37); if there is uncertainty in the crisp input, that is also considered at this stage through a non-singleton fuzzy input (Bede, 2013). Thus, the fuzzifier converts the crisp input into a fuzzy input, which is then fed to the inference system. The fuzzy inference system is based on a fuzzy rule base, which can be regarded as a fuzzy relation R(p, q).

P′(x) = χ_{x₀}(x) = 1 if x = x₀, and 0 if x ≠ x₀    (5.37)



FIGURE 5.8  Fuzzy controller: crisp input → fuzzification by fuzzifier → inference system (driven by the rule base) → defuzzification by defuzzifier → crisp output.


There are different kinds of fuzzy inference systems. One of them is the Mamdani inference system, which can be expressed by expression (5.38), where R is the fuzzy relation representing the rule base. After the inference system processes the fuzzy input, the next step is to convert the result back into crisp form; this is done by a defuzzifier. There are different types of defuzzifiers, and in general the choice depends on the type of the fuzzifier; examples include the Center of Gravity, Center of Area, Mean of Maxima (MOM), Expected Value and Expected Interval, and Maximum Criterion. For example, MOM is given by expression (5.39), where U is given by expression (5.40).

Q′(y) = (R ∘ P′)(x) = ∨_{x∈U} [P′(x) ∧ R(p, q)]    (5.38)

MOM(u) = ∫_{x∈U} x dx / ∫_{x∈U} dx    (5.39)

U = {x ∈ X | u(x) = max_{k∈X} u(k)}    (5.40)
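For a discretised membership function, the integrals in expression (5.39) reduce to averaging the points that attain the maximum membership. The sketch below is a hedged, discrete approximation of MOM; the sample values and the function name are invented for illustration.

```python
# Sketch: discrete Mean-of-Maxima defuzzifier.
def mean_of_maxima(membership):
    """membership: dict mapping crisp value -> membership grade."""
    peak = max(membership.values())
    maximisers = [x for x, mu in membership.items() if mu == peak]
    return sum(maximisers) / len(maximisers)

output = {1: 0.2, 2: 0.7, 3: 0.7, 4: 0.1}   # invented fuzzy output
print(mean_of_maxima(output))               # 2.5
```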

5.3  SOME ADVANCED FUZZY SETS

Among the different types of fuzzy sets, some frequently applied ones include the following; this section defines each of them in brief.

• L-Fuzzy sets
• Intuitionistic fuzzy sets
• Interval Type II fuzzy sets
• Fuzzy sets of Type 2

P ∧ Q( x ) = inf( P( x ), Q( x )) (5.41)



P ∨ Q( x ) = sup( P( x ), Q( x )) (5.42)



µP + ν P ≤ 1, ∀x ∈U (5.43)



P ∩ Q = (min( µ P , µQ ),max(ν P ,ν Q )) (5.44)



P ∪ Q = (max( µ P , µQ ),min(ν P ,ν Q )) (5.45)



P ∪ Q( x ) = [sup{P( x ), Q( x )},sup{P( x ), Q( x )}] (5.46)

62

Decision Support System



P ∩ Q( x ) = [inf{P( x ), Q( x )},inf{P( x ), Q( x )}] (5.47)



µ A : U → F[0,1] (5.48)



µ A ( x )(u) ∈[0,1], ∀x ∈U and u ∈[0,1] (5.49)

An intuitionistic fuzzy set can be defined as a mapping P = (µ_P, ν_P) on U that satisfies the condition shown in expression (5.43) (Bede, 2013), where µ_P is the degree of membership and ν_P is the degree of non-membership. The basic operations for this type of fuzzy set are provided in expressions (5.44) and (5.45). An Interval Type II fuzzy set can be defined as a mapping P : U → I[0,1] with P(x) = [P_L(x), P_U(x)], where P_L(x) and P_U(x) are the lower and upper bounds of the membership grade (Bede, 2013); the basic operations for this type of fuzzy set are provided in expressions (5.46) and (5.47). Type I fuzzy sets are the ordinary fuzzy sets discussed so far, whereas a Type II fuzzy set is a fuzzy set whose membership grades are themselves fuzzy sets on [0,1]; it can be defined through the membership function in expressions (5.48) and (5.49).

5.4 CONCLUSION

This chapter provides a very brief introduction to fuzzy sets and fuzzy logic. The field of fuzzy theory is vast, and representing it in a single chapter is nearly impossible. Thus, this chapter provided the basic concepts in Section 5.1, followed by the concepts of fuzzification and defuzzification, and the last section presented various types of advanced fuzzy sets. No case study is provided in this chapter since the purpose is only to convey the basic idea that fuzzy concepts can be applied in various decision-making situations that involve uncertainty.

REFERENCES
Bede, B. (2013). Mathematics of Fuzzy Sets and Fuzzy Logic. Springer, London.
Mamdani, E. H. (1974). Applications of fuzzy algorithms for control of a simple dynamic plant. Proceedings of the Institution of Electrical Engineers, 121, 12, 1585–1588.
Tanaka, K. (1991). An Introduction to Fuzzy Logic for Practical Applications. Springer, Japan.
Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8, 338–353.
Zadeh, L. A. (1973). Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, 28–44.

6

Network Tools

6.1  BASIC CONCEPTS The network tool is a very essential tool in different types of decision-making, especially in decision-making related to a construction project or software project. Stages of any project can be represented by different types of network diagrams. Program evaluation and review technique (PERT) and critical path method (CPM) are the mostly discussed networking tools to optimize these network diagrams as evident from various books and various research articles. This section will present these two tools through a sample network diagram, as shown in Figure 6.1, based on the activity–predecessor relationship, as shown in Table 6.1. As observed from Table 6.1, activities A, B, C do not have any immediate predecessor, and that is the reason these three activities start from a single event 1. It must be noted that events are represented by circles, and activities, representing the tasks or modules of a project, are represented by arrows. The immediate predecessor represents the previous activity that is to be performed before performing the current activity. For example, the immediate predecessor of activity D is activity A and that for activity H is D and E, as shown in Figure 6.1 and Table 6.1. One or more activities performed after an activity is called the successor activities. For example, the successor activities for activity B are activities E and F. Similarly, successor activity for activity A is activity D. The name and duration of each activity are indicated as the labels of the activities. For example, activity 1–2 is labeled A(2), indicating that the name of activity 1–2 is A and the respective duration is two days. Similarly, the other activities are labeled in the same way. Each of the events is numbered following Fulkerson’s rule, which states that the numbering of any subsequent event will be higher than that of the previous event. For example, the previous event for event 5 is event 2 (5 > 2). A network diagram always starts with a single event and ends with a single event. The duration of the entire project represented by this network diagram can be determined by adding the duration of the activities for each path. For example, the duration of the path A – D – I – K is 2 + 4 + 6 + 8 = 20 days. The highest value of all the path lengths will be the project duration since before that time, all the activities of the network cannot be completed. This highest length path is known as the critical path, which is indicated by a double arrow, as shown in Figure 6.2. However, such method of calculating the duration of a project will be very tedious work for a large network. Therefore, the efficient way to find the activity duration and the highest length path is to find the earliest and latest times for completion of each event, as shown in Figure 6.2, and from there, calculate the project duration.

FIGURE 6.1  Network diagram. DOI: 10.1201/9781003307655-6


TABLE 6.1 Activity–predecessor relationship for Figure 6.1

Activity   Immediate predecessor   Duration (days)
A          –                       2
B          –                       1
C          –                       3
D          A                       4
E          B                       2
F          B                       9
G          C                       10
H          D, E                    3
I          D, E                    6
J          F, H                    7
K          I                       8

FIGURE 6.2  Earliest and latest event times.

In Figure 6.2, each event has been divided into two halves. The upper half contains the event number, and the lower half is further divided into two halves. The value in the lower left half shows the earliest event time, whereas the value in the lower right half shows the latest time of the event. Earliest event times are calculated by the forward pass method, that is, starting from event 1 and finishing at the last event. The latest event times are calculated by the backward pass method, that is, starting from the last event, event number 8 in Figure 6.2, and proceeding backward up to the first event, event 1, of the network. The first event (event 1) starts at relative time “0,” which can represent any starting time, say, 10 AM of 1st June. The duration of activity A is two days. Thus, the earliest time of event 2 is the earliest time of the previous event 1 + duration of activity A = 0 + 2 = 2 days. Similarly, the earliest time of event 4 is the earliest time of the previous event 1 + duration of activity C = 0 + 3 = 3 days. For an event with more than one preceding activity, the highest of the alternative aggregate times is taken as the earliest time of that event. For example, the earliest time of event 5 is maximum (earliest time of event 2 + duration of activity D, earliest time of event 3 + duration of activity E) = maximum (2 + 4, 1 + 2) = 6 days. The earliest times of the other events are calculated in a similar way. The backward pass method for calculating the latest event times starts with event 8, the last event. For the first and the last event, the earliest and the latest event times are the same. Thus, the latest time of event 8 = earliest time of event 8 = 20 days. The latest time of event 7 = latest time of event 8 − duration of activity K = 20 − 8 = 12 days. For the backward pass method, if more than one activity starts from an event, then the latest time of that


TABLE 6.2 Calculation of ES, EF, LS, LF, TF, FF, and IF

Activity   Immediate predecessor   Duration   ES   EF   LS   LF   TF   FF   IF
A          –                       2          0    2    0    2    0    0    0
B          –                       1          0    1    3    4    3    0    0
C          –                       3          0    3    7    10   7    0    0
D          A                       4          2    6    2    6    0    0    0
E          B                       2          1    3    4    6    3    3    0
F          B                       9          1    10   4    13   3    0    −3
G          C                       10         3    13   10   20   7    7    0
H          D, E                    3          6    9    10   13   4    1    1
I          D, E                    6          6    12   6    12   0    0    0
J          F, H                    7          10   17   13   20   3    3    0
K          I                       8          12   20   12   20   0    0    0

event will be the minimum value among the alternative calculated lengths. For example, the latest time of event 3 = minimum (latest time of event 5 − duration of activity E, latest time of event 6 − duration of activity F) = minimum (6 − 2, 13 − 9) = 4 days. The other latest times for the other events are calculated in a similar fashion. The path in which all the events have the same earliest and latest event times is the critical path. In Figure 6.2, equal values for earliest and latest event times are observed for events 1, 2, 5, 7, and 8. Thus, the critical path is 1–2–5–7–8 or A – D – I – K, as indicated by a double arrow in Figure 6.2. Next, based on Figure 6.2, the following times are calculated for the activities: earliest start (ES) time, earliest finish (EF) time, latest start (LS) time, and latest finish (LF) time, along with some slack or float times such as total float (TF), free float (FF), and independent float (IF), as shown in Table 6.2. Table 6.2 is calculated on the same network diagram as in Figure 6.1 and Figure 6.2. The earliest start time, earliest finish time, latest start time, and latest finish time of an activity indicate how early an activity can start, how early an activity can finish, how late an activity can start, and how late an activity can be finished, respectively. For example, activities A, B, and C start at relative time 0. Activity D can start only after activity A ends, that is, after two days (the duration of activity A). Similarly, both activities E and F can start only after activity B ends, that is, on day 1 (the duration of activity B). Activities I and H can start only after activities D and E end. This time can be calculated as maximum (earliest time of event 2 + duration of activity D, earliest time of event 3 + duration of activity E) = maximum (2 + 4, 1 + 2) = 6. The calculation here is the same as that for the earliest event time, that is, taking the maximum of the alternative path values. The other ES values are calculated in a similar fashion. The values of EF are calculated by adding the durations to the ES values. For example, for activity D, EF = ES + duration of D = 2 + 4 = 6. The values of LF are calculated first, using the backward pass method, that is, starting from the last activity, K. The LF value of activity K = EF value of activity K = 20 days, which is also the LF value for activity G and activity J. The LF value for activity H and activity F = latest time of event 8 − duration of activity J = 20 − 7 = 13 days. If more than one activity starts from an event, then the latest finish time of the predecessor activity is taken as the minimum of the alternative path values. After calculating the activity times, it is required to calculate the float times of the activities. Float or slack times indicate the delays that can happen for each activity. A total of three floats have been calculated for the above example – total float (TF) for finding the total delay allowed for each activity, free float (FF) for finding the delay arising because of the head event, and independent float (IF), which indicates the delay on the part of the tail event of each activity. Thus, TF, FF, and IF are calculated by expressions (6.1–6.3), respectively.


TF = latest start time − earliest start time = latest finish time − earliest finish time = LS − ES = LF − EF    (6.1)

FF = TF − head event slack    (6.2)

IF = FF − tail event slack    (6.3)
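As an illustration only (not the book's event-based tabulation), the sketch below computes ES, EF, LS, LF, and the total float directly from the activity list of Table 6.1, reproducing the figures of Table 6.2 and the critical path A – D – I – K.

```python
# Activity-on-arrow data of Table 6.1; dictionaries preserve a predecessor-respecting order.
durations = {"A": 2, "B": 1, "C": 3, "D": 4, "E": 2, "F": 9,
             "G": 10, "H": 3, "I": 6, "J": 7, "K": 8}
preds = {"A": [], "B": [], "C": [], "D": ["A"], "E": ["B"], "F": ["B"],
         "G": ["C"], "H": ["D", "E"], "I": ["D", "E"], "J": ["F", "H"], "K": ["I"]}

# Forward pass: earliest start/finish times.
ES, EF = {}, {}
for a in durations:
    ES[a] = max((EF[p] for p in preds[a]), default=0)
    EF[a] = ES[a] + durations[a]
project_duration = max(EF.values())      # 20 days

# Backward pass: latest finish/start times.
succs = {a: [b for b in durations if a in preds[b]] for a in durations}
LF, LS = {}, {}
for a in reversed(list(durations)):
    LF[a] = min((LS[s] for s in succs[a]), default=project_duration)
    LS[a] = LF[a] - durations[a]

critical = [a for a in durations if LS[a] - ES[a] == 0]   # activities with zero total float
print(project_duration, critical)        # 20 ['A', 'D', 'I', 'K']
```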

For example, for activity B, TF = 3 − 0 = 4 − 1 = 3 days. The head and tail events of activity B are event 3 and event 1, respectively. The event slack is the absolute difference between the earliest and latest times of the event. Therefore, the slack of event 3 = absolute value of (1 − 4) = 3, and the slack of event 1 = absolute value of (0 − 0) = 0. The FF value for activity B = TF for B − head event slack = 3 − 3 = 0, and the IF value for activity B = FF value for B − tail event slack = 0 − 0 = 0. Similarly, the other values are calculated for the other activities of the network. After the aforementioned calculations, the next issue is to verify whether the project can be completed within less than or more than the current duration; in other words, the probabilities of completing the project within some other time span need to be examined so as to understand the dependability of the above calculation, or to check the robustness of the calculations. For that purpose, based on the same Figure 6.1, consider the values shown in Table 6.3. Table 6.3 shows three different times, namely, optimistic time (to), most likely time (tm), and pessimistic time (tp). Optimistic, most likely, and pessimistic times are a representation of the uncertainty in the duration of the activities. With these three times for each activity, the expected time or duration (te) can be calculated by expression (6.4). Variances (σe²) for the activities can also be calculated by expression (6.5) and are shown in Table 6.3. The network diagram representing the example in Table 6.3 is shown in Figure 6.3. The critical path for Figure 6.3 is B – E – I – K or 1–3–5–7–8, and the project duration is 24 days, as indicated by the final event, event 8. The expected duration of the critical path is the project duration, which is 24 days. The total variance of the critical path is shown in expression (6.6). These variances and expected durations for the critical path are required for calculating the probabilities of completing the project within specific periods of time other than the project duration, in order to check the flexibility of the project completion time. The probability of completing the entire project can be represented by the normal distribution; the total probability, represented by the total area under the normal distribution curve, is 1. In order to calculate the probability,

TABLE 6.3 Optimistic, most likely, and pessimistic times

Activity   Optimistic time, to   Most likely time, tm   Pessimistic time, tp   Expected time, te   Variance, σe²
A          1                     2                      3                      2                   1/9
B          3                     4                      5                      4                   1/9
C          2.5                   3                      3.5                    3                   1/36
D          2                     4                      6                      4                   4/9
E          4                     5                      6                      5                   1/9
F          3.2                   4                      4.8                    4                   64/900
G          4                     6                      8                      6                   4/9
H          5                     6                      13                     7                   16/9
I          3                     7                      17                     8                   49/9
J          4.4                   6                      7.6                    6                   256/900
K          2                     8                      8                      7                   1


FIGURE 6.3  Network diagram based on Table 6.3.

FIGURE 6.4  Standard normal distribution curve, showing the areas between ±σ, ±2σ, and ±3σ.

generally, the standard normal distribution with mean 0 is used, as shown in Figure 6.4. Figure 6.4 shows that the area of the curve between ±σ is about 68.27%, the area between ±2σ is about 95%, and the area between ±3σ is 99.73%. The standard normal distribution or Z-distribution is given by expression (6.7). Thus, the horizontal axis is actually the Z-axis.

te = (to + 4tm + tp) / 6    (6.4)

σe² = ((tp − to) / 6)²    (6.5)

Σσe² = 1/9 + 1/9 + 49/9 + 1 = 20/3 = 6.67    (6.6)

f(z) = (1 / √(2π)) e^(−z²/2)    (6.7)


Consider an example of calculating probability values. The duration of the project for Figure 6.3 is 24 days, that is, Te = 24 days. Consider finding the probability of completing the project within 20 days, that is, Ts = 20 days.

Probability(Z < (Ts − Te)/σe) = Probability(Z < (20 − 24)/6.67) = Probability(Z < −0.599)    (6.8)

From the standard normal distribution table, Probability(Z < 0.599) = 0.7224 is obtained, which can be represented by Figure 6.5. But the required probability is Probability(Z < −0.599), indicated by the shaded region shown in Figure 6.6. This region is identical to the un-shaded region on the right side of the point Z = 0.599. Since the normal distribution is symmetrical about Z = 0, the area on the left side of Z = 0 is the same as that on the right side. Thus, in Figure 6.5, the area between Z = 0 and Z = 0.599 is 0.7224 − 0.5 = 0.2224, as shown in Figure 6.7. Now, if this area is subtracted from the total area of 0.5 on the right side of Z = 0, the required area comes out to be 0.5 − 0.2224 = 0.2776, as indicated in Figure 6.8. Next, consider finding the probability of completing the project within 28 days, that is, Ts = 28 days. The required probability is as follows:

Probability(Z < (Ts − Te)/σe) = Probability(Z < (28 − 24)/6.67) = Probability(Z < 0.599)    (6.9)

Now, Probability(Z < 0.599) = 0.7224, as obtained before from the standard normal distribution table, so the probability of completing the project within 28 days is 0.7224, or about 72%.
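As an illustration only, the sketch below reproduces the PERT calculations above: expected times from expression (6.4), variances from expression (6.5), and the completion probabilities of expressions (6.8) and (6.9), using the value 6.67 from expression (6.6) in the denominator exactly as written in the text (some treatments use the square root of the summed variances instead).

```python
# PERT completion-probability sketch for the critical path B, E, I, K of Figure 6.3.
import math

def expected_time(to, tm, tp):
    return (to + 4 * tm + tp) / 6.0          # expression (6.4)

def variance(to, tp):
    return ((tp - to) / 6.0) ** 2            # expression (6.5)

critical = [(3, 4, 5), (4, 5, 6), (3, 7, 17), (2, 8, 8)]   # (to, tm, tp) for B, E, I, K
Te = sum(expected_time(*a) for a in critical)              # 24 days
sigma_e = sum(variance(a[0], a[2]) for a in critical)      # 20/3 = 6.67, as in (6.6)

def prob_within(Ts):
    z = (Ts - Te) / sigma_e                  # as in expressions (6.8) and (6.9)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF

print(prob_within(20), prob_within(28))      # roughly 0.27 and 0.73 (table lookup: 0.2776, 0.7224)
```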

If the quadratic form xᵀAx > 0 for every nonzero vector x, then the matrix A is positive definite. The Hessian matrix being positive definite is the condition for a minimum. We now write the sum of squared errors in matrix form:

SSE = εᵀε = (Y − Xβ)ᵀ(Y − Xβ)



∂(SSE)/∂β = ∂[(Y − Xβ)ᵀ(Y − Xβ)]/∂β = −2Xᵀ(Y − Xβ) = 0



⇒ −XᵀY + XᵀXβ = 0



⇒ X T Xβ = X T Y



⇒ ( X T X )−1 ( X T X )β = ( X T X )−1 X T Y



⇒ I .β = ( X T X )−1 X T Y



⇒ β = ( X T X )−1 X T Y (13.21)

β = (XᵀX)⁻¹XᵀY is the formula for estimating the regression coefficients. An example along with the method can clarify the concept. Consider the example shown in Table 13.3, which gives 12 periods' data on profit and sales, the dependent variables.


TABLE 13.3 Sample Data for Multiple Regression

Serial   Profit in Million Currency   Sales Volume in 100   Cash Flow (in 000 Currency Units)   Machine Breakdown in Hours   Man-Hour per Unit
1        61                           706                   78                                  6                            6
2        146                          1870                  352                                 5                            18
3        485                          4848                  788                                 4                            144
4        14                           367                   25                                  6                            23
5        346                          6131                  682                                 3                            50
6        72                           1754                  120                                 7                            10
7        100                          1679                  165                                 5                            21
8        26                           1295                  137                                 6                            19
9        24                           271                   29                                  9                            3
10       60                           1038                  92                                  1                            10
11       26                           550                   38                                  5                            5
12       74                           701                   136                                 6                            6

There are three independent variables, viz. cash flow, machine breakdown in hours, and man-hours required per unit of the end product:

x1: Cash flow
x2: Breakdown hours
x3: Man-hours per unit

For ease of explanation, a small subset of the above example is taken: sales volume is the dependent variable Y, only the first five observations are used, and only one independent variable (cash flow) is considered. In matrix form, the sample data are

Y = [706, 1870, 4848, 367, 6131]ᵀ (5 × 1)

X = [[1, 78], [1, 352], [1, 788], [1, 25], [1, 682]] (5 × 2)

The regression coefficients β = (XᵀX)⁻¹XᵀY need to be calculated.

Step 1: Calculate XᵀX.

XᵀX = [[1, 1, 1, 1, 1], [78, 352, 788, 25, 682]] × [[1, 78], [1, 352], [1, 788], [1, 25], [1, 682]] = [[5, 1925], [1925, 1216681]]

Step 2: Calculate (XᵀX)⁻¹ = Adj(XᵀX) / |XᵀX|.

|XᵀX| = 5 × 1216681 − 1925 × 1925 = 2377780

Calculate the adjoint of the above matrix as shown below.

Adj(XᵀX) = [[1216681, −1925], [−1925, 5]]ᵀ = [[1216681, −1925], [−1925, 5]]

Thus, the required inverse of the matrix is

(XᵀX)⁻¹ = [[0.5117, −0.00081], [−0.00081, 0.0000021]]

Step 3: Calculate XᵀY.

XᵀY = [[1, 1, 1, 1, 1], [78, 352, 788, 25, 682]] × [706, 1870, 4848, 367, 6131]ᵀ = [13922, 8724049]ᵀ

Thus, the regression coefficients are

β = (XᵀX)⁻¹XᵀY = [[0.5117, −0.00081], [−0.00081, 0.0000021]] × [13922, 8724049]ᵀ = [57.408, 7.044]ᵀ


Thus, the regression equation is

Y = β0 + β1x1 + ε

⇒ Y = 57.408 + 7.044x1 + ε    (13.22)

Based on the above example, the required equation reduces to the following:

Y = 57.408 + 7.044x1    (13.23)

Thus, the error vector is given by

ε = Y − Ŷ

Applying expression (13.23) in order to calculate Ŷ, the following is obtained:

ε = Y − Ŷ = [706, 1870, 4848, 367, 6131]ᵀ − [57.408 + 7.044 × 78, 57.408 + 7.044 × 352, 57.408 + 7.044 × 788, 57.408 + 7.044 × 25, 57.408 + 7.044 × 682]ᵀ

  = [706, 1870, 4848, 367, 6131]ᵀ − [606.840, 2536.896, 5608.080, 233.508, 4861.416]ᵀ

  = [99.160, −666.896, −760.080, 133.492, 1269.584]ᵀ

      

      

When p + 1 = 2, the regression is simple regression; when p + 1 ≥ 3, the regression is known as multiple regression. If the data exhibit a certain trend, then regression can be applied, and the regression may be linear or non-linear. The mathematical representation of the data for each type of trend is shown below.

Linear trend: Yt = a + bt (13.24)




Quadratic trend: Yt = a + bt + ct 2 (13.25)



Polynomial trend: Yt = a + bt + ct 2 + dt 3 + ... (13.26)



Logarithmic trend: Yt = a ln t + b (13.27)



Exponential trend: Yt = ae bt (13.28)



Power trend: Yt = at b (13.29)

The methods of solving some of the above-mentioned trends are shown next.

13.3.4  Quadratic Trend
Various trend models can be fitted to a set of given data by using the least square method, as shown in the previous section. Therefore, the quadratic trend model can also be fitted by using the least square method as shown below. Consider the quadratic equation given in expression (13.30).

Yt = a + bt + ct²    (13.30)



where a, b, and c are the constants to be found out by the method of least square. Based on expression (13.8), the error will be e = Yt − a − bt − ct 2. Thus, the following expression will have to be minimized:

Σt et² = Σt (Yt − a − bt − ct²)²    (13.31)

Differentiating the expression (13.31) with respect to the constants a–c, and equating the resultant expressions to zero, we obtain the following expressions:

Differentiating expression (13.31) with respect to a, we get −2 Σt (Yt − a − bt − ct²) = 0.
Differentiating expression (13.31) with respect to b, we get −2 Σt t (Yt − a − bt − ct²) = 0.
Differentiating expression (13.31) with respect to c, we get −2 Σt t² (Yt − a − bt − ct²) = 0.

From −2 Σt (Yt − a − bt − ct²) = 0, we get ΣY = an + b Σt + c Σt²    (13.32)

TABLE 13.4 Sample Data for Showing Quadratic Trend

Month    1     2     3     4     5     6
Demand   100   108   130   143   184   195


TABLE 13.5 Calculations for Quadratic Trend Model

Month, t   Demand, Y   Yt     t²    Yt²      t³    t⁴
1          100         100    1     100      1     1
2          108         216    4     432      8     16
3          130         390    9     1170     27    81
4          143         572    16    2288     64    256
5          184         920    25    4600     125   625
6          195         1170   36    7020     216   1296
Total 21   860         3368   91    15,610   441   2275

From −2 Σt t (Yt − a − bt − ct²) = 0, we get ΣYt = a Σt + b Σt² + c Σt³    (13.33)

From −2 Σt t² (Yt − a − bt − ct²) = 0, we get ΣYt² = a Σt² + b Σt³ + c Σt⁴    (13.34)

Table 13.5 calculates ΣY, Σt, ΣYt, Σt², ΣYt², Σt³, and Σt⁴ for the demand data provided in Table 13.4. Substituting the values from Table 13.5 in expressions (13.32–13.34), we obtain the following equations:

6a + 21b + 91c = 860 (13.35)



21a + 91b + 441c = 3368 (13.36)



91a + 441b + 2275c = 15610 (13.37)

Solving the expressions (13.35–13.37), we get the values of the constants as a = 86.9, b = 9.082, and c = 1.625, and thus, the quadratic expression becomes

Yt = 86.9 + 9.082t + 1.625t²    (13.38)

Now if we want to find the forecast for period 7 or period 8, then we will get the values by substituting t = 7 and t = 8 in expression (13.38) as shown below.

Y7 = 86.9 + 9.082 × 7 + 1.625 × 7² = 230.099    (13.39)

Y8 = 86.9 + 9.082 × 8 + 1.625 × 8² = 263.556    (13.40)

Polynomial trend model can also be fitted in similar fashion using least square method.
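As an illustration only (not from the book), the sketch below fits the same quadratic trend to the data of Table 13.4 by solving the normal equations (13.32)–(13.34) numerically.

```python
# Quadratic trend fit via the normal equations.
import numpy as np

t = np.arange(1, 7)                          # months 1..6
y = np.array([100, 108, 130, 143, 184, 195])

# Normal equations in matrix form: A [a, b, c]' = rhs
A = np.array([[len(t), t.sum(),      (t**2).sum()],
              [t.sum(), (t**2).sum(), (t**3).sum()],
              [(t**2).sum(), (t**3).sum(), (t**4).sum()]])
rhs = np.array([y.sum(), (y * t).sum(), (y * t**2).sum()])

a, b, c = np.linalg.solve(A, rhs)            # approximately 86.9, 9.08, 1.63
forecast_7 = a + b * 7 + c * 7**2            # about 230.1
print(a, b, c, forecast_7)
```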


13.3.4.1  Logarithmic Trend
Consider the logarithmic equation given in expression (13.41).

Yt = a ln t + b    (13.41)



where a, b are the constants to be found out by the method of least square. Based on expression (13.41), the error will be et = Yt − a ln t − b . Thus, the following expression will have to be minimized.

Σt et² = Σt (Yt − a ln t − b)²    (13.42)

Differentiating expression (13.42) with respect to the constants a and b and equating the resultant expressions to zero, we obtain the following expressions.

Differentiating expression (13.42) with respect to a, we get −2 Σt ln t (Yt − a ln t − b) = 0.
Differentiating expression (13.42) with respect to b, we get −2 Σt (Yt − a ln t − b) = 0.

From −2 Σ ln t (Yt − a ln t − b) = 0, we get ΣY ln t = a Σ(ln t)² + b Σ ln t    (13.43)

From −2 Σ (Yt − a ln t − b) = 0, we get ΣY = a Σ ln t + bn    (13.44)

Table 13.6 calculates ΣY, ΣY ln t, Σ(ln t)², and Σ ln t for the demand data provided in Table 13.4.

Substituting the values from Table 13.6 in expressions (13.43 and 13.44), we obtain the following expressions:

9.409a + 6.579b = 1061.408 (13.45)



6.579a + 6b = 860 (13.46)

TABLE 13.6 Calculations for Logarithmic Trend Model

Month, t   Demand, Y   ln t    Y ln t     (ln t)²
1          100         0       0          0
2          108         0.693   74.844     0.48
3          130         1.099   142.87     1.208
4          143         1.386   198.198    1.921
5          184         1.609   296.056    2.589
6          195         1.792   349.44     3.211
Total      860         6.579   1061.408   9.409


Solving expressions (13.45 and 13.46), we get the values of the constants as a = 53.95 and b = 84.18, and thus the logarithmic expression becomes

Yt = 53.95 ln t + 84.18    (13.47)



Now, if we want to find the forecast for period 7, then we will get the value by substituting t = 7 in expression (13.47) as shown below.

Y7 = 53.95 ln 7 + 84.18 = 189.17    (13.48)
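As an illustration only, the sketch below fits the logarithmic trend Yt = a ln t + b to the same demand data; np.linalg.lstsq solves the least squares problem that the normal equations (13.43)–(13.44) describe.

```python
# Logarithmic trend fit by ordinary least squares.
import numpy as np

t = np.arange(1, 7)
y = np.array([100, 108, 130, 143, 184, 195])

# Design matrix with columns [ln t, 1].
A = np.column_stack([np.log(t), np.ones_like(t, dtype=float)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a, b)                    # approximately 53.95 and 84.18
print(a * np.log(7) + b)       # forecast for period 7, about 189
```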



The other types of nonlinear trend can also be fitted in a similar way using the least square method.

13.3.4.1.1  Link Analysis
Link analysis is a part of descriptive analytics. The basic target of descriptive analytics is to find the patterns of customer behavior. Unlike predictive analytics, there is no target variable in descriptive analytics; that is why such analytics is also known as unsupervised learning, since there is no target variable to drive the learning process. The three most common types of descriptive analytics are given as follows:

1. Association rules: To detect frequently occurring patterns between items. Example – detecting which products are frequently purchased together in a supermarket context; detecting which words frequently co-occur in a text document; and detecting which elective courses are frequently chosen together in a university context.
2. Sequence rules: To detect sequences of events. Example – detecting sequences of purchasing behavior in a supermarket context; detecting sequences of web page visits in a web mining context; and detecting sequences of words in a text document.
3. Clustering: To detect homogeneous segments of observations. Example – differentiating between brands in a marketing portfolio; segmenting the customer population for targeted marketing.

13.3.4.2  Association Rules
Steps:
• Defining the basic concept
• Deciding over the support and confidence
• Association rule mining process
• Lift measure

13.3.4.3  Basic Concept
Association rule mining generally starts with a database of transactions, D. Each transaction is identified by a transaction identifier and a set of items (i1, i2, ..., in) – which can be, for example, products, web pages, or courses – chosen from the set of items I. Consider the item sets in Table 13.7. An association rule is an implication of the form X ⇒ Y (meaning, X implies Y), where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅. X is known as the rule antecedent, and Y is known as the rule consequent. For example: if a customer purchases Chow Mein, then the customer purchases tomato sauce; if a customer purchases chocolate, then the customer purchases cold drinks; if a student chooses operations research, then the student chooses advanced operations research. It must be noted that association rules are stochastic in nature, and thus they must not be taken as universal facts or truths. Association rules are characterized by statistical measures quantifying the strength of the association. Also, the rules measure correlational associations and should not be interpreted in a causal way.


TABLE 13.7 Example of Transaction Data Set

Transaction Identifier   Items
1                        Chow Mein, tomato sauce, biscuit, chocolate
2                        Maggi, chocolate, ice cream bar, cold drinks
3                        Cold drinks, cheese, butter, chocolate
4                        Pulse, chocolate, maggi, sugar free, glucose
5                        Chocolate, butter, cold drinks, maggi, cheese, bread
6                        Cold drinks, butter, maggi, cream biscuit, chocolate
7                        Rice, pulse, salt, mustard oil, sunflower oil
8                        Shampoo, soap, dishwasher bar, scotch bite, phenyl
9                        Chow Mein, tomato sauce, pepper powder, salt, biscuit
10                       Maggi, tomato sauce, chocolate, cream biscuit, cold drinks

13.3.4.4  Support and Confidence
Support and confidence are used to measure the strength of an association rule. The support of an item set is defined as the percentage of total transactions in the database that contain the item set. Hence, the rule X ⇒ Y has support (s) if 100s% of the transactions in D contain X ∪ Y. Mathematically, we have the following expression for support:

Support(X ∪ Y) = (number of transactions supporting X ∪ Y) / (total number of transactions)    (13.49)

For example, based on Table 13.7, the support for chocolate and cold drinks ⇒ maggi is 4/10 = 0.4 → 40%. A frequent item set is one for which the support is higher than a threshold (minsup) that is typically specified upfront by the business user or data analyst. A lower (higher) support will obviously generate more (fewer) frequent item sets. The confidence measures the strength of the association and is defined as the conditional probability of the rule consequent, given the rule antecedent. The rule X ⇒ Y has confidence (c) if 100c% of the transactions in D that contain X also contain Y. It can be formally defined as given below. Again, the data analyst has to specify a minimum confidence (minconf) in order for an association rule to be considered interesting.



confidence(X ⇒ Y) = P(Y | X) = support(X ∪ Y) / support(X)    (13.50)

For example, based on Table 13.7, the confidence for chocolate and cold drinks ⇒ maggi is 4/5 = 0.8 → 80%.

13.3.4.5  Association Rule Mining
Mining association rules from data is essentially a two-step process, as follows:

1. Identification of all item sets having support above minsup (i.e., “frequent” item sets). 2. Discovery of all derived association rules having confidence above minconf.

As said before, both minsup and minconf need to be specified beforehand by the data analyst. The first step is typically performed using the Apriori algorithm. The basic notion of Apriori states that


every subset of a frequent item set is frequent as well or, conversely, every superset of an infrequent item set is infrequent. This implies that candidate item sets with k items can be found by pairwise joining frequent item sets with k − 1 items and deleting those sets that have infrequent subsets. The number of candidate subsets to be evaluated can thereby be decreased, which substantially improves the performance of the algorithm, because fewer database passes will be required. The Apriori algorithm is illustrated below. Once the frequent item sets have been found, the association rules can be generated in a straightforward way, as follows:

• For each frequent item set k, generate all nonempty subsets of k.
• For every nonempty subset s of k, output the rule s ⇒ k − s if its confidence > minconf.

Note that the confidence can be easily computed using the support values that were obtained during the frequent item set mining. For example, for the frequent item set {chocolate, maggi, cold drinks}, the following association rules can be derived:

Chocolate, maggi ⇒ cold drinks [conf = 4/5 = 0.8 → 80%]
Chocolate, cold drinks ⇒ maggi [conf = 4/5 = 0.8 → 80%]
Maggi, cold drinks ⇒ chocolate [conf = 4/4 = 1 → 100%]
Chocolate ⇒ maggi, cold drinks [conf = 4/7 = 0.57 → 57%]
Cold drinks ⇒ maggi, chocolate [conf = 4/5 = 0.8 → 80%]
Maggi ⇒ chocolate, cold drinks [conf = 4/5 = 0.8 → 80%]

If minconf is set to 80%, all except the fourth association rule will be kept for further analysis.

13.3.4.6  Lift Measure
Sometimes, measuring only the confidence does not reveal the true picture. In that case, the real picture is reflected by the lift measure or interestingness measure. The lift measure considers the prior probability of the rule consequent, as shown below.

Lift(X → Y) = support(X ∪ Y) / (support(X) × support(Y))    (13.51)

A lift value less (larger) than 1 indicates a negative (positive) dependence, that is, a substitution (complementary) effect. For example, for the frequent set Maggi ⇒ chocolate, cold drinks, support(X) = 5/10 = 0.5, support(Y) = 5/10 = 0.5, and support(X ∪ Y) = 4/10 = 0.4. Thus,

Lift(Maggi → chocolate, cold drinks) = 0.4 / (0.5 × 0.5) = 1.6

Such a value clearly indicates the complementary effect between maggi and the combination of chocolate and cold drinks.
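As an illustration only (not from the book), the sketch below computes support, confidence, and lift, as defined in expressions (13.49)–(13.51), directly over the ten transactions of Table 13.7; item names are written in lowercase for matching.

```python
# Support, confidence, and lift over the transactions of Table 13.7.
transactions = [
    {"chow mein", "tomato sauce", "biscuit", "chocolate"},
    {"maggi", "chocolate", "ice cream bar", "cold drinks"},
    {"cold drinks", "cheese", "butter", "chocolate"},
    {"pulse", "chocolate", "maggi", "sugar free", "glucose"},
    {"chocolate", "butter", "cold drinks", "maggi", "cheese", "bread"},
    {"cold drinks", "butter", "maggi", "cream biscuit", "chocolate"},
    {"rice", "pulse", "salt", "mustard oil", "sunflower oil"},
    {"shampoo", "soap", "dishwasher bar", "scotch bite", "phenyl"},
    {"chow mein", "tomato sauce", "pepper powder", "salt", "biscuit"},
    {"maggi", "tomato sauce", "chocolate", "cream biscuit", "cold drinks"},
]

def support(items):
    items = set(items)
    return sum(items <= t for t in transactions) / len(transactions)   # (13.49)

def confidence(antecedent, consequent):
    return support(set(antecedent) | set(consequent)) / support(antecedent)   # (13.50)

def lift(antecedent, consequent):
    return support(set(antecedent) | set(consequent)) / (support(antecedent) * support(consequent))   # (13.51)

X, Y = {"chocolate", "cold drinks"}, {"maggi"}
print(support(X | Y), confidence(X, Y), lift(X, Y))   # 0.4, 0.8, 1.6
```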

13.3.4.7  Sequence Rules
Given a database D of customer transactions, the problem of mining sequential rules is to find the maximal sequences among all sequences that have a certain user-specified minimum support and confidence. An example could be a sequence of web page visits in a web analytics setting, as follows:

Home page → Electronics → Cameras and Camcorders → Digital cameras → Shopping cart → Order Confirmation → Return to shopping

Although association rules are concerned with what items appear together at the same time (intra-transaction patterns), sequence rules are concerned with what items appear at different times (inter-transaction patterns). To mine the sequence rules, one can again make use of the Apriori property, because if a sequential pattern of length k is infrequent, its supersets of length k + 1 cannot be frequent. Consider the following example (see Table 13.8) of a transaction data set in a web analytics setting. The letters A, B, C, … refer to web pages.

TABLE 13.8 Example of Transaction Data

Session ID   Page   Sequence
1            A      1
1            B      2
1            C      3
2            B      1
2            C      2
3            A      1
3            C      2
3            D      3
4            A      1
4            B      2
4            D      3
5            D      1
5            C      2
5            A      3

The session-wise version can be given as follows:

Session 1: A, B, C
Session 2: B, C
Session 3: A, C, D
Session 4: A, B, D
Session 5: D, C, A

One can now calculate the support in two different ways. Consider, for example, the sequence rule A → C. A first approach would be to calculate the support whereby the consequent can appear in any subsequent stage of the sequence; in this case, the support becomes 2/5 → 40%. Another approach would be to consider only sessions in which the consequent appears right after the antecedent; in this case, the support becomes 1/5 → 20%. The confidence is correspondingly 2/4 → 50% or 1/4 → 25%.

13.3.4.8  Segmentation
The aim of segmentation is to split up a set of customer observations into segments such that the homogeneity within a segment is maximized (cohesive) and the heterogeneity between segments is maximized (separated). Some examples are given as follows:

• Efficiently allocating marketing resources.
• Identifying the most profitable customers.
• Identifying the need for a new product.
• Differentiating between brands in a portfolio.

Various types of clustering data can be used, such as demographic, lifestyle, attitudinal, behavioral, acquisitional, social network, and so on. The various clustering techniques are summarized in Figure 13.9.

13.3.4.7.1  Hierarchical Clustering
Divisive hierarchical clustering starts from the whole data set in one cluster and then breaks this up into ever smaller clusters until one observation per cluster remains (Figure 13.10). Agglomerative clustering works the other way around, starting from each observation as its own cluster and continuing to merge the ones that are most similar until all observations make up one big cluster (Figure 13.11).


FIGURE 13.9  Classification of clustering techniques.

FIGURE 13.10  Divisive hierarchical clustering (read from right to left).

FIGURE 13.11  Agglomerative hierarchical clustering (read from left to right).

In order to decide on the merger or splitting, a similarity rule is needed. Examples of popular similarity rules are the Euclidean distance and the Manhattan (city block) distance. The Euclidean distance is never longer than the Manhattan distance. For example, consider Figure 13.12. The Euclidean distance between points A and B is

√((25 − 10)² + (50 − 30)²) = 25

whereas the Manhattan distance between A and B is

|25 − 10| + |50 − 30| = 35
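As a quick, illustrative check (the coordinates A = (25, 50) and B = (10, 30) are read from the example of Figure 13.12):

```python
# Euclidean vs. Manhattan distance between the two points used above.
import math

ax, ay, bx, by = 25, 50, 10, 30
euclidean = math.hypot(ax - bx, ay - by)          # sqrt(15^2 + 20^2) = 25.0
manhattan = abs(ax - bx) + abs(ay - by)           # 15 + 20 = 35
print(euclidean, manhattan)
```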

Various schemes can now be adopted to calculate the distance between two clusters as shown in Figure 13.13.


FIGURE 13.12  Coordinates of points in 2D space.

FIGURE 13.13  Calculation of distances between clusters.

The single linkage method defines the distance between two clusters as the shortest possible distance, or the distance between the two most similar objects. The complete linkage method defines the distance between two clusters as the biggest distance, or the distance between the two most dissimilar objects. The average linkage method calculates the average of all possible distances. The centroid method calculates the distance between the centroids of both clusters. Finally, Ward’s method merges the pair of clusters that leads to the minimum increase in total within cluster variance after merging. In order to decide on the optimal number of clusters, one could use a dendrogram or a scree plot. A dendrogram is a tree-like diagram that records the sequences of merges. The vertical (or horizontal scale) then gives the distance between two clusters amalgamated. One can then cut the dendrogram at the desired level to find the optimal clustering. This is illustrated in Figures 13.14 and 13.15 for a birds’ clustering example. A scree plot (see Figure 13.16) is a plot of the distance at which clusters are merged. The elbow point then indicates the optimal clustering.


FIGURE 13.14  Example of clustering birds.

FIGURE 13.15  Dendrogram for birds example.

FIGURE 13.16  Scree plot for clustering.


13.3.4.9  K-Means Clustering
K-means clustering is a nonhierarchical procedure that works along the following steps:

1. Select k observations as initial cluster centroids (seeds).
2. Assign each observation to the cluster that has the closest centroid (for example, in the Euclidean sense).
3. When all observations have been assigned, recalculate the positions of the k centroids.
4. Repeat until the cluster centroids no longer change.

A key requirement here is that the number of clusters, k, needs to be specified before the start of the analysis. It is also advised to try out different seeds to verify the stability of the clustering solution.

13.3.4.10  Self-Organizing Maps
A self-organizing map (SOM) is an unsupervised learning algorithm that allows you to visualize and cluster high-dimensional data on a low-dimensional grid of neurons. A SOM is a feedforward neural network with two layers. The neurons from the output layer are usually ordered in a two-dimensional rectangular or hexagonal grid (see Figure 13.17). For the former, every neuron has at most eight neighbors, whereas for the latter, every neuron has at most six neighbors. Each input is connected to all neurons in the output layer with weights w = [w1, w2, ..., wN], with N the number of variables. All weights are randomly initialized. When a training vector x is presented, the weight vector wc of each neuron c is compared with x, using, for example, the Euclidean distance metric (beware to standardize the data first):

d(x, wc) = √( Σ_{i=1}^{N} (xi − wci)² )    (13.52)

The neuron that is most similar to x in Euclidean sense is called the best-matching unit (BMU). The weight vector of the BMU and its neighbors in the grid are then adapted using the following learning rule:

wi(t + 1) = wi(t) + hci(t) [x(t) − wi(t)]    (13.53)

whereby t represents the time index during training and hci (t ) defines the neighborhood of the BMU c, specifying the region of influence. The neighborhood function hci (t ) should be a non-increasing function of time and the distance from the BMU. Some popular choices are given as follows:

hci(t) = α(t) exp( −‖rc − ri‖² / (2σ²(t)) )    (13.54)

hci(t) = α(t) if ‖rc − ri‖ ≤ threshold, and 0 otherwise    (13.55)

where rc and ri represent the location of the BMU and neuron i on the map, σ 2 (t ) represents the decreasing radius, and 0 ≤ α (t ) ≤ 1, the learning rate (e.g., α (t ) = A / (t + B), α (t ) = exp(− At )). The decreasing learning rate and radius will give a stable map after a certain amount of training. Training is stopped when the BMUs remain stable, or after a fixed number of iterations (e.g., 500 times the number of SOM neurons). The neurons will then move more and more toward the input observations, and interesting segments will emerge. SOMs can be visualized by means of a U-matrix or component plane.
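As an illustration only (the grid size, radius, learning-rate schedule, and random data are all assumed values), the sketch below performs SOM training steps: finding the best-matching unit of expression (13.52) and applying the Gaussian-neighborhood weight update of expressions (13.53)–(13.54).

```python
# One-pass SOM training sketch on a 5x5 rectangular grid with 3 input variables.
import numpy as np

rng = np.random.default_rng(0)
grid = np.array([(r, c) for r in range(5) for c in range(5)])   # neuron positions
weights = rng.random((25, 3))                                   # randomly initialized weights

def train_step(x, t, alpha0=0.5, sigma0=2.0, T=500):
    alpha = alpha0 * np.exp(-t / T)                 # decreasing learning rate
    sigma = sigma0 * np.exp(-t / T)                 # decreasing radius
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))        # expression (13.52)
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    h = alpha * np.exp(-grid_dist**2 / (2 * sigma**2))          # expression (13.54)
    weights[:] += h[:, None] * (x - weights)                    # expression (13.53)

for t, x in enumerate(rng.random((500, 3))):        # standardized training vectors
    train_step(x, t)
```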


FIGURE 13.17  Rectangular versus hexagonal SOM grid.

• A U (unified distance) matrix essentially superimposes a height Z dimension on top of each neuron, visualizing the average distance between the neuron and its neighbors, whereby typically dark colors indicate a large distance and can be interpreted as cluster boundaries.
• A component plane visualizes the weights between each specific input variable and its output neurons, and as such provides a visual overview of the relative contribution of each input attribute to the output neurons.

13.3.4.11  Database Segmentation
Database segmentation is a type of cluster analysis (CA) for data. Therefore, this section depicts CA in brief, part of which has already been discussed above. However, clustering is discussed again under database segmentation with a different view.

13.3.4.12  Clustering for Database Segmentation
Suppose there are n observations, each characterized by the variables x1, x2, ..., xp. If, based on these characteristic features, we want to group the observations into several groups, then this is called grouping of the observations, which is known as CA. The mathematics behind CA is different from that used in factor analysis; the only similarity with factor analysis is that here also we apply grouping.

• "A cluster is a number of things of the same kind growing or joined together."
• A group of homogeneous things.
• The principle:
  • Objects in the same group are similar to each other.
  • Objects in different groups are as dissimilar as possible.

The grouping is based on similarity within a group and on dissimilarity between groups. Cluster analysis can be described as a process model (Figure 13.18).

13.3.4.13  Criteria for Clustering
13.3.4.13.1  Variables to Be Considered
• Important variables are to be considered, and trivial variables are to be discarded.
• Variables may be of different types based on measurement scales, such as nominal, ordinal, interval, and ratio.


FIGURE 13.18  Process model.

TABLE 13.9 Distance Measures

     A     B     C     D     E
A    0     dab   dac   dad   dae
B    dab   0     dbc   dbd   dbe
C    dac   dbc   0     dcd   dce
D    dad   dbd   dcd   0     dde
E    dae   dbe   dce   dde   0

TABLE 13.10 Variable Representation

Object   Variable 1   Variable 2   …   Variable p
1        x11          x21          …   xp1
2        x12          x22          …   xp2
…        …            …            …   …
n        x1n          x2n          …   xpn

• Similarity and dissimilarity measures
  • This is usually a measure of distance between the objects to be clustered. Consider five objects, A–E. We can go for distance measurements as shown in Table 13.9. The distances, d's, must be known. If a 100-point scale is used, then instead of 0, we would put 100 or, in some cases, 1.

13.3.4.13.2  Variable Representation (Table 13.10)
The data here are the "safety performance data" as shown before.

13.3.4.13.3  Distance Measures
• Euclidean distance

d(i, j) = {(xi1 − xj1)² + (xi2 − xj2)² + ... + (xip − xjp)²}^(1/2)

• Squared Euclidean distance

d(i, j) = (xi1 − xj1)² + (xi2 − xj2)² + ... + (xip − xjp)²

• Manhattan distance

d(i, j) = |xi1 − xj1| + |xi2 − xj2| + ... + |xip − xjp|

• Minkowski distance

d(i, j) = {|xi1 − xj1|^m + |xi2 − xj2|^m + ... + |xip − xjp|^m}^(1/m)

Similarly, all the other distances can also be calculated by their respective formulas. So, as in the process model, for the "safety performance" example: the things/objects to be clustered are 10 departments; the characteristics to be measured are MIS, MSS, and MEDS; and the similarity/dissimilarity measures obtained are various types of distances, such as the Euclidean distance.

13.3.4.13.4  Clustering Algorithms
• Hierarchical joining algorithms
• Non-hierarchical joining algorithms, such as k-means clustering

13.3.4.13.5  Hierarchical Joining Algorithms
• Single linkage (nearest neighbor): distance between two clusters = distance between the two closest members of the two clusters
• Complete linkage (furthest neighbor): distance between two clusters = distance between the two most distant cluster members
• Centroid linkage: distance between two clusters = distance between the multivariate means of each cluster
• Average linkage: distance between two clusters = average distance between all members of the two clusters
• Median linkage: distance between two clusters = median distance between all members of the two clusters
• Ward linkage: distance between two clusters = average distance between all members of the two clusters with adjustment for covariances

For centroid linkage, calculate the mean for all clusters and find the centroid of all the means. These distance measures are applicable only when groups have already been made.

      A    B    C
A     0
B     2    0
C     5    3    0

Suppose we have not yet made any groups. Suppose there are n observations and we know the distances, as in the example above. First, find the smallest distance. Here, "2" is the smallest distance, which is between A and B; thus, A and B can be grouped. Initially, before grouping, each of A, B, and C can be regarded as a cluster. After grouping, the clusters become AB and C.


Now, find the distance matrix as shown below. Here, any of the above-mentioned methods can be chosen, and these distances can be measured. For example, if single linkage is applied, then find distance between A & C and B & C. Here, d(A, C) = 5, d(B, C) = 3, so put “3” in the matrix as shown below. Similarly, for complete linkage, the furthest distance is “5,” for this example, so put “5.”

      AB   C
AB    0    3
C     3    0
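As an illustration only, the sketch below reproduces the single merge step of the A, B, C example above and the two ways of updating the distance from the new cluster AB to C (single and complete linkage).

```python
# One agglomerative step on the three-object example above.
dist = {("A", "B"): 2, ("A", "C"): 5, ("B", "C"): 3}

def d(p, q):
    return dist.get((p, q), dist.get((q, p)))

# Step 1: pick the closest pair of clusters.
pair = min(dist, key=dist.get)                    # ('A', 'B')

# Step 2: update distances from the merged cluster AB to the remaining cluster C.
single_linkage = min(d("A", "C"), d("B", "C"))    # 3  (nearest neighbor)
complete_linkage = max(d("A", "C"), d("B", "C"))  # 5  (furthest neighbor)
print(pair, single_linkage, complete_linkage)
```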

13.3.4.13.6  Agglomerative Hierarchical Clustering Algorithm
Step 1: Identify the variables, p, and objects, n.
Step 2: Collect the data (X_{n×p}).
Step 3: Select similarity or dissimilarity measures.
Step 4: Obtain the distance matrix (D_{n×n}).
Step 5: Start with n clusters, where each cluster contains a single entity.
Step 6: Find the nearest pair of clusters from D_{n×n}. Let the distance between the most similar clusters P and Q be d_PQ.
Step 7: Merge clusters P and Q and label the newly formed cluster as (PQ). Update the entries of the distance matrix D by:
  • deleting the rows and columns corresponding to clusters P and Q, and
  • adding a row and column giving the distances between cluster (PQ) and the remaining clusters.
Step 8: Repeat Steps 6 and 7 a total of (n − 1) times. When all the objects are in a single cluster, the algorithm terminates.
Step 9: Record the identities of the clusters that are merged and the levels (distances or similarities) at which the merges take place.

For example, at first there are five objects – A, B, C, D, and E. A and B are found similar and are thus grouped. Now we have (5 − 1) = 4 clusters, as shown below. Here we delete the rows and columns of A and B and add a new row and column AB. As we proceed in this way, the number of objects in the clusters increases. If we are, for example, happy with 70% similarity, then we keep the clusters obtained at that level.

Example: Single linkage (nearest neighbor) (see Figure 13.19)

Minimum Distance   Cluster
0                  1, 2, 3, 4, 5
2                  (5,3), 1, 2, 4
3                  (1,3,5), 2, 4
5                  (1,3,5), (2,4)
6                  (1,3,5,2,4)


FIGURE 13.19  Dendrogram.

Example: Complete linkage (farthest neighbor)

Maximum Distance   Cluster
0                  1, 2, 3, 4, 5
11                 (1,5), 2, 3, 4
10                 (2,1,5), 3, 4
9                  (2,1,5), (3,4)
8                  (2,1,5,3,4)

13.3.4.13.7  Average Linkage


Minimum Distance   Cluster
0                  1, 2, 3, 4, 5
2                  (1,2), 3, 4, 5
3                  (1,2), 3, (4,5)
4.5                (1,2), (3,4,5)
7.25               (1,2,3,4,5)

13.3.4.13.8  K-Means Clustering Algorithm
Step 1: Identify the variables, p, and objects, n.
Step 2: Collect the data (X_{n×p}).
Step 3: Select similarity or dissimilarity measures.
Step 4: Obtain the distance matrix (D_{n×n}).
Step 5: Partition the objects into k initial clusters.
Step 6: Proceed through the list of objects, assigning an object to the cluster whose centroid (mean) is the nearest.
Step 7: Re-calculate the centroid for the cluster receiving the new object and for the cluster losing the object.
Step 8: Repeat Steps 6 and 7 until no more re-assignments take place.

Example: Variables: 2 (x1, x2); objects: 4 (A, B, C, D)

Distance between AB and A = coordinates of AB − coordinates of A = (2, 2) − (5, 3) = (2 − 5)² + (2 − 3)² = 10
Distance between AB and B = coordinates of AB − coordinates of B = (2, 2) − (−1, 1) = (2 + 1)² + (2 − 1)² = 10
Distance between AB and C = coordinates of AB − coordinates of C = (2, 2) − (1, −2) = (2 − 1)² + (2 + 2)² = 17


Distance between AB and D = coordinates of AB − coordinates of D = (2, 2) − (−3, −2) = (2 + 3)² + (2 + 2)² = 41
Distance between CD and A = coordinates of CD − coordinates of A = (−1, −2) − (5, 3) = (−1 − 5)² + (−2 − 3)² = 61
Distance between CD and B = coordinates of CD − coordinates of B = (−1, −2) − (−1, 1) = (−1 + 1)² + (−2 − 1)² = 9
Distance between CD and C = coordinates of CD − coordinates of C = (−1, −2) − (1, −2) = (−1 − 1)² + (−2 + 2)² = 4
Distance between CD and D = coordinates of CD − coordinates of D = (−1, −2) − (−3, −2) = (−1 + 3)² + (−2 + 2)² = 4

Thus, we have,

Since d ( AB, A) < d (CD, A), A belongs to cluster AB. Since d (CD, B) < d ( AB, B), B belongs to cluster CD. Since d (CD, C ) < d ( AB, C ), C belongs to cluster CD. Since d (CD, D) < d ( AB, D), D belongs to cluster CD. Thus, the new clusters are A and BCD.

Coordinates of Centroid

Cluster   X1                         X2
A         5                          3
(BCD)     ((−1) + 1 + (−3))/3 = −1   (1 + (−2) + (−2))/3 = −1

Next step: the distances now are as follows.
Distance between A and B = coordinates of A − coordinates of B = (5, 3) − (−1, 1) = (5 + 1)² + (3 − 1)² = 40
Distance between A and C = coordinates of A − coordinates of C = (5, 3) − (1, −2) = (5 − 1)² + (3 + 2)² = 41
Distance between A and D = coordinates of A − coordinates of D = (5, 3) − (−3, −2) = (5 + 3)² + (3 + 2)² = 89
Distance between BCD and A = coordinates of BCD − coordinates of A = (−1, −1) − (5, 3) = (−1 − 5)² + (−1 − 3)² = 52
Distance between BCD and B = coordinates of BCD − coordinates of B = (−1, −1) − (−1, 1) = 4
Distance between BCD and C = coordinates of BCD − coordinates of C = (−1, −1) − (1, −2) = 5
Distance between BCD and D = coordinates of BCD − coordinates of D = (−1, −1) − (−3, −2) = 5


Objects   A    B    C    D
A         0    40   41   89
(BCD)     52   4    5    5

Since d(A, A) < d(BCD, A), A belongs to cluster A. Since d(BCD, B) < d(A, B), B belongs to cluster BCD. Since d(BCD, C) < d(A, C), C belongs to cluster BCD. Since d(BCD, D) < d(A, D), D belongs to cluster BCD. Thus, the final clusters are A and BCD.
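As an illustration only, the sketch below runs the K-means iteration worked through above, starting from the initial clusters {A, B} and {C, D} and using squared Euclidean distances; it terminates with the clusters A and BCD.

```python
# K-means on the four points of the example, reproducing the assignments above.
points = {"A": (5, 3), "B": (-1, 1), "C": (1, -2), "D": (-3, -2)}
clusters = [["A", "B"], ["C", "D"]]            # initial partition

def centroid(names):
    xs, ys = zip(*(points[n] for n in names))
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

while True:
    cents = [centroid(c) for c in clusters]
    new = [[], []]
    for name, p in points.items():
        k = min(range(len(cents)), key=lambda i: sq_dist(p, cents[i]))
        new[k].append(name)
    if new == clusters:                        # stop when assignments no longer change
        break
    clusters = new

print(clusters)    # [['A'], ['B', 'C', 'D']]
```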

13.3.4.13.9  Deviation Detection
The deviation detection mechanism detects deviations or outliers in a database. Such deviations can be measured by applying various statistical techniques as well as visualization techniques.

13.4  CONCLUSION
This chapter provides an enlightening discussion on the various aspects of data warehousing and data mining. Data warehousing has been depicted in brief, whereas data mining has been elaborated to some extent. The data mining section covers both regression and clustering concepts along with some other concepts. Readers are expected to enjoy reading and learning from this chapter.

REFERENCES
Hammergren, T. C., Simon, A. R. (2009). Data Warehousing for Dummies, 2nd Edition. Wiley Publishing, Inc., Indianapolis, Indiana.
Witten, I. H., Frank, E., Hall, M. A., Pal, C. J. (2017). Data Mining: Practical Machine Learning Tools and Techniques. Elsevier, Inc., USA.

14

Intelligent Decision Support System

14.1  INTRODUCTION The word ‘intelligence’ from machine perspective indicates the application of artificial intelligence. In other words, it indicates the implementation of human intelligence in machines and related applications. In case of intelligent decision support system (IDSS), the purpose is to automate the decision-making process so as to imitate the human decision-makers. Thus, a significant number of tools and techniques can be mentioned for intelligent decision-making tools. This chapter mentions a few of those, namely, enterprise information system (EIS), which is discussed in brief; knowledge management (KM), which includes a significant number of nature-based algorithms that are basically based on different natural phenomena; artificial intelligence (AI), which discusses various aspects of it. However, the nature-based algorithms and many other techniques are also included as the techniques under AI. AI techniques include many other tools, techniques, and methods, such as, natural language processing. Some of these areas of AI have been briefed in this chapter. Besides, taking the smart technology under consideration as we are living in a smart age, some aspects of smart technologies such as cloud computing have also been introduced very briefly in this chapter. At the end of this chapter, latest research trends in the field of intelligent decision-making have also been mentioned from the existing literature. Thus, the following section discusses EIS very briefly.

14.2  ENTERPRISE INFORMATION SYSTEM Various aspects of EIS can be discussed in this section. However, from the aspect of the application and the use of various latest technologies which take care of security aspect of data communication, short discussion on semantic service-oriented architecture (SOA) are being introduced in this section. Semantic SOA-based model helps both the users and the businesses to understand the data, especially the vast amount of complex data in data warehouses (Cruz-Canha and Varajão, 2011). Semantic SOA makes use of semantic web services. Web-based SOA generally uses different web technologies in order to increase the interoperability. Such technologies include Universal Description Discovery and Integration (UDDI), Web Service Description Language (WSDL), and Simple Object Access Protocol (SOAP). One of the examples of web-based SOA is Federated Enterprise Resource Planning (FERP), which is based on existing web service technologies and has several advantages such as the following: • The functionalities can be reused. • Any kind of adaptation is not required in order to switch from one particular service to another one. • It is flexible in terms of application since it is based on standards. However, in general, SOA-based applications may have many disadvantages, such as lack of interoperability, huge unstructured data, and increasing number of systems that wait to be linked. Semantic web is a kind of extension of World Wide Web facilitating the humans and machines to

DOI: 10.1201/9781003307655-14


understand complex data. The Semantic web can also integrate intra- and inter-organizational processes well, increase interoperability, and satisfy requests of both human users and machines. Web services are very relevant in SOA architecture, since the web service is the main unit of Semantic SOA. The main components of SOA architecture are:

• Web service provider
• UDDI registry
• Web service consumer

Semantic SOA-based applications may have the following advantages:

• Automatic service discovery.
• Automatic service invocation.
• Automatic service composition.
• Automatic service interoperation.
• Automatic service execution monitoring.

A Semantic SOA-based model may have the following components:

• Functions required for user interfaces.
• Required business processes described in an XML-based workflow language.
• Web service consumer system, using XML schema.
• Web service provider system, with the help of HTTP (Hypertext Transfer Protocol).
• Web service directory.
• Semantic web service system.
• Cloud service provider.
• Semantic mediator-based system.
• Validator.
• Validation repository.
• Cloud directory.

There are many other essential aspects that could also be discussed. However, since the purpose of this chapter is to present a glimpse of the various aspects of intelligent decision support systems, the next section discusses KM.

14.3  KNOWLEDGE MANAGEMENT In simple words, it can be said that processed data are information and processed information is knowledge. Data consist of facts, raw numbers, or assertions. Information is the processed or manipulated data in order to find the relevance and purpose of the data. Processed information with an intention to take decision or action or to set direction to information is known as knowledge. For example, consider the demand data for a particular product for 10 months. This set of demands is the raw data as collected. Now forecasting methods may be applied on this demand data to forecast the future demand for the subsequent months this becomes information obtained by processing the raw data. Now the errors in the forecasted demands may be calculated after the actual demand is realized. When this error and the forecasted demands will be used to find the reason for the modification of the forecasting method, then it will become the knowledge. Thus, knowledge may help to reduce information from the original amount of data. According to Wiig’s (1999), the definition of knowledge can be presented as:


“Knowledge consists of truths and beliefs, perspectives and concepts, judgments and expectations, methodologies, and know-how. It is possessed by humans, agents, or other active entities and is used to receive information and to recognize and identify; analyze, interpret, and evaluate; synthesize and decide; plan, implement, monitor, and adapt—that is, to act more or less intelligently. In other words, knowledge is used to determine what a specific situation means and how to handle it” From the subjective view, knowledge can be taken to be state of mind or a practice. From the objective view, knowledge can be regarded as an object or a way to access information or just as capability (Becerra-Fernandez and Sabherwal, 2015). There can be different types of knowledge. From the perspective of the use of knowledge, knowledge can be social, individual, causal, conditional, rational, or pragmatic. Knowledge can also be declarative or substantive knowledge, which are basically facts about the relationships among different variables, for instance and procedural knowledge, which describes a procedure. For example, knowledge about how to write a research paper or thesis is procedural knowledge. From the perspective of the expression of the knowledge, it can be classified into explicit, implicit, and tacit knowledge. • Explicit knowledge: Knowledge that can be easily expressed in words or numbers and can be shared through discussion or by writing it down and putting it into documents, manuals, models, or databases. • Implicit knowledge: Knowledge that is automatically present or inherent. • Tacit knowledge: The knowledge of know-how that people carry in their heads – skills, experiences, insight, intuition, and judgment. It is difficult to articulate or write down – shared between people through discussion, stories, and personal interactions. Explicit knowledge can both be written and said. Thus, whatever content, this chapter has, can be read and written indicating explicit knowledge. Implicit knowledge can be written but has not been written yet. For example, the depth of understanding of the author from reading a particular chapter is written but mostly, it is not written by most of us although we can convey it to our friends and acquaintances. Tacit knowledge cannot be written down. For example, suppose that, just by speaking to somebody casually with a person on some matter, you had been able to read his mind through his facial expression. Such knowledge is tacit in nature. Such type of knowledge can be converted from form to the other form. • The conversion of tacit knowledge to tacit knowledge is called socialization. For example, the above understanding resulted from a person’s mind reading can be said confidentially to a very close friend which is basically ‘socialization’. • The conversion of tacit knowledge to explicit knowledge is called externalization. For example, if the abovementioned understanding about several persons’ mind is written somewhere, then it will be explicit knowledge. • The conversion of explicit knowledge to tacit knowledge is known as internalization. For example, while reading story books, we generally visualize the characters, situations and scenarios in our mind automatically. This kind of visualization represents tacit ­k nowledge for this example. Such visualization may be drawn on paper converting that tacit ­k nowledge again back to explicit knowledge. 
• The conversion of explicit knowledge in one form to explicit knowledge in another form is known as combination. For example, when students write in answer scripts in examinations after reading the books, the explicit knowledge in the form of book is converted to writing in answer script. This is an example of converting explicit knowledge in one form (book) to explicit knowledge in another form (answer script).


Some issues on this kind of conversion need to be mentioned:
• Conversions between tacit and explicit knowledge are particularly important.
• Only by tapping into tacit knowledge can new and improved explicit knowledge be created. In turn, better explicit knowledge is essential for stimulating the development of new, high-level tacit knowledge.
• KM has tended to focus on improving and managing explicit knowledge, whereas knowledge creation and application require more attention to high-level tacit knowledge.

From the perspective of the type of possession of knowledge, knowledge can be general knowledge or specific (idiosyncratic) knowledge. General knowledge is common among individuals. For example, knowledge about cricket among people who love cricket is general knowledge. Specific knowledge is possessed by a few individuals, not by many. For example, not many people know about subjects such as “System Dynamics” or “Algebraic Geometry”; people who know such subjects can be regarded as having specific knowledge of those subjects. Knowledge of higher quality can be termed expertise (Becerra-Fernandez and Sabherwal, 2015). There are three types of expertise – associational expertise, motor skills, and theoretical expertise. KM can be defined as “doing what is needed to get the most out of knowledge resources”. Peter Drucker may be considered the father of KM. The need for KM can be seen in its benefits, which include the following:
• To gain business competencies.
• To promote innovation.
• To minimize the delivery time to customers.
• To produce and supply products with the desired quality.
• To maximize organizational commitment.
• To gain sustainable competitive advantage.

However, KM involves knowledge that is both internal and external to an organization. Internal knowledge includes knowledge about the organization’s own processes, employees, machines, methods, documentation, intellectual property, and all related elements. External knowledge includes knowledge about the organization’s external entities and processes. Gathering knowledge in today’s business environment is subject to certain constraints, such as:
• The complexity of the knowledge domain has been increasing continuously for many reasons, such as the mountainous and growing amount of information, increasing competition, increasing complexity of the relevant processes, and the frequently changing tastes and preferences of customers.
• The frequently changing tastes and preferences of customers lead to increasing volatility of the market, which creates more pressure and demand for KM.
• There is increasing demand for faster responsiveness to customers and to the changing business scenario, and a need for faster decision-making.
• Voluntary or involuntary employee turnover. KM is especially required for organizations with a very high turnover rate or organizations that are facing downsizing (Becerra-Fernandez and Sabherwal, 2015).

KM encompasses four aspects: the knowledge capturing system, knowledge sharing system, knowledge discovery system, and knowledge application system. Modern technologies, including advanced information technology and smart technologies, have greatly facilitated these aspects and, at the same time, added further complexity. Advancement in information technology has immensely facilitated knowledge systems; on the other hand, smart technology demands new


and sophisticated methods for the knowledge systems. Therefore, AI and machine learning (ML) play an important role in KM. KM solutions are part of the entire KM effort. The different types of KM solutions involve three components – mechanism, technology, and infrastructure. KM mechanisms indicate the different methods that are used to promote KM; the technology part basically indicates information technology or smart technology. The infrastructure forms the base for KM. The basic components of KM infrastructure include the following:
• Organization culture
• Organization structure
• IT infrastructure
• Physical environment
• Basic knowledge

KM mechanisms may include methods such as on-the-job training, learning by doing, learning by observation, and face-to-face meetings (Becerra-Fernandez and Sabherwal, 2015). KM technology basically indicates information technology and the application of AI. KM depends on four KM processes and seven sub-processes. The four main processes are:
• Knowledge discovery
• Knowledge capturing
• Knowledge sharing
• Knowledge application

The seven sub-processes are:
• Combination (sub-process under knowledge discovery).
• Socialization (sub-process under knowledge discovery and knowledge sharing).
• Externalization (sub-process under knowledge capturing).
• Internalization (sub-process under knowledge capturing).
• Exchange (sub-process under knowledge sharing).
• Direction (sub-process under knowledge application).
• Routines (sub-process under knowledge application).

KM systems are the results of the integration of KM mechanisms and KM technologies. The effect of KM can be realized in an organization from four perspectives – people, processes, products, and organizational performance. It affects the learning ability of employees, their adaptability, and their job satisfaction: learning ability impacts adaptability, which in turn impacts job satisfaction. KM also affects various processes in an organization, such as marketing, manufacturing, finance, human resource management, engineering, and public relations, basically in terms of efficiency, effectiveness, and innovation. Effectiveness is indicated through reduced mistakes and an enhanced capability to adapt to a changing environment. Process efficiency is realized through improved productivity and cost reduction. Process innovation can be made possible through brainstorming and through welcoming and properly utilizing new ideas. Thus, the advantages of KM in terms of its use can be delineated through the following points (Bergeron, 2003):
• Minimization of mistakes.
• Increased capability and willingness to adapt to a changing environment.
• Productivity enhancement.
• Minimization of cost.
• Increasing the number of value-added products.
• Increasing the knowledge-based products.
• Increasing return on investment.
• Economies of scale and scope.
• Sustainable competitive advantage.

A large number of technologies can be mentioned as KM technologies. However, the most important among them include the following. Each of these technologies demands a separate detailed discussion, which is beyond the scope of this chapter:
• Artificial Intelligence (AI)
• Rule-based systems
• Case-based reasoning (CBR)
• Constraint-based reasoning
• Model-based reasoning
• Diagrammatic reasoning

Before the application of KM, knowledge must be captured. “Knowledge capture systems support the process of eliciting either explicit or tacit knowledge that may reside in people, artifacts, or organizational entities” (Becerra-Fernandez and Sabherwal, 2015). Among the different types of knowledge mentioned earlier in this section, tacit knowledge can be captured through various management actions, interactions among the employees of an organization, different intra- and inter-organizational events, different informal communications throughout the organization, and so on. Storytelling is a very important part of tacit knowledge capturing. Capturing tacit knowledge has attracted significant attention from both researchers and practitioners, as this type of knowledge can convey very vital knowledge that is not possible to gather in any other fashion. Knowledge capturing may be accomplished by applying different knowledge capturing tools. Some of these tools are listed below. Among them, the concept map and the semantic network are explained in brief afterward; explaining context-based reasoning would require considerable space and is thus not covered in this book:
• Concept map
• Semantic network
• Context-based reasoning

14.3.1  Concept Map
A concept map can be defined as a brainstorming tool with a purpose. A concept map is basically a graphical tool: it depicts a concept using arrows and linking words in order to show how the different ideas within a particular concept are related to one another. Concept maps are generally used to represent and understand complex concepts. The basic steps of drawing a concept map are delineated below:

1. Choose a drawing medium.
2. Select a concept to draw.
3. Identify the related concepts.
4. Organize shapes and lines.
5. Give finishing touch to the map.

Lucidchart is a very common software tool for drawing concept maps. The different components of a concept map are – concepts; linking words or phrases; propositional structure; hierarchical structure;


focus questions; and cross-links. A concept map is capable of depicting a complex concept in a lucid style. The overall benefits of a concept map include the following:
• Helps to visualize the overall concept.
• Inspires brainstorming, higher-level thinking, and innovation.
• Generates new concepts and solutions to complex problems.
• Represents complex concepts in a lucid style.
• Promotes collaborative learning.
• Inspires creativity.
• Identifies areas which need further exploration and investigation.

An example of concept map is shown in Figure 14.1. Figure 14.1 shows that the subject “System Dynamics” can be learnt from the book by J. J. Forrester or from the online material or from the user manual. The book by J. J. Forrester can be searched in National Library (based on the developer’s convenience) or can be purchased from the Amazon website or can be collected from a friend. The online material can either be obtained from journal databases or can be searched in Google scholar search engine. The user manual can be obtained either from STELLA or from DYNAMO software. DYNAMO software is installed in departmental library of the organization and in the university library. Thus, the entire concept or idea for obtaining System Dynamics is depicted in Figure 14.1.
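The structure described above can also be captured in a small data structure. The following is a minimal Python sketch (not part of the original text) that stores the concept map of Figure 14.1 as an adjacency dictionary and enumerates the possible routes for obtaining “System Dynamics”; the node names simply follow the description above and are illustrative.

# A minimal sketch: the concept map of Figure 14.1 as an adjacency dictionary.
# Node names follow the textual description above and are illustrative only.
concept_map = {
    "System Dynamics": ["Book by J. J. Forrester", "Online material", "User manual"],
    "Book by J. J. Forrester": ["National Library", "Amazon website", "Friend"],
    "Online material": ["Journal databases", "Google Scholar"],
    "User manual": ["STELLA", "DYNAMO"],
    "DYNAMO": ["Departmental library", "University library"],
}

def paths(node, prefix=()):
    """Yield every route from the root concept down to a leaf node."""
    children = concept_map.get(node, [])
    if not children:
        yield prefix + (node,)
    for child in children:
        yield from paths(child, prefix + (node,))

for route in paths("System Dynamics"):
    print(" -> ".join(route))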

14.3.2  Semantic Network
A semantic network or frame network shows the relationships among different concepts in a network. The network can be either directed or undirected. The vertices in the network represent the concepts and the edges represent the relationships among the concepts. Semantic networks are frequently used in natural language processing applications. A semantic network is suitable for knowledge which can be understood through a set of inter-related concepts. An example of a semantic network is shown in Figure 14.2.

FIGURE 14.1  Example of concept map.

FIGURE 14.2  Example of semantic network.

Figure 14.2 can be explained by the following sentences (a small programmatic sketch of this network is given after the list):
• Dog is a mammal.
• Dog has four legs.
• Dog has one tail.
• Bulldog is a dog breed.
• German Shepherd is a dog breed.
• Siberian Husky is a dog breed.
• Bulldogs are sociable dogs.
• German Shepherds are courageous dogs.
• Siberian Huskies are intelligent and friendly.
• ‘Sociable’, ‘courageous’, and ‘intelligent and friendly’ are the characteristics of the respective dog breeds.
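As referenced above, the following is a minimal Python sketch (not part of the original text) that stores the semantic network of Figure 14.2 as (subject, relation, object) triples and answers two simple queries over it; the relation names are illustrative assumptions rather than labels taken from the figure.

# A minimal sketch: the semantic network of Figure 14.2 as a list of triples.
# Relation names ("is-a", "has", ...) are illustrative assumptions.
triples = [
    ("Dog", "is-a", "Mammal"),
    ("Dog", "has", "Four legs"),
    ("Dog", "has", "One tail"),
    ("Bulldog", "is-a-breed-of", "Dog"),
    ("German Shepherd", "is-a-breed-of", "Dog"),
    ("Siberian Husky", "is-a-breed-of", "Dog"),
    ("Bulldog", "characteristic", "Sociable"),
    ("German Shepherd", "characteristic", "Courageous"),
    ("Siberian Husky", "characteristic", "Intelligent and friendly"),
]

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

# Which breeds of dog does the network know about?
print([s for s, _, _ in query(relation="is-a-breed-of", obj="Dog")])
# What characteristic does the Bulldog have?
print([o for _, _, o in query(subject="Bulldog", relation="characteristic")])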

However, after this brief overview of KM concepts, the next section discusses AI.

14.4  ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) can be defined as the application of human-like intelligence in machines and software systems. There are many components of AI, such as intelligence quotient, reasoning, learning, problem-solving, perception, and the use of language. The intelligence of a computer can be logical intelligence, probabilistic intelligence, emergent intelligence, neural intelligence, or language understanding (Neapolitan and Jiang, 2018). The field of AI includes several concepts such as machine learning, natural language processing, deep learning, expert systems, artificial general intelligence, speech recognition, problem-solving, fuzzy logic, cognitive computing, and nature-based techniques. Searle (1980) said that AI can be strong or weak. Searle commented on strong AI saying that “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states”. “The proposition that computers could appear and behave intelligently, but not necessarily understand, is called weak AI” (Neapolitan and Jiang, 2018). This section discusses different components and aspects of AI in brief. The following subsection discusses propositional logic as a part of logical intelligence, and the subsequent subsection discusses different nature-based algorithms.


14.4.1  Propositional Logic
Although propositional logic has already been defined and described in detail earlier in this book, it is briefly revisited in this chapter as a technique under AI so that, even if the reader skips the chapter on propositional logic, it is still clear that it falls within the periphery of AI. Thus, a different set of examples is provided in this chapter. Propositional logic provides rules that give mathematics direct real-world meaning. A proposition is a statement that can either be true or false (Rosen, 1998). For example, consider the statement “Tokyo is the capital of Japan”. This is a fact and thus a proposition. A statement which is neither true nor false cannot be a proposition; for example, interrogative sentences cannot be propositions. A proposition can be represented by a symbol; for instance, the statement “Tokyo is the capital of Japan” can be taken to be the value of a symbol P. The negation of a proposition P is represented by ¬P. The representation of such propositions in mathematical form can be facilitated by truth tables, which are used in binary mathematics. The basic truth tables are those for the operations AND, OR, and NOT. These truth tables are shown in Tables 14.1–14.3. Tables 14.1 and 14.2 show binary operations between two operands, P and Q, with the result indicated by the values of R. The AND operation in Table 14.1 shows that the result is true (represented by the value ‘1’) iff both the inputs are true; otherwise, the result is false (represented by the value ‘0’).

TABLE 14.1  Truth Table for AND Operation
P   Q   R = P ∧ Q
0   0   0
0   1   0
1   0   0
1   1   1

TABLE 14.2  Truth Table for OR Operation
P   Q   R = P ∨ Q
0   0   0
0   1   1
1   0   1
1   1   1

TABLE 14.3  Truth Table for NOT Operation
P   R = ¬P
0   1
1   0


The OR operation in Table 14.2 shows that the result is false iff both the inputs are false; otherwise, the result is true. Table 14.3 shows the result of the negation operator, which is a unary operator: the output value is the opposite of the input value. The AND and OR operations are also called the conjunction and disjunction operators. In addition to the above three truth tables, there are truth tables for other operations – the XOR, implication, and equivalence operations – shown in Tables 14.4–14.6, respectively. Table 14.4 shows that the result of the XOR operation is true only for dissimilar inputs. This is just the opposite of the equivalence operation, whose truth table is given in Table 14.6: there the results are false for dissimilar input values and true otherwise. Table 14.5, the truth table for implication, shows that the result is false only when the first input is true and the second input is false. The meaning of the implication operator can be understood from expression (14.1); a short code sketch that generates these truth tables and verifies expression (14.1) is given after the expression. As the name suggests, the implication operator P → Q indicates that “P implies Q” or “if P then Q”. The equivalence operation P ⇔ Q indicates both P → Q and Q → P, that is, “P if and only if Q”. There are some other implications related to P → Q:
• The proposition Q → P is called the converse of the proposition P → Q.
• The proposition ¬Q → ¬P is called the contrapositive of the proposition P → Q.

P → Q = ¬P ∨ Q    (14.1)
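The following is a minimal Python sketch (not part of the original text) that generates the truth tables of Tables 14.1–14.6 programmatically and verifies expression (14.1) by enumerating all truth assignments.

# A minimal sketch: generate the truth tables of Tables 14.1-14.6 and check
# expression (14.1), i.e., that P -> Q has the same truth values as (not P) or Q.
from itertools import product

def implies(p, q):        # implication: false only when p is true and q is false
    return not (p and not q)

operators = {
    "P AND Q": lambda p, q: p and q,
    "P OR Q":  lambda p, q: p or q,
    "P XOR Q": lambda p, q: p != q,
    "P -> Q":  implies,
    "P <-> Q": lambda p, q: p == q,
}

for name, op in operators.items():
    print(name)
    for p, q in product([0, 1], repeat=2):
        print(p, q, int(op(bool(p), bool(q))))

# Expression (14.1): P -> Q is logically equal to (not P) or Q
assert all(implies(p, q) == ((not p) or q)
           for p, q in product([False, True], repeat=2))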



The above concepts can be used to convert normal English sentences to symbolic form. Some examples are given below.

TABLE 14.4  Truth Table for XOR Operation
P   Q   R = P ⊕ Q
0   0   0
0   1   1
1   0   1
1   1   0

TABLE 14.5  Truth Table for Implication Operation
P   Q   R = P → Q
0   0   1
0   1   1
1   0   0
1   1   1

TABLE 14.6  Truth Table for Equivalence Operation
P   Q   R = P ⇔ Q
0   0   1
0   1   0
1   0   0
1   1   1


Example 1: Tomorrow will be a rainy day only if temperature is lower and West side of the sky is cloudy and East side of the sky is not cloudy.
Consider the following symbols for this example.
P: Tomorrow will be a rainy day.
Q: Temperature is lower.
R: West side of the sky is cloudy.
S: East side of the sky is cloudy.
Thus, the above example can be translated as follows:
Tomorrow will be a rainy day → (Temperature is lower ∧ West side of the sky is cloudy ∧ ¬ East side of the sky is cloudy)

P → (Q ∧ R ∧ ¬S)    (14.2)



Example 2: Learning Operations Research will be easier if Learner knows Matrix Algebra and Calculus.
Consider the following symbols for this example.
P: Learning Operations Research will be easier.
Q: Learner knows Matrix Algebra.
R: Learner knows Calculus.
Thus, the above example can be translated as follows:
(Learner knows Matrix Algebra ∧ Learner knows Calculus) → Learning Operations Research will be easier.

(Q ∧ R) → P    (14.3)



Truth tables for the logical expressions shown above (expressions (14.1)–(14.3)) can be constructed to establish the truth values of the overall expressions. For example, the truth tables for expressions (14.3) and (14.2) are shown in Tables 14.7 and 14.8, respectively. An expression whose results are all true is called a Tautology; for instance, P ∨ ¬P is a Tautology. If all the results of an expression are false, then that expression is called a Contradiction (for instance, P ∧ ¬P). An expression which is neither a Tautology nor a Contradiction is known as a Contingency. Table 14.7 shows that the expression (Q ∧ R) → P is false only when Q and R are both true and P is false, so it is a Contingency. Similarly, the expression P → (Q ∧ R ∧ ¬S), whose truth table is shown in Table 14.8, is a Contingency, since its results are neither all true nor all false. A short code sketch that checks these classifications by enumeration is given below, followed by the truth tables.
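The following is a minimal Python sketch (not part of the original text) that classifies an expression as a Tautology, Contradiction, or Contingency by enumerating all truth assignments, applied to P ∨ ¬P and to expressions (14.3) and (14.2).

# A minimal sketch: classify propositional expressions by enumerating all
# truth assignments of their variables.
from itertools import product

def implies(p, q):
    return (not p) or q

def classify(results):
    if all(results):
        return "Tautology"
    if not any(results):
        return "Contradiction"
    return "Contingency"

# P OR (NOT P) -- the classic tautology
print(classify([p or (not p) for p in (False, True)]))                 # Tautology

# Expression (14.3): (Q AND R) -> P -- false only when Q = R = 1 and P = 0
print(classify([implies(q and r, p)
                for p, q, r in product([False, True], repeat=3)]))     # Contingency

# Expression (14.2): P -> (Q AND R AND NOT S)
print(classify([implies(p, q and r and (not s))
                for p, q, r, s in product([False, True], repeat=4)]))  # Contingency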

TABLE 14.7  Truth Table for (Q ∧ R) → P
P   Q   R   (Q ∧ R)   Result = (Q ∧ R) → P
0   0   0   0   1
0   0   1   0   1
0   1   0   0   1
0   1   1   1   0
1   0   0   0   1
1   0   1   0   1
1   1   0   0   1
1   1   1   1   1


TABLE 14.8  Truth Table for P → (Q ∧ R ∧ ¬S)
P   Q   R   S   ¬S   (Q ∧ R ∧ ¬S)   P → (Q ∧ R ∧ ¬S)
0   0   0   0   1   0   1
0   0   0   1   0   0   1
0   0   1   0   1   0   1
0   0   1   1   0   0   1
0   1   0   0   1   0   1
0   1   0   1   0   0   1
0   1   1   0   1   1   1
0   1   1   1   0   0   1
1   0   0   0   1   0   0
1   0   0   1   0   0   0
1   0   1   0   1   0   0
1   0   1   1   0   0   0
1   1   0   0   1   0   0
1   1   0   1   0   0   0
1   1   1   0   1   1   1
1   1   1   1   0   0   0

If the equivalence P ⇔ Q is a Tautology, then the propositions P and Q are said to be logically equivalent. Other logical expressions can also be logically equivalent. Examples 3 and 4 show the equivalence of ¬(P ∨ Q) with ¬P ∧ ¬Q and of ¬(P ∨ (¬P ∧ Q)) with ¬P ∧ ¬Q.

Example 3: Establishing the equivalence of the expressions ¬(P ∨ Q) and ¬P ∧ ¬Q with the help of a truth table. The truth table for the respective logical expressions in Example 3 is provided in Table 14.9. Table 14.9 shows that the truth values of ¬(P ∨ Q) and ¬P ∧ ¬Q are the same, which proves that the logical expressions ¬(P ∨ Q) and ¬P ∧ ¬Q are equivalent.

Example 4: Establishing the equivalence of the expressions ¬(P ∨ (¬P ∧ Q)) and ¬P ∧ ¬Q with the help of a truth table. The truth table for the respective logical expressions in Example 4 is provided in Table 14.10. Table 14.10 shows that the truth values of ¬(P ∨ (¬P ∧ Q)) and ¬P ∧ ¬Q are the same, which proves that the logical expressions ¬(P ∨ (¬P ∧ Q)) and ¬P ∧ ¬Q are equivalent.

TABLE 14.9  Equivalence of ¬(P ∨ Q) with ¬P ∧ ¬Q
P   Q   (P ∨ Q)   ¬(P ∨ Q)   ¬P   ¬Q   ¬P ∧ ¬Q
0   0   0   1   1   1   1
0   1   1   0   1   0   0
1   0   1   0   0   1   0
1   1   1   0   0   0   0


TABLE 14.10  Equivalence of the Expressions ¬(P ∨ (¬P ∧ Q)) with ¬P ∧ ¬Q
P   Q   ¬P   ¬Q   ¬P ∧ Q   P ∨ (¬P ∧ Q)   ¬(P ∨ (¬P ∧ Q))   ¬P ∧ ¬Q
0   0   1   1   0   0   1   1
0   1   1   0   1   1   0   0
1   0   0   1   0   1   0   0
1   1   0   0   0   1   0   0

TABLE 14.11  Logical Equivalences
Equivalence Expressions                                              Laws
P ∧ T ⇔ P;  P ∨ F ⇔ P                                                Identity Laws
P ∨ T ⇔ T;  P ∧ F ⇔ F                                                Domination Laws
P ∨ P ⇔ P;  P ∧ P ⇔ P                                                Idempotent Laws
¬(¬P) ⇔ P                                                            Double Negation Law
P ∨ Q ⇔ Q ∨ P;  P ∧ Q ⇔ Q ∧ P                                        Commutative Laws
(P ∨ Q) ∨ R ⇔ P ∨ (Q ∨ R);  (P ∧ Q) ∧ R ⇔ P ∧ (Q ∧ R)                Associative Laws
P ∨ (Q ∧ R) ⇔ (P ∨ Q) ∧ (P ∨ R);  P ∧ (Q ∨ R) ⇔ (P ∧ Q) ∨ (P ∧ R)    Distributive Laws
¬(P ∨ Q) ⇔ ¬P ∧ ¬Q;  ¬(P ∧ Q) ⇔ ¬P ∨ ¬Q                              De Morgan’s Laws

The different logical equivalences, or rules or laws, are shown in Table 14.11. These laws are used to establish logical equivalences algebraically; logical equivalence by algebraic methods is not explained here, but a short code sketch that verifies two of these laws by enumeration is given below. The above discussion shows that propositional logic can be used to express natural English sentences in symbolic form. The next subsection discusses nature-based algorithms.
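The following is a minimal Python sketch (not part of the original text) that verifies De Morgan's laws and one distributive law from Table 14.11 by enumerating every truth assignment of P, Q, and R.

# A minimal sketch: check two of the laws in Table 14.11 by enumeration.
from itertools import product

for p, q, r in product([False, True], repeat=3):
    # De Morgan's laws
    assert (not (p or q)) == ((not p) and (not q))
    assert (not (p and q)) == ((not p) or (not q))
    # Distributive law: P OR (Q AND R) <=> (P OR Q) AND (P OR R)
    assert (p or (q and r)) == ((p or q) and (p or r))

print("De Morgan's and distributive laws hold for all assignments.")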

14.4.2  Nature-Based Optimization Techniques
Nature-based optimization techniques are based on different natural or physical phenomena. These techniques are known as inexact optimization techniques because of an interesting characteristic: on the one hand, their application cannot guarantee optimality, and on the other hand, it is usually not possible to show that the solutions they produce are not optimal. This seemingly self-contradictory property is both an advantage and a disadvantage of these techniques. They are typically applied when the existing exact mathematical techniques cannot solve a complex problem. These techniques are called


meta-heuristics because of their nature. Each of these techniques provides a set of solutions instead of a single solution; for multi-objective problems this set is known as the set of Pareto optimal solutions, a set of equally preferable solutions. The existing literature shows a very large number of such techniques. As parts of AI, some of these techniques are described in brief in the following subsections. Some major or benchmark nature-based techniques have been depicted in the work of Bandyopadhyay and Bhattacharya (2013). Since uncertainty is an inherent part of these techniques, all of them apply probability values in order to represent uncertainty in real-world scenarios. Most of these techniques are applicable to single-objective as well as multi-objective problems.

14.4.2.1  Genetic Algorithm
The most popular among all nature-based techniques is the genetic algorithm (GA), as evident from the number of research studies on GA compared with the applications of other nature-based techniques. GA is based on genetic reproduction among animals. Genetic reproduction is the process of generating offspring. There are three main sub-processes in GA – selection of a mating partner, transfer of genetic material, and mutation, which is the key to evolution in the animal kingdom. GA was popularized by Goldberg (1989) and has since been modified numerous times, as evident from the existing literature. As observed in the animal kingdom, the selection operator selects the mating partner from among many candidates; the crossover operator imitates biological crossover by exchanging values (like genetic material) between two chromosomes; and the mutation operator randomly changes gene values in order to increase variety, a key to evolution (that is, more variety in the solutions). The application of GA has been solving many complex problems for decades. The basic steps of GA are listed below, followed by a short code sketch. At the end, a set of Pareto optimal solutions is obtained. In the algorithm, a fitness function is mentioned; the user of GA is supposed to decide the type and expression of this fitness function based on the nature of the problem. In each iteration, either the crossover operation or the mutation operation is performed based on a generated probability value. The entire procedure continues till the pre-decided maximum number of generations. The size of the population is also pre-decided based on the structure of the chromosome, which determines the variety in the solutions, and on the requirements of the problem under study. Some of the multi-objective versions of GA include the Nondominated Sorting Genetic Algorithm (NSGA) (Srinivas and Deb, 1994), Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele, 1999), Niched Pareto Genetic Algorithm (NPGA) (Horn et al., 1994), and Pareto Archived Evolution Strategy (PAES) (Knowles and Corne, 2000).

14.4.2.1.1  Steps of Genetic Algorithm
1. Generate an initial population of chromosomes of a particular size N. Each chromosome is composed of genes of certain characteristics.
2. Decide over and calculate the values of the fitness function for each chromosome in the population.
3. Generate a probability value randomly.
4. For generation I = 1 to maxgen (maximum generation number), perform step 5 to step 10.
5. Select mating partners randomly or based on a certain condition.
6. Perform the crossover operation if the randomly generated probability value is above a pre-mentioned threshold value. Perform crossover between each pair of selected mating partners in order to generate offspring (chromosomes whose gene values are expected to differ from those of the parents).
7. Perform mutation on randomly selected chromosomes if the probability value is below the pre-mentioned threshold value.
8. Evaluate the fitness function for each of the produced offspring.
9. Combine the parent population with the offspring population.
10. Choose the better chromosomes in number equal to the size of the population.
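As referenced above, the following is a minimal Python sketch (not part of the original text) of the steps listed above, applied to maximizing a simple one-max fitness function over binary chromosomes; the population size, number of generations, and crossover/mutation threshold are illustrative choices rather than values prescribed by the text.

# A minimal sketch of the GA steps above for 10-bit chromosomes ("one-max").
import random

N, GENES, MAXGEN, THRESHOLD = 20, 10, 50, 0.5

def fitness(chrom):                      # step 2: user-chosen fitness function
    return sum(chrom)                    # count of 1-genes

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(N)]

for _ in range(MAXGEN):                  # step 4: loop over generations
    parents = random.sample(population, 2)           # step 5: select partners
    if random.random() > THRESHOLD:                   # step 6: crossover
        cut = random.randint(1, GENES - 1)
        offspring = [parents[0][:cut] + parents[1][cut:],
                     parents[1][:cut] + parents[0][cut:]]
    else:                                              # step 7: mutation
        offspring = [parents[0][:]]
        gene = random.randrange(GENES)
        offspring[0][gene] = 1 - offspring[0][gene]
    # steps 8-10: evaluate, combine, and keep the best N chromosomes
    population = sorted(population + offspring, key=fitness, reverse=True)[:N]

print("best chromosome:", population[0], "fitness:", fitness(population[0]))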


14.4.2.2  Particle Swarm Optimization
Particle swarm optimization (PSO) imitates the group or swarm behavior of different insects which live in colonies, such as bees, wasps, termites, and ants. Among these insects, ants have attracted special attention, which is discussed in the next subsection. Different swarm behaviors have been simulated in swarm optimization; particular attention has been paid to behaviors related to foraging for food, division of labor within a colony, task allocation among different members of the group, communication and networking among them, cooperative transportation, and nest building. In PSO, a particular solution represents a particle and the population of solutions represents the swarm. The position of each particle is updated using its velocity, and the velocity itself is also updated. Suppose position_ij^t is the j-th coordinate of the position of the i-th particle at time (iteration) t, and velocity_ij^t is the corresponding component of its velocity. The position of the particle is modified by expression (14.4) and the velocity by expression (14.5). Here, c1 and c2 are known as the cognitive and social parameters, respectively; r1 and r2 are uniformly distributed random numbers between 0 and 1; w is the inertia weight; p_ij^t is the best position found so far by particle i, and p_kj^t is the position of the best particle (the leader). The steps of PSO are delineated after the expressions, together with a short code sketch of the update rules.



position_ij^(t+1) = position_ij^t + velocity_ij^(t+1)    (14.4)

velocity_ij^(t+1) = w × velocity_ij^t + c1 × r1 × (p_ij^t − position_ij^t) + c2 × r2 × (p_kj^t − position_ij^t)    (14.5)
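As referenced above, the following is a minimal Python sketch (not part of the original text) of one possible implementation of the update rules in expressions (14.4) and (14.5), applied to minimizing a simple sphere function; the values of w, c1, c2, the swarm size, and the iteration count are illustrative assumptions.

# A minimal sketch of PSO using expressions (14.4) and (14.5).
import random

def sphere(x):                       # fitness function (to be minimized)
    return sum(v * v for v in x)

dim, n_particles, w, c1, c2 = 2, 10, 0.7, 1.5, 1.5
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                      # personal best positions
gbest = min(pbest, key=sphere)                   # leader (global best)

for _ in range(100):                             # stopping criterion: iterations
    for i in range(n_particles):
        for j in range(dim):
            r1, r2 = random.random(), random.random()
            # expression (14.5): velocity update
            vel[i][j] = (w * vel[i][j]
                         + c1 * r1 * (pbest[i][j] - pos[i][j])
                         + c2 * r2 * (gbest[j] - pos[i][j]))
            # expression (14.4): position update
            pos[i][j] += vel[i][j]
        if sphere(pos[i]) < sphere(pbest[i]):    # update personal best
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)               # update leader

print("best solution:", gbest, "fitness:", sphere(gbest))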

14.4.2.2.1  Steps of Particle Swarm Optimization
1. Generate an initial population of particles by using random numbers and set a stopping criterion.
2. Initialize the cognitive and social parameters, c1 and c2.
3. Start with an initial position.
4. Perform step 5 to step 8 until the stopping criterion is satisfied.
5. Calculate the fitness of all the particles in the population with a pre-decided fitness function.
6. Update the best solution in terms of position and the best particle or leader.
7. Update the position of the particles.
8. Update the velocity of the particles.

14.4.2.3  Ant Colony Optimization (ACO)
In one sense, ant colony optimization (ACO) is a type of swarm optimization, since it is based on the colonial behavior of ants. Different algorithms have been developed based on the different behaviors of ants. Among all the insects, the behavior of ants is comparatively easy to study and very interesting as well. The colony of ants is generally located underground, where the ants live, eat, and mate. Ants secrete a substance called pheromone in order to mark their routes, and all ants follow this pheromone trail. Such a trail is very useful for searching for food, carrying heavy food back to the underground colony together, communication among different members of the swarm, and so on; ants thus communicate among themselves through this kind of signaling. A shortest-path algorithm has been developed by following the pheromone trail of ants: it can be observed that, over time, ants start to follow the shortest route to their food even if there is an obstacle in between. Various other algorithms have also been developed by simulating different behaviors of ants, such as the Ant-Q algorithm obtained by hybridizing the ant system with Q-learning (Gambardella and Dorigo, 1995).


14.4.2.4  Artificial Immune Algorithm (AIA)
The artificial immune algorithm (AIA) is based on the immune system of vertebrates. The basic idea is that, when any external harmful protein body enters the body of a vertebrate, the immune system automatically generates another protein body that fights against the external harmful body to destroy it. The external harmful protein body and the internal one are known as the antigen and the antibody, respectively. This process is simulated in AIA, in which the antigen represents a worse solution and the antibody represents a better solution. The algorithm of AIA is shown below.

14.4.2.4.1  Artificial Immune Algorithm
1. Take a population of antibodies.
2. Perform step 3 to step 10 till the maximum number of generations.
3. Take a population of antigens.
4. Select an antigen g from the population of antigens.
5. Select a set A of antibodies from the population of antibodies.
6. For each antibody a ∈ A, perform steps 7 and 8.
7. Match antibody a with antigen g.
8. Calculate the Hamming distance in order to calculate the matching score.
9. Identify the antibody with the highest matching score. If there is a tie, then break the tie randomly.
10. The matching score of the winner antibody is added to its fitness value.

14.4.2.5  Differential Evolution (DE)
Differential evolution (DE), as proposed by Storn and Price (1995), is based on the differences among individuals for mutation. The basic difference between DE and GA is that DE uses directional information in the population through the use of target and trial vectors. Thus, the solutions of DE converge faster as compared to GA; this, however, may result in insufficient exploration of the solution space. Like GA, better offspring replace the worse individuals. Thus, the main operators, as in GA, are selection, crossover, and mutation. The algorithm of DE is shown below.

14.4.2.5.1  Differential Evolution
1. Generate an initial population of chromosomes randomly.
2. Evaluate the fitness value of each chromosome in the population.
3. Repeat step 4 to step 10 till the maximum number of generations.
4. Select three individuals randomly.
5. Perform crossover among the selected chromosomes using the DE crossover operation.
6. Perform mutation.
7. Evaluate the fitness values of the offspring.
8. If an offspring is better than its parent, then replace the parent with the better offspring.
9. Combine the offspring population with the parent population.
10. Choose the better chromosomes from the combined population to fill the population size.
11. Find the best individuals in the population.
12. Add these best chromosomes to a secondary population.

14.4.2.6  Simulated Annealing
Simulated annealing (SA), as proposed by Kirkpatrick et al. (1983), is based on the physical annealing process in which metals are alternately heated and cooled in order to get rid of brittleness. The temperature is raised to a higher level or higher energy state, followed by lowering the temperature to a lower level or lower energy state. “SA is actually an adaptation of Metropolis-Hastings


Algorithm (MHA), which is a Monte Carlo method used to generate sample states of a thermodynamic system” (Bandyopadhyay and Bhattacharya, 2013). The algorithm of SA is provided below.

14.4.2.6.1  Simulated Annealing
1. Generate an initial solution x randomly.
2. Repeat step 3 to step 6 till the maximum number of generations.
3. Repeat step 4 to step 5 until the stopping criterion is satisfied.
4. Generate another candidate solution y randomly, based on a pre-decided neighborhood structure and the currently chosen solution x.
5. If y is better than x, then accept y as the new current solution; otherwise, generate a random number r in the range (0, 1), calculate p = e^(−(f(y)−f(x))/t), and accept y only if r < p.
6. Update (typically lower) the temperature t according to a pre-decided cooling schedule.

14.4.2.27  Glowworm Swarm Based Optimization
Glowworm Swarm Based Optimization, as proposed by Krishnanand and Ghose (2006), is based on the mating behavior of the glowworm. The glowworm “produces natural light that is used as a signal to attract a mate. Luciferin is one of the several components that are involved in a chemical reaction responsible for producing the bioluminescent light. This light is also used to attract prey. General idea in the glowworm algorithm is similar in the sense that glowworm agents are attracted to other glowworm agents that have brighter luminescence” (Krishnanand and Ghose, 2006). The Glowworm Swarm Based Optimization algorithm is depicted through the following points.

14.4.2.27.1  Glowworm Swarm Based Optimization
1. Generate a population of glowworms.
2. Repeat from step 3 to step 8 until the stopping condition is satisfied.
3. Calculate the fitness of each individual in the population through the objective function.
4. Update luciferin by a pre-specified rule.
5. Evaluate each glowworm and find its neighboring group.
6. Calculate the required probability.
7. Use the generated probability to update the movement of the glowworms.
8. Update the decision radius using a pre-decided rule.
9. Output the current solution set as the optimum solution set.
10. If the irradiance changes, then go to step 2; else go to step 9.

14.4.2.28  Grasshopper Optimization Algorithm (GOA)
The Grasshopper Optimization Algorithm (GOA), as proposed by Saremi et al. (2017), is based on the behavior of grasshopper swarms. Although grasshoppers are usually seen as individuals, they can form swarms containing a significant number of grasshoppers. A grasshopper swarm generally destroys crops and is thus considered a kind of pest. Both the adults and the nymphs can form swarms. A large number of nymphs can consume field after field of vegetation. When they grow to the adult stage, they form a swarm in the air, and this swarm can attack different crop fields and consume the crops. The basic characteristics of concern for the algorithm are the movements of the grasshoppers and their nymphs. An individual grasshopper moves in small steps, whereas long-range movement is observed for the swarm. The respective algorithm based on grasshopper behavior is depicted below.


14.4.2.28.1  Grasshopper Optimization Algorithm (GOA) 1. Generate population of grasshoppers randomly. 2. Initialize different parameters such as position of the grasshopper, social interaction among the grasshoppers, gravitational force on the grasshoppers, etc. 3. Evaluate fitness of each individual in the population. 4. Update the position of each individual by a pre-decided expression. 5. If the number iteration is greater than population size then If the best position has been obtained then Modify the old best position by the new best position Else Go to step 3 Endif Else Go to step 3 Endif 6. Current solution set is the optimum solution set. 14.4.2.29  Grey Wolf Optimization (GWO) Grey Wolf Optimization (GWO) as proposed by Mirjalili et al. (2014) is based on the hunting mechanism and leadership hierarchy of grey wolf. Grey wolves generally live in pack with the total 5–12 wolves per pack. In the strong social hierarchy of the grey wolves, the pack is led by a male and a female. The leaders make all the decisions for the pack such as the decisions about hunting, deciding the place of sleeping, decide over the time to wake up and so on. The other wolves in the pack accept their leadership by keeping their tail down. The leaders may not be the strongest in the pack but they are the best decision-makers. This establishes the fact that organization is more important than the strength. The subordinate wolves help the leader in their decision-making. The subordinate can be either male or female but he/she is capable to replace any of the leaders. Subordinate wolf obeys the leaders and commands the other lower hierarchy wolves. The lowest ranked wolves in the pack are the weakest and they are the last to be allowed to share the hunted and killed prey. Leaders were termed as alpha, the subordinate was termed as beta and the lowest ranked wolves were termed as omega. The respective algorithm are depicted through the following points. In the algorithm the best solution is alpha which is following by beta and delta. All the others are omega. 14.4.2.29.1  Grey Wolf Optimization (GWO) Algorithm 1. Generate a population of wolves randomly. 2. Initialize the population parameters. 3. Evaluate the fitness function for alpha, beta, delta and omega in the population. 4. Repeat from step 5 to step 8 until the stopping condition is satisfied. 5. Update the locations of individuals. 6. Update the parameters. 7. Again, evaluate the fitness of alpha, beta, and delta. 8. Again update the parameters. 9. Current solution is the best solution set. 14.4.2.30  Krill Herd Algorithm (KHA) Krill Herd Algorithm (KHA) as proposed by Gandomi and Alavi (2016) is based on the behavior of krill fish herd. It has been observed that the motion of the krill herd depends on the density of the herd, that is, the number of krills in the herd and the distance of the food source from the herd. Three types of krill motions have been simulated in the algorithm – movement of the krill individuals in the krill herd; motion due to foraging of food; random diffusion. The krill herd algorithm is shown below through the following points.



14.4.2.30.1  Krill Herd Algorithm (KHA) 1. Generate population of krills randomly. 2. Repeat from step 3 to step 8 until the stopping condition is satisfied. 3. Evaluate fitness for each individual in the population. 4. Apply motion induced by individuals. 5. Apply foraging motion. 6. Apply physical diffusion. 7. Apply genetic operators to the individuals in the population. 8. Update the position of the krill individuals. 9. Current population of solutions is the optimal solution set.

14.4.2.31  Lion Optimization Algorithm Lion Optimization Algorithm (LOA) as proposed by Yazdani and Jolai (2016) is based on the special lifestyle of lions and their cooperative behavior. Lions can be resident or nomad. A family of resident lions is called a pride. Resident lions live in groups with each group containing bone or more males, five females and their cubs. When the male cubs get maturity then they are not regarded as part of their pride anymore. Lions can also move and live individually or in pairs, called nomad lions. Such lions or pairs of lions may be excluded from their original pride. However, because of any need for food or security issues, lions may shift from nomadic lifestyle to pride lifestyle or vice-versa. Lion Optimization Algorithm is depicted below. 14.4.2.31.1  Lion Optimization Algorithm (LOA) 1. Generate the population of lions randomly. 2. Get selected number of prides from resident lions. 3. For each pride, perform from step 4 to step 11 until the stopping condition is dissatisfied. 4. Evaluate fitness value for each individual in the population. 5. Apply tournament selection from the pride. 6. Perform crossover between the selected lions. 7. Perform mutation. 8. Evaluate fitness values for the offsprings. 9. Sort the offsprings based on their fitness. 10. If the fitness of all offsprings are not acceptable then Exclude or kill the weak offsprings Endif 11. Perform territorial defense and takeover. 12. Current population is the optimal solution set. 14.4.2.32  Migratory Birds Optimization (MBO) Migratory Birds Optimization (MBO) as proposed by Duman et al. (2011) is based on the “V formation flight of the migratory birds”. The top air of the airfoil (wings of a bird) moves faster than the bottom air. The pressure in the top of the wings is lower than the pressure at the bottom of the wings. Because of the pressure and to move over long distance, migratory birds fly in specific formation. V formation is the most common formation for migratory birds while flying in flock. V formation indicates that birds fly in such a way that looks like the letter ‘V’. in the V formation, one bird leads the flock followed by two lines of birds. Some other formations include column formation, J-shaped formation, bow-shaped formation. V formation makes it possible to avoid collision among the birds and to save physical energy of the birds. This theory is simulated through Migratory Birds Optimization as shown below.



14.4.2.32.1  Migratory Birds Optimization 1. Generate initial population of migratory birds in V formation. 2. Repeat from step 3 to step 7 until the stopping condition is satisfied. 3. Repeat from step 3 to step 5 for pre-decided number of times. 4. Improve the leader solution by generating specific number of neighboring solutions. 5. Generate other neighboring solutions, other than the leader. 6. Apply neighbor sharing. 7. Replace the leader. 8. The current population of solutions is the optimal solution set. 14.4.2.33  Moth-Flame Optimization Algorithm Moth-Flame Optimization Algorithm as proposed by Nadimi-Shahraki et al. (2021) is based on the spiral movement around a light source. “This behavior is derived from the navigation mechanism of moths that is used to fly a long distance in a straight line by maintaining a fixed inclination to the moon”. However, it becomes deadly if the light source is very close. The moth-flame optimization algorithm is shown below. 14.4.2.33.1  Moth-Flame Optimization Algorithm 1. Generate the population of moths randomly. 2. Initialize required parameters of algorithm. 3. Repeat from step 4 to step 9 until the stopping condition is satisfied. 4. Calculate the error for each moth individual. 5. Sort and assign flame. 6. Update flame number using a pre-decided expression. 7. Calculate the distance between moths and flame. 8. Update the parameter of the algorithm. 9. Update the position of each moth using a pre-decided expression. 10. Current solution set is the optimal solution set. 14.4.2.34  Mouth-Brooding Fish Algorithm Mouth-brooding fish algorithm (MBFA) as proposed by Jahani and Chizari (2018) is based on the nature of mouth-brooding fish and symbiotic interaction strategies. In case of many sea animals, small number of offsprings survives in the end. Thus, the animals try to protect their offsprings until they are matured and can protect themselves and try to survive independently. Mouthbrooding fishes protect their offsprings by keeping them in their mouth for giving shelter to them. Examples of such fishes include fishes like sea catfish, pikeheads, jawfish, Bagrid catfish, and arowanas. This type of fish keeps only the eggs but the even after they take birth. When they grow up, the mouth of this fish cannot hold all the offsprings. Then, some of the offsprings are released from the mouth to fight along with the natural world and nature. The respective algorithm is shown below. The algorithm has five parameters – population size, source point of mother, amount of dispersion, probability of dispersion, and mother’s source point damping. The algorithm is based on the movement of the fishes. 14.4.2.34.1  Mouth-Brooding Fish Algorithm 1. Generate population of fishes. 2. Female lay eggs and swim over it in a cave. 3. Male spray milt to fertilize the eggs. 4. Collection of eggs by female in her mouth. 5. Female release the eggs and male leaves the cave. 6. Male fertilize the eggs (crossover).





7. Perform mutation. 8. Female collects offsprings in mouth till they are fit.

14.4.2.35  Polar Bear Optimization Algorithm Polar Bear Optimization Algorithm (PBOA) as proposed by Polap and Woźniak (2017) is based on the hunting strategy of polar bears in harsh weather conditions. They live in harsh frosty environment in which it is very difficult to survive. Such harsh condition also favors the polar bear to hunt not only over ice over large area but also in water. Their white fur helps them in camouflage in ice and snow. Polar bear jumps on ice and drifts to the location where it will hunt. The prey is surrounded at first and then the bear attacks the prey from the best position for convenience. The respective algorithm is shown below. 14.4.2.35.1  Polar Bear Optimization Algorithm 1. Generate the population at random. 2. Repeat from step 3 to step 7 until the stopping condition is satisfied. 3. Search for possible seal colony (better solution). 4. Drift over ice to seal colony. 5. Get close to prey. 6. Surround the prey. 7. Kill and consume the prey. 14.4.2.36  Whale Optimization Algorithm (WOA) Whale Optimization Algorithm (WOA) as proposed by Mirjalili and Lewis (2016) is based on the social behavior of humpback whale. Besides, humpback whale, there are many other species of whale such as, killer whale, finback whale, blue whale, Minke whale. Whales are very intelligent and emotional predator whose part of brain sleeps during the sleeping time. Part of whale brain is responsible for judgment, emotion and various social behaviors. Therefore, whales can think, judge, communicate and socialize with other whales. Whales live either in groups or alone. Humpback whales are very big in size and live on krill and other fish herds close to surface of ocean. The focus of the respective algorithm is their hunting mechanism. The hunting strategy includes creating bubbles in different trails such as spiral shaped bubbles, coral loop bubble, capture loop bubble, or lobtail-shaped bubble. Bubble-net feeding mechanism is the special characteristic of humpback whale. The respective algorithm is shown below. Whale Optimization Algorithm (WOA) 1. Generate the initial population of whales 2. Evaluate the fitness of each individual in the population 3. Repeat from step 4 to step 7 until the stopping condition is satisfied 4. Find the best individual based on the fitness value 5. Randomly generate a probability value p If p is not less than a threshold value then Update the position of individual using a –pre-defined expression Else If a generated number is less than 1 then Update the position another expression Else Update the position by a third expression Endif Endif 6. If the control variables are not within limits then Penalize





Else Endif 7. Evaluate the fitness of each individual 8. Current population is optimal set of solutions.

14.4.2.37  Sea Lion Optimization Algorithm (SLOA) Sea Lion Optimization Algorithm (SLOA) as proposed by Masadeh et al. (2019) is based on the hunting behavior sea lions in nature. Sea lion is one of the most intelligent sea animals who live in colonies. Each group has a social hierarchy of sea lions. Navigation of sea lions is dependent on the age, sex and purpose of navigation. They have excellent ability to locate fish and act quickly to gather them toward shallow water near the shore and surface of ocean. They can open their eyes wide open to let the light enter in order to locate the prey easily. With their whiskers, they can follow the prey. Whiskers are used by sea lions in order to indicate the size, shape and position of the prey. Sea lions hunt in group so as to increase more chance to have greater number of prey. The hunting mechanism is the subject matter of the developed algorithm. The respective algorithm is given below. 14.4.2.37.1  Sea Lion Optimization Algorithm 1. Generate population of sea lions randomly. 2. Calculate the fitness of each individual in the population. 3. Repeat from step 4 to step 12 until the stopping condition is satisfied. 4. Identify the sea lion with best fitness. 5. Repeat from step 5 to step until the stopping condition is satisfied. 6. Identify a leader using a pre-specified expression. 7. If a generated probability is not less than a threshold probability value then go to step 7, else go to step 10. 8. Update the location of individual by a pre-specified expression. 9. Go to step 10. If the value of a certain parameter is less than 1 then Update location using a second expression Else Select a random individual Update location using a third expression Endif 11. Identify an individual and calculate the fitness. 12. If the individual is a leader then go to step 3. 13. Current population is the optimal solution set. 14.4.2.38  Tarantula Mating-Based Strategy (TMS) Tarantula Mating-Based Strategy (TMS) as proposed by Bandyopadhyay and Bhattacharya (2015) is based on the interesting mating behavior of Tarantula spider. The most interesting fact about the Tarantula mating is that the female spider may consume the male spider just after mating because of genetic purpose or for immediate need for food. This strategy had been applied for resource utilization and resource saving strategy in manufacturing scheduling by the authors. The proposed strategy had been compared for analogy with the Tarantula mating behavior as shown in Figure 14.3. There are some more nature-based optimization as proposed by different researchers in the world as evident from the existing literature. Some of these algorithms that are not being introduced in this section include the following: • Rat swarm optimization (Dhiman et al., 2021). • Red deer algorithm (Fathollahi-Fard et al., 2020). • Sailfish optimizer (Shadravan et al., 2019).



FIGURE 14.3  Analogy of Tarantula mating with the simulated strategy.

• Salp swarm algorithm (Mirjalili et al., 2017).
• Shark smell search algorithm (Abedinia et al., 2016).
• Social spider algorithm (James and Li, 2015).
• Spider monkey optimization (Bansal et al., 2014).
• Squirrel search algorithm (Jain et al., 2019).
• Sunflower optimization algorithm (Gomes et al., 2019).
• Intelligent water drops algorithm (Shah-Hosseini, 2009).
• Wind driven optimization algorithm (Bayraktar et al., 2010).

14.4.3  Some Latest Tools for Recent Applications of Artificial Intelligence This section discusses a few latest tools and techniques that are used in the latest applications of AI. We are basically living in a smart age. Thus, we are all interested in applying smart technologies starting from smart phone. Smart applications are kind of very advanced applications of AI. Therefore, the tools and components of smart applications should also be discussed in this chapter. This section is devoted to such tools and components, such as cloud computing, and big data. 14.4.3.1  Cloud Computing A cloud in atmosphere is a visible mass of condensed water. The content of the cloud is not clearly visible although we know that cloud contains water in the form of vapor in condensed form. This concept is also applicable to cloud computing. “Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet” (Sarna, 2011). Cloud computing has also been defined by National Institute for Standards and Technology (NIST), USA as – “cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The cloud model of computing promotes availability”. A schematic diagram for cloud computing is shown in Figure 14.4. The companies which are using cloud computing widely include Google, Amazon, Microsoft, IBM and so on. The users are using cloud computing unknowingly nowadays since almost all the websites run on cloud. The basic characteristics of cloud computing can be summarized in the following points: • Cloud computing is a service-based application. • It is scalable and elastic. • Cloud computing is generally shared among different servers.



FIGURE 14.4  Schematic diagram for cloud computing.

• Cloud computing uses internet technologies widely.
• Cloud computing is an agile technology.
• Cloud computing applications result in reduction of in-house costs and investment.
• Cloud computing always applies the latest technology.
• Semantic interoperability.
• Low coupling.
• Modularity.

The infrastructure of cloud computing can be looked upon from the following aspects. These are different kinds of cloud models: • Infrastructure as a Service (IaaS): This is characterized by highest level of use by the consumers, although the consumers cannot manage or control the cloud infrastructure. • Platform as a Service (PaaS): This is the intermediate level. It indicates the applications either created or acquired by the consumers. consumers are not allowed to manage or control the cloud infrastructure. • Software as a Service (SaaS): Consumers are allowed to deploy and run arbitrary software. NIST suggests four models of cloud deployment. These are given below: • Private cloud: This type of cloud is managed by private companies or organizations. • Community cloud: This type of cloud is owned and managed by a group of organizations together. • Public cloud: This type of cloud is open to the general public for use. • Hybrid cloud: This is a hybridization of two or more different types of clouds such as ­private, community, or public. Integration of different types of clouds is required for different types of effective service. A very popular way of integrating private and public cloud (hybrid cloud) is to use Eucalyptus



(www.eucalyptus.com). Eucalyptus is an IaaS-type, open-source software platform especially facilitating cloud computing applications. Besides Eucalyptus, there are many other software platforms as well, such as Red Hat Enterprise Linux (RHEL), Fedora, SUSE Linux Enterprise Server, and Debian. Such software technologies can use different types of virtualization technologies such as VMware, Xen, and KVM. Examples of clouds as provided by Eucalyptus include the Ubuntu Enterprise Cloud (UEC). For the Platform as a Service (PaaS) type of cloud, there are many organizations that provide rental services, such as Google, Microsoft, Sun, Amazon, IBM, and GoGrid. SaaS is a service that provides applications that can be shared among many users. Some of the cloud service providers are shown in Table 14.12.

TABLE 14.12  Examples of Cloud Computing Service Providers

Kamatera (https://www.kamatera.com/)
• Data centers are located in the USA, Canada, the UK, Hong Kong, Germany, and Israel.
• Cloud services provided: cloud servers, block storage, private networks, load balancers.
• Server types: dedicated, burstable, general purpose.

Serverspace (https://serverspace.io)
• There are 13 data centers located over four continents, including New Jersey, Amsterdam, Moscow, and Almaty.
• Cloud services provided: cloud servers, managed servers.

Linode (https://www.linode.com/)
• Server types: vStack, VMware, VPC (Virtual Private Cloud).
• Data centers are located in New York, Singapore, London, Germany, Toronto, Sydney, and Mumbai.
• Cloud services provided: DDoS protection, cloud firewall, block storage, backups, VLAN.

Amazon Web Services (https://aws.amazon.com)
• Server types: dedicated CPU, shared CPU, GPU.
• Data centers are located in New York, Singapore, London, Germany, Toronto, Sydney, and Mumbai.
• Cloud services provided: DDoS protection, cloud firewall, block storage, backups, VLAN.

HostPapa (https://www.hostpapa.ca)
• Server type: Amazon EC2.
• Data centers are located in Los Angeles, the UK, France, Germany, Toronto, India, Singapore, etc.
• Cloud services provided: DDoS protection, cloud firewall, block storage, backups, VLAN.
• Server types: VPS hosting, web hosting, online store.

ScalaHosting (https://www.scalahosting.com)
• Data centers are located in Dallas, Sofia, Bangalore, London, Singapore, Amsterdam, and Toronto.
• Cloud services provided: cloud servers, managed servers.

Cloudways (https://www.cloudways.com/en/home.php)
• Server types: managed cloud VPS, self-managed cloud VPS, web hosting.
• Data centers are located in North and South America, Europe, Australia, East Asia, and Southeast Asia.
• Cloud services provided: block storage, private networks, CDN, firewalls.

OVHcloud (https://us.ovhcloud.com)
• Server type: dedicated cloud.
• Data centers are located in North and South America, Europe, Australia, East Asia, and Southeast Asia.
• Cloud services provided: block storage, cloud servers, private networks, load balancers.

CloudSigma (https://www.cloudsigma.com)
• Server type: Apache.
• Cloud services provided: add-on storage, monitoring capabilities, security, load balancer, cloud firewalls, VPC.
• Cloud services provided: optimized cloud computing, cloud GPU, block storage, object storage, load balancers.
• Cloud services provided: IaaS, PaaS, cloud service for research and education, Cloud-as-a-Service, cloud hosting partner program.

Microsoft Azure (https://azure.microsoft.com/en-in/)
• Cloud services provided: virtual machines, Azure Virtual Desktop, Azure Cognitive Services, Azure Arc, Azure Quantum, Azure PlayFab.

LiquidWeb (https://www.liquidweb.com)
DigitalOcean (https://www.digitalocean.com)
Vultr (https://www.vultr.com)

In addition to the above, there are other popular cloud providers, such as the following:
• Google Cloud Platform
• Oracle Cloud
• IBM Cloud
• LimeStone
• Alibaba Cloud
• CloudFlare
• Rackspace
• Heroku

There are many Software-as-a-Service applications, such as the following:
• Salesforce
• Google Workspace apps
• Microsoft 365
• HubSpot
• Trello
• Netflix
• Zoom
• Zendesk
• Slack
• Shopify
• SAP Concur
• Adobe Creative Cloud
• Atlassian Jira

The basic benefits of SaaS applications include the following:
• Reduced initial cost
• Reduced time
• Scalability

The goals of IaaS include increased scalability, easy availability, enhanced reliability, increased security, enhanced flexibility and agility, increased serviceability, and increased efficiency. Some examples of IaaS workloads include the following:
• High-performance computing
• Database management systems
• CPU-intensive processing
• Data-intensive processing

14.4.3.2  Big Data

In general, big data refers to data of increasingly high volume, large variety, and high velocity. These three characteristics are known as the three V's: volume, variety, and velocity. Although volume is usually emphasized, big data may equally involve smaller data sets of very high complexity. Besides these three characteristics, there is a fourth V, veracity, which describes how accurately the data can predict business value. Big data can formally be defined as follows: "Big data is the capability to manage a huge volume of disparate data, at the right speed, and within the right time frame to allow real-time analysis and reaction" (Hurwitz et al., 2013). Big data includes both structured and unstructured data from social media, emails, texts, and so on, and it has gone well beyond the traditional database management system. In practice, big data management goes through the phases shown in Figure 14.5. Big data architecture depends on many factors, the most important of which are:
• The total amount of data to be managed.
• The frequency with which the data need to be handled.

FIGURE 14.5  Big data management cycle.



• Total risk content.
• Required speed of data management.
• Level of preciseness required.

The data for big data can be structured or unstructured, and each kind may be human-generated or machine-generated. Machine-generated structured data include sensor data, web log data, point-of-sale data, and financial data. Human-generated structured data include input data of different forms, gaming-related data, and click-stream data (data generated by clicks on websites). However, most raw data are unstructured. Examples of machine-generated unstructured data include satellite images, various scientific data, video streams, audio data, photographic data, and radar or sonar data. Examples of human-generated unstructured data include text data internal to an organization, social media data, mobile data, and website content.

The architecture of big data consists of many layers, as shown in Figure 14.6. On the left and right sides of the figure, there are interfaces from/to the internet and from/to applications, respectively; these interfaces are open Application Programming Interfaces (APIs). At the lowest level of the figure is the redundant physical infrastructure. The physical infrastructure for big data is quite different from a traditional infrastructure: the data may be distributed over different locations connected by networks. As in an ordinary database, redundancy is important, and controlled redundancy is required for the security of the stored data. If one or more locations become inactive, the data can still be recovered in their entirety from the other locations because of this controlled redundancy. The next level represents the security infrastructure, since security is essential in any application. The level above that represents operational databases, which contain unstructured as well as structured data; examples include content management systems, social media data, and customer data, in addition to documents, graphical data, geospatial data, and so on.

Database and processing tools for big data include Hadoop, Big Table, and MapReduce. MapReduce was designed by Google: the "Map" part distributes a programming problem across a large number of systems, and after the distributed computation is accomplished, the "Reduce" component aggregates all the elements together to generate the desired result. Big Table was also designed by Google; it organizes data into tables with columns and rows and is a "sparse, distributed, persistent multidimensional sorted map". "Hadoop is an

FIGURE 14.6  Big data architecture.



Apache-managed software framework derived from MapReduce and Big Table". A brief introduction to MapReduce and Hadoop is provided next.

14.4.3.2.1  MapReduce

MapReduce, originally designed as a programming tool, consists of two components, 'map' and 'reduce'. Today, MapReduce is used both as a programming model and as a reference implementation against which models developed with it can be checked. Many implementations have been developed over the years, some open source and some commercial. The 'map' function belongs to the functional-programming side of MapReduce, much as in languages such as LISP: it applies a function to each element (defined as a key-value pair) of a list and produces a new list, without affecting the original list (Hurwitz et al., 2013). The 'reduce' function also belongs to the functional-programming side: "The reduce function takes the output of a map function and 'reduces' the list in whatever fashion the programmer desires" (Hurwitz et al., 2013). Hurwitz et al. (2013) describe an overall method for integrating the 'map' and 'reduce' functions in the same application; a small illustrative sketch follows the steps below. The steps they give are:

1. Take a large collection of data or records.
2. Iterate over the data.
3. Apply the 'map' function to each record to produce an output list.
4. Further process the output list toward an optimized list.
5. Apply the 'reduce' function in order to compute the results.
6. Generate the final output.
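As a rough illustration of how these six steps fit together, the following minimal Python sketch counts words with an in-memory map, shuffle, and reduce. It is only an illustration of the pattern, not Google's MapReduce or Hadoop itself; the function names (map_fn, reduce_fn) and the sample records are assumptions made for the example.

```python
from itertools import groupby
from operator import itemgetter

# Minimal, single-machine illustration of the map/reduce pattern.
# A real MapReduce framework would distribute the map calls and the
# shuffle/sort step across many machines; here everything is in memory.

records = ["big data needs big tools", "map and reduce", "big ideas"]

def map_fn(record):
    """Map: emit a (key, value) pair for every word in the record."""
    return [(word, 1) for word in record.split()]

def reduce_fn(key, values):
    """Reduce: aggregate all values emitted for one key."""
    return (key, sum(values))

# Steps 1-3: iterate over the data and apply the map function.
mapped = [pair for record in records for pair in map_fn(record)]

# Step 4: shuffle/sort, i.e., group the intermediate pairs by key.
mapped.sort(key=itemgetter(0))

# Steps 5-6: apply the reduce function per key and collect the final output.
result = [reduce_fn(key, (v for _, v in group))
          for key, group in groupby(mapped, key=itemgetter(0))]

print(result)  # e.g., [('and', 1), ('big', 3), ...]
```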

14.4.3.2.2  Hadoop

When most existing technologies fall short in handling huge amounts of complex data, Hadoop comes in handy. Hadoop is an advanced technology capable of handling big data and, in particular, its three V's: volume, variety, and velocity. Hadoop provides a way to break a big data problem into smaller fragments so that the analysis can be done quickly and cost-effectively. Hadoop is available under the Apache License and is capable of parallel data processing. Hadoop has two basic components at its core:
• The Hadoop Distributed File System (HDFS), which is basically a data storage cluster.
• The MapReduce engine, which is basically a "data processing implementation of the MapReduce algorithm".

Hadoop can process both structured and unstructured data even in the petabyte range, which makes it very suitable for huge amounts of data. Hadoop is self-healing software: it can detect failures, adjust accordingly, and keep running without interruption. In addition to cloud computing and big data, there are many other modern and emerging technologies, especially smart technologies, that could serve as examples of advanced technologies, but their detailed description is beyond the scope of this chapter.
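Hadoop programs are typically written in Java, but the Hadoop Streaming utility lets scripts in other languages act as the mapper and reducer by reading lines from standard input and writing tab-separated key-value pairs to standard output. The sketch below is only an illustration of that shape for a "maximum temperature per year" job; the record format, names, and file layout are assumptions, and in practice the two roles would live in two separate executable scripts registered with the streaming job.

```python
import sys

# Input records are assumed to look like "year<TAB>temperature".

def run_mapper():
    """Mapper: emit (year, temperature) pairs for the shuffle phase."""
    for line in sys.stdin:
        year, temp = line.strip().split("\t")
        print(f"{year}\t{temp}")

def run_reducer():
    """Reducer: input arrives sorted by key; emit the maximum per year."""
    current_year, current_max = None, None
    for line in sys.stdin:
        year, temp = line.strip().split("\t")
        temp = float(temp)
        if year != current_year:
            if current_year is not None:
                print(f"{current_year}\t{current_max}")
            current_year, current_max = year, temp
        else:
            current_max = max(current_max, temp)
    if current_year is not None:
        print(f"{current_year}\t{current_max}")

if __name__ == "__main__":
    # Select the role via a command-line argument: "mapper" or "reducer".
    run_mapper() if sys.argv[1:] == ["mapper"] else run_reducer()
```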

14.5  CONCLUSION

This chapter has discussed several aspects of IDSS. It first discussed EIS, followed by KM; different types of knowledge were introduced along with the conversions among them. After this, two tools, the concept map and the semantic network, were introduced with simple examples. The next section discussed AI, which is inherent in any IDSS.



Under AI, a brief introduction to propositional logic was presented, followed by a significant variety of nature-based optimization algorithms; a total of 38 different nature-based algorithms and strategies were discussed briefly in this chapter. Finally, two of the latest technologies required for AI, cloud computing and big data, were discussed. The readers are expected to be enlightened by the knowledge base presented in this chapter.

REFERENCES Abedinia, O., Amjady, N., Ghasemi, A. (2016). A new metaheuristic algorithm based on shark smell optimization. Complexity, 21, 5, 97–116. Arora, S., Singh, S. (2019). Butterfly optimization algorithm: a novel approach for global optimization. Soft Computing, 23, 3, 715–734. Askarzadeh, A. (2016). A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Computers & Structures, 169, 1–12. Bandyopadhyay, S., Bhattacharya, R. (2013). On some aspects of nature-based algorithms to solve multiobjective problems. Artificial Intelligence, Evolutionary Computing and Metaheuristics, Springer, Berlin, Heidelberg, 477–524. Bandyopadhyay, S., Bhattacharya, R. (2015). Finding optimum neighbor for routing based on multi-criteria, multi-agent and fuzzy approach. Journal of Intelligent Manufacturing, 26, 1, 25–42. Bansal, J. C., Sharma, H., Jadon, S. S., Clerc, M. (2014). Spider monkey optimization algorithm for numerical optimization. Memetic computing, 6, 1, 31–47. Bayraktar, Z., Komurcu, M., Werner, D. H. (2010). Wind Driven Optimization (WDO): A novel natureinspired optimization algorithm and its application to electromagnetics. In: 2010 IEEE Antennas and Propagation Society International Symposium (pp. 1–4). IEEE. Becerra-Fernandez, I., Sabherwal, R. (2015). Knowledge Management: Systems and Processes, 2nd edition. Routledge, NY. Bergeron, B. (2003). Essentials of Knowledge Management. John Wiley & Sons, Inc., NJ. Brammya, G., Praveena, S., Ninu Preetha, N. S., Ramya, R., Rajakumar, B. R., Binu, D. (2019). Deer hunting optimization algorithm: a new nature-inspired meta-heuristic paradigm. The Computer Journal. Doi: 10.1093/comjnl/bxy133. Cruz-Canha, M. M., Varajão, J. (2011). Enterprise Information Systems Design, Implementation and Management: Organizational Applications. Business Science Reference, Hershey, NY. Dhiman, G., Garg, M., Nagar, A., Kumar, V., Dehghani, M. (2021). A novel algorithm for global optimization: rat swarm optimizer. Journal of Ambient Intelligence and Humanized Computing, 12, 8, 8457–8482. Dhiman, G., Kumar, V. (2018). Emperor penguin optimizer: A bio-inspired algorithm for engineering Problems. Knowledge-Based Systems, 159, 20–50. Duman, E., Uysal, M., Alkaya, A. F. (2011). Migratory Birds Optimization: A New Meta-Heuristic Approachand Its Application to the Quadratic Assignment Problem. In: Di Chio C. et al. (Eds.): EvoApplications 2011, Part I, LNCS 6624, pp. 254–263. Esmat, R., Hossein, N.-P., Saeid, S. (2009). GSA: A gravitational search algorithm. Information Sciences, 179, 13, 2232–2248. Eusuff, M. M., Lansey, K. E. (2003). Optimization of water distribution network design using the shuffled frog leaping algorithm. Journal of Water Resources Planning and Management, 129, 3, 210–225. Fathollahi-Fard, A. M., Hajiaghaei-Keshteli, M., Tavakkoli-Moghaddam, R. (2020). Red deer algorithm (RDA): a new nature-inspired meta-heuristic. Soft Computing, 24, 19, 14637–14665. Ferreira, C (2001). Gene Expression Programming: A New Adaptive Algorithm for Solving Problems. arXiv preprint, cs/0102027. Gambardella, L. M., Dorigo, M. (1995). Ant-Q: A reinforcement learning approach to the traveling salesman problem. In Prieditis, A., Russell, S. (Eds.). proceedings of the 12th International Conference on Machine Learning, pp. 252–260, Morgan-Kaufman. Gandomi, A. H., Alavi, A. H. (2016). An introduction of krill herd algorithm for engineering optimization. Journal of Civil Engineering and Management, 22, 3, 302–310. Glover, F. 
(1986). Future paths for integer programming and links to artificial intelligence. Computers and Operations Research, 13, 5, 533–549. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization & Machine Learning. Fifth Indian Reprint. Pearson Education, New Delhi, India.



Gomes, G. F., da Cunha, S. S., Ancelotti, A. C. (2019). A sunflower optimization (SFO) algorithm applied to damage identification on laminated composite plates. Engineering with Computers, 35, 2, 619–626. Haddad, O. B., Afshar, A., Mariño, A. A. (2006). Honey-bees mating optimization (HBMO) algorithm: a new heuristic approach for water resources optimization. Water Resources Management, 20,5, 661–680. Hayyolalam, V., Kazem, A. A. P. (2020). Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Engineering Applications of Artificial Intelligence, 87, 103249. Horn, J., Nafpliotis, N., Goldberg, D.E. (1994). A Niched Pareto Genetic Algorithm for Multiobjective Optimization. In: Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, vol. I, pp. 82–87. IEEE Service Center, Piscataway. Hurwitz, J., Nugent, A., Halper, F., Kaufman, M. (2013). Big Data for Dummies. John Wiley & Sons., NJ. Jahani, E., Chizari, M. (2018). Tackling global optimization problems with a novel algorithm–Mouth Brooding Fish algorithm. Applied Soft Computing, 62, 987–1002. Jain, M., Singh, V., Rani, A. (2019). A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm and Evolutionary Computation, 44, 148–175. James, J. Q., Li, V. O. (2015). A social spider algorithm for global optimization. Applied Soft Computing, 30, 614–627. Kaveh, A., Talatahari, S. (2010). A novel heuristic optimization method: Charged system search. Acta Mechanica, 213, 3–4, 267–289. Kirkpatrick, S., Gelatt, C. D., Vecchi, M P. (1983). Optimization by simulated annealing. Science, 220, 4598, 671–680. Knowles, J. D., Corne, D. W. (2000). Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8, 2, 149–172. Krishnanand, K. N., Ghose, D. (2006). Glowworm swarm based optimization algorithm for multimodal functions with collective robotics applications. Multiagent and Grid Systems, 2, 3, 209–222. Kumar, S., Datta, D., Singh, S. K. (2015). Black Hole Algorithm and Its Applications. In: Azar A.T. and Vaidyanathan S. (eds.,) Computational Intelligence Applications in Modeling and Control, Studies in Computational Intelligence, vol. 575, Springer. Masadeh, R., Mahafzah, B. A., Sharieh, A. (2019). Sea lion optimization algorithm. International Journal of Advanced Computer Science and Applications, 10, 5, 388. Mirjalili, S. (2016). Dragonfly algorithm: A new meta-heuristic optimization technique for solving singleobjective, discrete, and multi-objective problems. Neural Computing and Applications, 27, 4, 1053–1073. Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H., Mirjalili, S. M. (2017). Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, 163–191. Mirjalili, S., Lewis, A. (2016). The whale optimization algorithm. Advances in engineering software, 95, 51–67. Mirjalili, S., Mirjalili, S. M., Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61. Nadimi-Shahraki, M. H., Fatahi, A., Zamani, H., Mirjalili, S., Abualigah, L., Abd Elaziz, M. (2021). Migration-based moth-flame optimization algorithm. Processes, 9, 12, 2276. Neapolitan, R. E., Jiang, X. (2018). Artificial Intelligence: With an Introduction to Machine Learning, 2nd Edition. CRC Press, USA. Passino, K. M (2002). Biomimicry of bacterial foraging for distributed optimization and control. 
IEEE Control Systems Magazine, 22, 3, 52–67. Połap, D., Woźniak, M. (2017). Polar bear optimization algorithm: Meta-heuristic with fast population movement and dynamic birth and death mechanism. Symmetry, 9, 10, 203. Reynolds, R. G. (1994). An introduction to cultural algorithms. Proceedings of the Third Annual Conference on Evolutionary Programming, Vol. 24. World Scientific, River Edge. Rosen, K.H. (1998). Discrete Mathematics and Its Applications. McGraw Hill, China. Saremi, S., Mirjalili, S., Lewis, A. (2017). Grasshopper optimisation algorithm: theory and application. Advances in Engineering Software, 105, 30–47. Sarna, D. E. Y. (2011). Implementing and Developing Cloud Computing Applications. CRC Press, USA. Searle, J. R. (1980). Mind, brains and programs. Behavioral and Brain Sciences, 3, 3, 417–424. Shadravan, S., Naji, H. R., Bardsiri, V. K. (2019). The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Engineering Applications of Artificial Intelligence, 80, 20–34. Shah-Hosseini (2009). The intelligent water drops algorithm: a nature-inspired swarm-based optimization algorithm. International Journal of Bio-Inspired Computation, 1, 1–2, 71–79.



Shah-Hosseini, H. (2009). Optimization with the nature-inspired intelligent water drops algorithm. Evolutionary computation, 57, 2, 297–320. Srinivas, N., Deb, K. (1994). Multiobjective Optimization using Nondominated Sorting in Genetic Algorithms. Evolutionary Computations, 23, 3, 221–248. Storn, R., Price, K. V. (1995). Differential evolution – A simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report, ICSI, University of California, Berkeley. Wiig, K. (1999). Introducing knowledge management into the enterprise. In: Knowledge Management Handbook, Jay Liebowitz (ed.), 3-1 to 3-41, CRC Press, Boca Raton, FL. Yang, X. S. (2012). Flower pollination algorithm for global optimization. International Conference on Unconventional Computing and Natural Computation, pp. 240–249. Springer, Berlin, Heidelberg. Yang, X.-S. (2009). Firefly algorithms for multimodal optimization. International Symposium on Stochastic Algorithms, Springer, Berlin, Heidelberg. Yang, X.-S., Deb, S. (2009). Cuckoo search via Lévy flights. In: Proceedings of World Congress on Nature and Biologically Inspired Computing (NaBIC 2009), India, pp. 210–214. IEEE, USA. Yang, X.-S., Gandomi, A. H. (2012). Bat algorithm: a novel approach for global engineering optimization. Engineering Computations, 29, 5, 464–483. Yazdani, M., Jolai, F. (2016). Lion optimization algorithm (LOA): A nature-inspired metaheuristic algorithm. Journal of Computational Design and Engineering, 3, 1, 24–36. Zitzler, E., Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3, 4, 257–271.

15  DSS Software

15.1  INTRODUCTION

A decision support system (DSS) is basically used to aid the decision-making of decision-makers, who may be top-level, middle-level, or operational-level managers. However, when the amount of data and/or the complexity inherent in the data and information on which a decision is to be made is high, manual decision-making may not be feasible. Thus, different software packages have been developed to aid decision-making. This chapter gives an overview of popular software for most of the concepts presented in this book, starting from Chapter 2. The tools and techniques for which software overviews are presented are listed below:
• Decision tree (DT)
• Decision table
• Predicate logic
• Fuzzy logic and fuzzy theory
• PERT/CPM
• Gantt chart
• Milestone chart
• Graphical evaluation and review technique
• Petri net
• Markov chain
• Case-based reasoning
• Multicriteria decision analysis
• Linear programming (LP)
• Simulation
• Big data analytics
• Regression
• Data warehousing
• Data mining

Some benchmark software packages, other than the proprietary ones, which can be applied when presenting new ideas are also introduced briefly. Thus, this chapter provides a window into the world of DSS software. The following section first overviews the software available for drawing DTs.

15.2  SOFTWARE OVERVIEW FOR DT

There are a number of software packages available for DTs, as evident from different reliable websites. A list of open-source software for DTs is shown in Table 15.1 along with the respective websites. In addition, some other software and tools available for drawing DTs are listed after the table, and a short scripting example follows that list. This section also presents a brief overview of one of these packages, KNIME.





TABLE 15.1  List of Software for Drawing DT

KNIME (https://www.knime.com)
• Suitable for data-driven innovation.
• Consists of 1,500 modules.
• Very suitable for building prediction or classification models using DTs.
• Open-source software.
• Runs on Windows, Linux, and Mac OS X.

RapidMiner (https://docs.rapidminer.com/latest/studio/operators/modeling/predictive/trees/parallel_decision_tree.html)
• Provides data mining functions such as visualization, preprocessing, cleansing, filtering, clustering, predictive analysis with DTs, and regression.
• Runs on Windows, Linux, and Mac OS X.
• Open-source software.

Orange (https://orangedatamining.com)
• Open-source data visualization software.
• Provides utilities such as DTs, statistical distributions, box-and-whisker plots, scatter plots, heatmaps, hierarchical clustering, and linear regression models.
• Runs on Windows, Linux, and Mac OS X.

SMILES (http://dmip.webs.upv.es/smiles/)
• Open-source machine learning software.
• Provides modules for various machine learning techniques.
• Runs on Windows and Linux.

Scikit-learn (https://scikit-learn.org/stable/modules/tree.html)
• Machine learning library for Python.
• Open-source data analysis and machine learning tool.
• Features tree algorithms such as C4.5, C5.0, ID3, and CART.
• Runs on Windows, Linux, and Mac.

SilverDecisions (https://silverdecisions.pl)
• DTs can be saved in .png, .svg, or .json format.
• Runs on Windows, Linux, and Mac OS X.
• Open-source software.

OC1 DT Software System
• Allows both standard and multivariate trees.
• Written in ANSI C.
• Supports cross-validation experiments.

GPL v3 (https://sites.google.com/site/simpledecisiontree/Home)
• A simple Excel add-on.
• An open-source tool.
• Runs on all operating systems on which Microsoft Office can be installed and used.

Rattle (https://rattle.togaware.com/)
• Provides facilities for statistical and visual summaries of data.
• Runs on Windows, Linux, and Mac.
• Used for teaching in different Australian and American universities.

In addition, the following tools can also be used for drawing DTs:
• SmartDraw
• Lucidchart
• ZingTree
• Sketchboard
• MindMeister
• GitMind
• Venngage DT maker
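Since Table 15.1 lists Scikit-learn as a DT library, a minimal scripting sketch of building and printing a DT with it is shown below. This is only an illustration under stated assumptions (the built-in iris dataset and the parameter choices are illustrative, not taken from the book).

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Load a small built-in dataset and fit a shallow decision tree.
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

# Print the learned tree as indented text rules.
print(export_text(clf, feature_names=list(iris.feature_names)))

# Classify a new observation (sepal/petal measurements in cm).
print(clf.predict([[5.1, 3.5, 1.4, 0.2]]))  # -> class 0 (setosa)
```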



15.2.1  KNIME

KNIME can be downloaded from the website mentioned in Table 15.1. After downloading the installation file (or a folder containing it), double-click on the .exe file, and the screen shown in Figure 15.1 will appear. Click on the Launch button, and the KNIME Analytics Platform opening screen will be seen as shown in Figure 15.2. The website also allows the user to download and learn from several documents. The documentation includes the following:

FIGURE 15.1  Installation screen for KNIME.

FIGURE 15.2  Opening screen for KNIME.

• KNIME Analytics Platform Installation Guide
• KNIME Workbench Guide
• KNIME Best Practices Guide
• KNIME Components Guide
• KNIME File Handling Guide
• KNIME Flow Control Guide
• KNIME Integrated Deployment Guide

During the installation, a default workspace named "KNIME-workspace" is created in a directory specified by the user; a workspace is just a folder to store KNIME workflows. The workbench is shown in Figure 15.2, and its various components are the following:
• KNIME Explorer (on the top-left side of the window).
• Workflow Coach (on the middle-left side of the window).
• Node Repository (on the bottom-left side of the window).
• Workflow Editor (in the middle of the window).
• Description and KNIME Hub (on the upper-right side of the window).
• Console and Node Monitor (on the lower side of the window).
• Outline (on the bottom of the window).

The Workflow Editor is the component where workflows are assembled; it contains the different tasks, represented by nodes. A new workflow can be created by choosing File → New, as shown in Figure 15.3. A new window appears, as shown in Figure 15.4; select "New KNIME Workflow" and click Next. Give a name and click Finish, as shown in Figure 15.5. The editor window is shown in Figure 15.6. The visual properties of a workflow can be changed by clicking on the "Workflow Editor Settings" button on the toolbar of the editor window; on clicking this button, a new window appears as shown in Figure 15.7.

FIGURE 15.3  Creating new workflow.



FIGURE 15.4  Wizard for creating workflow editor.

FIGURE 15.5  Last screen for creation of new editor.

In order to change the default workflow editor settings, choose File → Preferences → KNIME → KNIME GUI → Workflow Editor. The KNIME Explorer is used to manage the workflows. The different items shown in the explorer are the following:
• Workflow
• Workflow group
• Data file
• Component/metanode

The KNIME context menu appears on right-clicking any item in the KNIME Explorer, as shown in Figure 15.8; it lists the operations that can be performed on the workflow. A workflow is a sequence of analysis steps. An example of such a sequence is given below:



FIGURE 15.6  Workflow editor.

FIGURE 15.7  Workflow editor settings window.

Step 1: Read data
Step 2: Clean data
Step 3: Filter data
Step 4: Train a model

In KNIME, tasks are denoted by nodes. A node, denoted by a colored box with input and output ports, is a single processing unit of a workflow and provides functionality such as reading from files, writing to files, training models, transforming data, creating visualizations, and so on. Data are transferred from the output port of one node to the input port of another node through a connection. A node can be in one of four states, as shown in Figure 15.9:
State 1: Inactive and not configured – denoted by red color
State 2: Configured but not executed – denoted by yellow color
State 3: Executed successfully – denoted by green color
State 4: Executed with errors – denoted by red color with a cross



FIGURE 15.8  KNIME context menu.

FIGURE 15.9  Four states of a node.

The state of a node can be changed by executing, configuring, or resetting it. Nodes can be selected, moved, or replaced in a workflow. A KNIME workflow can be saved as a .knwf file (for a single workflow) or a .knar file (for a group of workflows). The KNIME workbench consists of the following components; a detailed description of all of them is beyond the scope of this chapter.
• Menu Bar, which contains menus for File, Edit, View, Node, and Help.
• Tool Bar, which contains many options such as New, Save, Save As, Save All, Undo/Redo, Node Alignment options, Auto Layout, Configuration of selected node, Execute selected node, Execute all nodes, Cancel all selected, Cancel all running nodes, Reset selected nodes, Edit custom node name and description, Open first out-port view, Open the first view of the selected node, Open the Add Metanode Wizard, Append the IDs to Node Name, Hide all node names, Do one loop step, Pause loop execution, Resume loop execution, Open the settings dialog for the workflow editor, and Open a dialog to edit usage and layout of component configuration nodes and views.
• KNIME Explorer, showing the list of workflow projects available in the selected workspace.
• Workflow Coach, showing the list of the most likely nodes to follow the currently selected node.
• Workflow Editor, onto which a node can be selected and dragged from the Node Repository.
• Node Description, showing the summary of the selected node.
• Node Repository, containing all nodes in the KNIME installation.
• Outline, containing an overview of the content of the Workflow Editor.
• Console, displaying errors and warning messages to the user.



15.3  SOFTWARE OVERVIEW FOR NETWORKING TECHNIQUES

There are a number of software packages available for different network techniques, or, in other words, for project management. A list of popular software is shown in Table 15.2. This section also demonstrates a free open-source package called Orangescrum, which can be downloaded from https://orangescrum.com/. One has to register on this website in order to either download the open-source version or use its cloud version. After registration, the screen in Figure 15.10 will appear. Give a project name (e.g., "sampleproject"), and a screen as shown in Figure 15.11 will appear. Start a new project by clicking on the New button on the upper-left side of the window and selecting Project from the pull-down menu, as shown in Figure 15.12. A new window will appear, as shown in Figure 15.13. Click on "Task", resulting in the window shown in Figure 15.14. Click on Create Task, and the Create Task window shown in Figure 15.15 appears. On-screen explanations for the various items on the window are given in brief, after which the user can input the tasks. In this way, an entire project's tasks can be entered, as shown in Figures 15.16 and 15.17. Now, different kinds of analysis can be performed on the entered data.

Microsoft Project is a very popular package for drawing and analyzing PERT/CPM networks, but it does not come free of cost. Thus, this section shows the drawing of a network diagram with SmartDraw. SmartDraw is available both as a proprietary desktop version and as a cloud version on which learners and analysts can work. SmartDraw is helpful for drawing flowcharts, floor plans, Gantt charts, maps, mind maps, network diagrams, organization charts, activity networks, affinity diagrams, authority matrices, balanced scorecard analyses, brand essence wheels, cause-and-effect diagrams, DTs, electrical diagrams, cycle diagrams, family trees, funnel charts, form designs, genograms, geography maps, growth-share matrices, landscape drawings, mechanical diagrams, marketing mixes, pedigree charts, puzzle piece diagrams, pyramid charts, roads and freeways, rack diagrams, planograms, sales proposition charts, science diagrams, shelving designs, software diagrams, step charts, swimlane charts, SWOT diagrams, vehicle diagrams, Venn diagrams, workflows, and wireframes. This section

FIGURE 15.10  Installation option for Orangescrum after registration.



TABLE 15.2  List of Project Management or Networking Software

Instagantt (https://instagantt.com)
• Provides functions for the critical path method, Gantt and workload management; provides board and Kanban views.
• Powerful and easy-to-use tool.

TeamGantt (https://www.teamgantt.com/)
• Provides functions for the critical path method, Gantt and workload management; provides board and Kanban views.
• Powerful and easy-to-use tool.
• Features include baselines, drag-and-drop facility, and viewing multiple projects in one Gantt chart.

ProofHub (https://www.proofhub.com/)
• Managers can plan projects, visualize tasks, and modify schedules.
• Basic features include multiple calendar views, timesheets, file and document sharing, real-time in-app and desktop notifications, a chat tool for communication, markup tools for reviewing and suggesting edits, and bookmarking of projects, tasks, discussions, etc.

Wrike (https://www.wrike.com)
• Allows one to see responsibilities and performance.
• Basic features include critical path determination, folder hierarchy, and collaborative team editing.

Smartsheet (https://www.smartsheet.com/)
• Offers a platform where all types of project information can be kept.
• Basic features include spreadsheet templates, predecessor tasks, and automated workflows.

GanttPRO (https://ganttpro.com/)
• A systematic Gantt chart tool.
• Basic features include a workload tracking and management module, drag-and-drop facility, splitting tasks and subtasks, and merging dependencies.

Microsoft Project (https://www.microsoft.com/en-us/store/collections/project/pc)
• Frequently applied and practiced software for project management.
• Basic features include lists, attachments, schedule development, resource assignment, progress tracking, budget analysis, templates, and advanced reporting.

ClickUp (https://www.clickup.com/)
• Allows users to schedule, manage dependencies, and prioritize tasks.
• Basic features include multiple views, assigned comments, task tray, notepad, notifications, integrations, and dark mode.

Toggl Plan (https://toggl.com/plan/)
• Offers Gantt charts and Kanban boards.
• Basic features include drag and drop, a clean interface, team view, and Kanban boards.

Monday (https://www.monday.com/)
• Provides CRM tools, project management, and task management.
• Basic features include CRM tools, collaboration tools, customization features, in-app automation, and use of templates.

shows how to draw a network diagram. The cloud version, in which diagrams can be drawn, is available at https://cloud.smartdraw.com/. The cloud version of the software looks like the one shown in Figure 15.18. Click on the network diagram option and draw the network.
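Several of the tools in Table 15.2 advertise critical path determination. As a rough illustration of what that computation involves, the following Python sketch performs the standard forward and backward passes; the activity names and durations are hypothetical, not taken from the book.

```python
# Hypothetical activity network: each activity has a duration and predecessors.
activities = {
    "A": (3, []),          # (duration, list of predecessors)
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (2, ["B", "C"]),
    "E": (3, ["C"]),
    "F": (1, ["D", "E"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for act in activities:                      # dict order already respects predecessors here
    dur, preds = activities[act]
    ES[act] = max((EF[p] for p in preds), default=0)
    EF[act] = ES[act] + dur

project_duration = max(EF.values())

# Backward pass: latest finish (LF) and latest start (LS).
LF, LS = {}, {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    successors = [s for s, (_, preds) in activities.items() if act in preds]
    LF[act] = min((LS[s] for s in successors), default=project_duration)
    LS[act] = LF[act] - dur

# Critical activities have zero slack (LS == ES).
critical_path = [a for a in activities if LS[a] == ES[a]]
print("Project duration:", project_duration)   # 11 for this data
print("Critical path:", critical_path)         # ['A', 'C', 'E', 'F'] for this data
```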



FIGURE 15.11  Opening screen for Orangescrum – Cloud version.

FIGURE 15.12  Option to create new project.

15.4  SOFTWARE OVERVIEW FOR MARKOV PROCESS AND MARKOV CHAIN

There are not many software packages dedicated to Markov chains. A Python library named PyEmma can solve Markov problems; it is used for the estimation, validation, and analysis of Markov models in a Python environment. The features of PyEmma are listed below (http://emma-project.org/latest/); a small numerical illustration follows the lists below.
• PyEmma is capable of reading molecular dynamics data formats.
• PyEmma can perform time-lagged independent component analysis.
• PyEmma can perform clustering analysis/state space discretization.
• PyEmma can estimate and validate Markov state models.
• Bayesian estimation of Markov state models can also be done by PyEmma.
• Estimation of hidden Markov models can be done by PyEmma.
• PyEmma can perform transition path analysis.


FIGURE 15.13  New project window for Orangescrum.

FIGURE 15.14  New project’s window.

• The respective code is hosted on GitHub.
• Coding is generally done in Python.

Some of the Python functions of PyEmma include the following:
• markov_model
• timescales_msm
• estimate_markov_model
• bayesian_markov_model
• timescales_hmsm
• estimate_hidden_markov_model
• bayesian_hidden_markov_model

The classes under which the above functions work are listed below:
• ImpliedTimescales
• ChapmanKolmogorovValidator
• MaximumLikelihood
• BayesianMSM
• OOMReweightedMSM
• BayesianHMSM
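PyEmma targets large molecular-dynamics problems, but the basic quantities that any Markov-chain tool estimates can be illustrated with plain NumPy. The sketch below is a generic illustration with a made-up transition matrix, not PyEmma code:

```python
import numpy as np

# Hypothetical 3-state transition matrix: row i holds P(next state = j | current = i).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

# Multi-step transition probabilities: P^n gives the n-step transition matrix.
P3 = np.linalg.matrix_power(P, 3)
print("3-step transition matrix:\n", P3)

# Stationary (steady-state) distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
pi = pi / pi.sum()
print("Stationary distribution:", pi)   # satisfies pi @ P == pi
```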




FIGURE 15.15  Window for creating new task.

FIGURE 15.16  Entering data in creating task window.

15.5  SOFTWARE OVERVIEW FOR REGRESSION

A previous chapter in this book has already demonstrated regression, which also falls within the periphery of predictive analytics. Regression can be done even with simple Excel; Table 15.3 shows a list of other software implementing regression. The ordinary Excel sheet is a powerful tool that is available to almost everybody and is easy to use and understand, and Excel has an inexpensive extension known as XLSTAT, which offers significant features, especially for multivariate analysis, that cannot be performed in ordinary Excel. Let us first start with normal Excel and check its features. Excel can be used even for multiple regression, as shown below. Consider the data shown in Figure 15.19. First, click on the Data menu on the menu bar and then select "Data Analysis" (Figure 15.20). A new Data Analysis window will appear (Figure 15.21). Select "Regression" and click OK.


FIGURE 15.17  Entering an entire project.

FIGURE 15.18  Cloud version of smartdraw software.




TABLE 15.3  List of Some Regression Software

JASP (https://jasp-stats.org)
• Runs on Windows and Mac.
• Basic functions: ANOVA, T-test, descriptive statistics, reliability analysis, frequency tests, principal component analysis, exploratory factor analysis, confirmatory factor analysis, linear regression, Bayesian linear regression, logistic regression, correlation matrix, Bayesian correlation matrix, Bayesian correlation pairs.
• Provides facilities for drawing graphs.
• Results of regression analysis can be exported to an HTML file.

PSPP (https://www.gnu.org/software/pspp/)
• A free regression analysis software.
• Provides linear and binary logistic regression.
• Also provides correlation analysis.
• Data can be imported from and exported to spreadsheets, Gnumeric spreadsheet files, text files, CSV files, and TSV files.

Statcato (https://www.statcato.org/)
• Free, portable, Java-based software.
• Provides facilities for hypothesis tests, ANOVA, descriptive statistics, normality tests, nonparametric tests, and data visualization graphs.
• Provides features for linear regression, multiple regression, correlation matrix, nonlinear regression, and more.
• Results of regression analysis are stored in a log window.

Jamovi (https://www.jamovi.org/)
• Free regression analysis software.
• Runs on Windows, Linux, Mac, and Chrome OS.
• Techniques provided include regression analysis, exploration, T-test, ANOVA, frequency tests, factor analysis, Bayesian methods, survival analysis, meta-analysis, and more.
• Results of regression analysis can be saved in PDF or HTML format.

PAST (https://past.en.lo4d.com/windows)
• Runs on Windows and Mac.
• Provides techniques such as different types of regression analysis, ANOVA, correlation, normality tests, clustering, diversity tests, and time-series analysis.
• Provides various data visualization facilities such as histogram, pie chart, bar chart, mosaic chart, radar chart, network chart, 3D plot, and more.

KyPlot (https://www.kyenslab.com/en-us/)
• Provides techniques such as different types of regression analysis, descriptive analysis, contingency tables, parametric tests, nonparametric tests, multivariate analysis, survival analysis, and more.
• Various mathematical computations can also be performed, such as matrix operations, integration, 1D Fourier transformation, time-series analysis, spectral analysis, wavelet analysis, nonlinear optimization, and more.
• Supports data in txt and xls formats.

Matrix (https://www.matrix-software.com/)
• Free regression analysis software for Windows.
• Provides different types of regression analysis, including regression with different Box-Jenkins models.
• Different mathematical operations can also be performed, such as vector operations, matrix operations, solution of simultaneous equations, and more.

SHAZAM (https://www.econometrics.com/)
• Regression analysis software for Windows.
• Provides different types of regression analysis and descriptive statistics.
• Can be applied in many fields of study such as economics, biometrics, sociometrics, applied statistics, etc.
• Can import data in CSV format.
• The trial version can be used, with limited features, an unlimited number of times.

Draco Econometrics (https://approximatrix.com/products/draco/)
• Runs on Windows and Linux.
• Performs different types of regression analysis and one-factor ANOVA.
• Provides a data plotting feature as well.
• Supports txt, xls, xlsx, ods, and some other formats as input data formats.

As the "Input Y Range," select the entire Price column including its label, "Price", as shown in Figure 15.22. Then select the X ranges as shown in Figure 15.23, tick the Labels checkbox, and click OK. The results will appear in a separate sheet, as shown in Figure 15.24. Analyses such as principal component analysis can additionally be performed in XLSTAT.
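The same kind of multiple regression can also be reproduced in a script. The sketch below uses NumPy's least-squares solver on made-up data; the column names echo the Price example, but the numbers are illustrative, not the data in Figure 15.19.

```python
import numpy as np

# Illustrative data: price explained by two predictors (e.g., area and age).
area  = np.array([50.0, 65.0, 80.0, 95.0, 120.0])
age   = np.array([10.0,  8.0,  5.0,  3.0,   1.0])
price = np.array([110.0, 140.0, 175.0, 205.0, 255.0])

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(area), area, age])
coef, residuals, rank, _ = np.linalg.lstsq(X, price, rcond=None)

intercept, b_area, b_age = coef
print(f"price ≈ {intercept:.2f} + {b_area:.2f}*area + {b_age:.2f}*age")

# Fitted values and R² for a quick check against Excel's summary output.
fitted = X @ coef
ss_res = np.sum((price - fitted) ** 2)
ss_tot = np.sum((price - price.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```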

15.6  SOFTWARE OVERVIEW FOR LP

There are many software packages that implement LP; examples include LINGO, LINDO, and R, and LP can even be done simply in Excel. This section demonstrates LINGO. The installation of the software is very simple. The opening screen of LINGO is shown in Figure 15.25. The outer window is the main frame window; all the other windows


FIGURE 15.19  Sample data for multiple regression.

FIGURE 15.20  Selection of “Data Analysis” option.

FIGURE 15.21  Data analysis window.




FIGURE 15.22  Regression window.

FIGURE 15.23  Entering data in regression window.

are contained in this window. Once installed, the LINGO logo will be seen at the chosen location on your desktop, as shown in Figure 15.26. The menu bar contains many options, such as File, Edit, Solver, Window, and Help. Below the menu bar, there are different toolbar buttons that assist in writing the program in the editor, including New, Open, Save, Print, Cut, Copy, Paste, Undo, Redo, Find, Go To Line, Match Parenthesis, Solve, Solution, Matrix Picture, Options, Send to Back, Close All, Tile Windows, Help Topics, and Help. LINGO is an integrated package that combines many solvers; the solvers and tools integrated in LINGO are listed below:
• LP solver
• General nonlinear solver
• Global solver


FIGURE 15.24  Results of regression.

FIGURE 15.25  Opening screen of LINGO.

FIGURE 15.26  LOGO for LINGO.


DSS Software

323

FIGURE 15.27  Folders containing samples.

• Multistart solver
• Barrier solver
• Simplex solvers
• Mixed integer solver
• Stochastic solver
• Model and solution analysis tools
• Quadratic recognition tools
• Linearization tools

A sample model can now be opened. Choose File → Open and open the folder into which the LINGO folders were copied during installation, as shown in Figure 15.27. Open the "Samples" folder to see the LINGO model files, as shown in Figure 15.28; LINGO files have the extension .lg4. In order to view a sample program, say CHESS.lg4, select the file and open it; Figure 15.28 shows the resulting window. This formulated problem can now be solved using the Solver/Solve command located in the top portion of the LINGO window, as shown in Figure 15.29. After pressing the Solve button, two result windows appear, as shown in Figures 15.30 and 15.31. Figure 15.30 shows that the objective function value is 2692.31, along with some other details of the output. Figure 15.31 shows the solution report for CHESS.lg4, giving the details of the problem and the result. The solution report window shows the following:

• Objective function value.
• Total number of iterations.
• Elapsed runtime.
• Total number of variables.
• Total number of nonlinear variables.
• Total number of integer variables.
• Total number of constraints.
• Total number of nonlinear constraints.
• The description and value of each variable.
• Slack variables and their values.
• Dual prices for the variables.


FIGURE 15.28  LINGO program, CHESS.lg4.

FIGURE 15.29  Solver/Solve button to execute a LINGO program.

FIGURE 15.30  LINGO solver status.




FIGURE 15.31  Solution report for CHESS.lg4.

For machines without Windows, Linux, or MAC, LINGO programs can be executed from the command prompt. Now consider the LP given below, which will be executed in LINGO.

Maximize Z = 4x1 + 3x2 + 4x3 + 6x4

subject to the constraints:

x1 + 2x2 + 2x3 + 4x4 ≤ 80
2x1 + 2x3 + x4 ≤ 60
3x1 + 3x2 + x3 + x4 ≤ 80
x1, x2, x3, x4 ≥ 0
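Before building the LINGO model, the same LP can be cross-checked with a short script. The sketch below uses SciPy's linprog; this is an illustration outside LINGO, not part of the book's walkthrough, and it negates the objective because linprog minimizes by default.

```python
from scipy.optimize import linprog

# Maximize 4x1 + 3x2 + 4x3 + 6x4  <=>  minimize the negated objective.
c = [-4, -3, -4, -6]

# Left-hand sides and right-hand sides of the <= constraints.
A_ub = [
    [1, 2, 2, 4],
    [2, 0, 2, 1],
    [3, 3, 1, 1],
]
b_ub = [80, 60, 80]

# Non-negativity bounds for all four variables.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")

print("Optimal objective value:", -res.fun)
print("Optimal solution:", res.x)
```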

The objective function in this problem is

Maximize Z = 4x1 + 3x2 + 4x3 + 6x4

In LINGO, the objective function is written as shown below. Each mathematical expression in LINGO is terminated with a semicolon.

MAX = 4*X1 + 3*X2 + 4*X3 + 6*X4;



FIGURE 15.32  An example on LP.

The constraints are:

x1 + 2x2 + 2x3 + 4x4 ≤ 80
2x1 + 2x3 + x4 ≤ 60
3x1 + 3x2 + x3 + x4 ≤ 80
x1, x2, x3, x4 ≥ 0

In LINGO, the constraints can be written as shown below; these particular codes are entered in LINGO as shown in Figure 15.32. The expressions can be split into parts. Comments in LINGO start with an exclamation mark (!) and end with a semicolon. The model shown in Figure 15.32 is now solved as before, and the results are shown in Figures 15.33 and 15.34.

X1 + 2*X2 + 2*X3 + 4*X4