UNCERTAINTY TREATMENT USING PARACONSISTENT LOGIC
Frontiers in Artificial Intelligence and Applications
Volume 211

Published in the subseries
Knowledge-Based Intelligent Engineering Systems
Editors: L.C. Jain and R.J. Howlett

Recently published in KBIES:
Vol. 204. B. Apolloni, S. Bassis and C.F. Morabito (Eds.), Neural Nets WIRN09 – Proceedings of the 19th Italian Workshop on Neural Nets, Vietri sul Mare, Salerno, Italy, May 28–30, 2009
Vol. 203. M. Džbor, Design Problems, Frames and Innovative Solutions
Vol. 196. F. Masulli, A. Micheli and A. Sperduti (Eds.), Computational Intelligence and Bioengineering – Essays in Memory of Antonina Starita
Vol. 193. B. Apolloni, S. Bassis and M. Marinaro (Eds.), New Directions in Neural Networks – 18th Italian Workshop on Neural Networks: WIRN 2008
Vol. 186. G. Lambert-Torres et al. (Eds.), Advances in Technological Applications of Logical and Intelligent Systems – Selected Papers from the Sixth Congress on Logic Applied to Technology
Vol. 180. M. Virvou and T. Nakamura (Eds.), Knowledge-Based Software Engineering – Proceedings of the Eighth Joint Conference on Knowledge-Based Software Engineering
Vol. 170. J.D. Velásquez and V. Palade, Adaptive Web Sites – A Knowledge Extraction from Web Data Approach
Vol. 149. X.F. Zha and R.J. Howlett (Eds.), Integrated Intelligent Systems for Engineering Design

Recently published in FAIA:
Vol. 210. O. Kutz, J. Hois, J. Bao, B. Cuenca Grau (Eds.), Modular Ontologies – Proceedings of the Fourth International Workshop (WoMO 2010)
Vol. 209. A. Galton and R. Mizoguchi (Eds.), Formal Ontology in Information Systems – Proceedings of the Sixth International Conference (FOIS 2010)
Vol. 208. G.L. Pozzato, Conditional and Preferential Logics: Proof Methods and Theorem Proving
Vol. 207. A. Bifet, Adaptive Stream Mining: Pattern Learning and Mining from Evolving Data Streams
ISSN 0922-6389 (print) ISSN 1879-8314 (online)
Uncertainty Treatment Using Paraconsistent Logic
Introducing Paraconsistent Artificial Neural Networks
João Inácio da Silva Filho Santa Cecilia University, UNISANTA, Santos, Brazil
Germano Lambert-Torres Itajuba Federal University, UNIFEI, Itajuba, Brazil
and
Jair Minoro Abe
Paulista University, UNIP, São Paulo, Brazil
Amsterdam • Berlin • Tokyo • Washington, DC
© 2010 The authors and IOS Press. All rights reserved.
No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher.

ISBN 978-1-60750-557-0 (print)
ISBN 978-1-60750-558-7 (online)
Library of Congress Control Number: 2010926677
doi: 10.3233/978-1-60750-558-7-i

Publisher
IOS Press BV
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: [email protected]
Distributor in the USA and Canada
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: [email protected]
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS
Preface

Several strictly theoretical papers present Paraconsistent Logic as a good solution for treating situations in which Classical Logic, being binary, is ineffectual or cannot be applied. Such situations, like ambiguities, lack of definition (indefinition) and, above all, inconsistencies, appear often in descriptions of the real world.

By the end of the last century, interesting work showing applications of Paraconsistent Logic in several areas of Artificial Intelligence had already appeared in Brazil, mainly at the Polytechnic School of the University of São Paulo. Among the many works presented in the 1990s, the thesis "Methods of Applications of Paraconsistent Annotated Logic with annotation of two values (PAL2v) with Construction of Algorithm and Implementation of Electronic Circuits" stood out. Defended in 1999 by one of the authors of this book, it has since become reference material for various lines of research on the applications of Paraconsistent Logic. The thesis brought Paraconsistent Logic from a strictly theoretical field into simple, practical, and direct application, enabling Control Systems to treat situations not covered by Classical Logic, and thus achieving a significant advance in the treatment of contradictory signals. The methods were based on a class of Non-Classical Logic named Paraconsistent Annotated Logic with annotation of two values (PAL2v). Signal analysis using PAL2v permits several problems caused by contradictory and paracomplete situations to be treated in a way closer to reality, through the consideration of evidence. This interpretation method brought relevant results that led to the construction of the algorithm named "Para-Analyzer", which, implemented in a conventional computer language, provides direct application of the concepts of Paraconsistent Logic in Control Systems, Automation, and Robotics. The 1999 work also offered various suggestions on the application of a Paraconsistent Logic Controller (Para-Control) in Control Systems. The Para-Control demonstrated, for the first time, the applicability of Paraconsistent Logics in real and functional systems, and the concepts and methods presented there produced several Master's and PhD theses on the application of PAL2v in other fields of knowledge.

Building on this initial work, new research on applications has been carried out, and the basic concepts of PAL2v are presented in this book with a few changes: remodeled and adapted nomenclature, together with the new contributions that have appeared since 1999. With these adaptations and new considerations, such as the calculus of the Real Degree of Certainty and of the Interval of Certainty, the reach of PAL2v applications becomes wider, introducing precision and greater robustness into the systems that apply PAL2v for analysis and decision. The formation of Paraconsistent Analysis Systems, or Paraconsistent Analysis Nodes (PANs) as they are named in this work, acting in Paraconsistent Analysis Networks for decision making, and of Paraconsistent Artificial Neural Cells (PANCs) interconnected into Paraconsistent Artificial Neural Networks, promotes an Uncertainty Treatment that
forces the System to give an answer. This innovative form of Uncertainty Treatment enables the method presented in this book to be used in systems that deal with data originating from Uncertain Knowledge information banks, without the weight of conflict invalidating the calculus for decision making. A few examples of projects and systems that use the methods, with their main algorithms, accompany this book. Although directed at particular areas, these examples motivate the implementation of new and promising Uncertainty Treatment Systems with Paraconsistent Logic in several other fields of knowledge.

The authors would like to thank Professor Luis Fernando Pompeu Ferrara and Professor Maurício C. Mário for their support in the tests with the Learning Paraconsistent Artificial Neural Cell (lPANC), and Gilberto T. A. Holms, Computer Engineering student at UNISANTA (Santa Cecília University, Santos, SP, Brazil), for his help in the validation of the PAL2v algorithms. The authors also express their gratitude to Prof. Helga Gonzaga Martins and Zilma de Castro for the translation of this book.
List of Symbols

PL – Paraconsistent Logic
PAL – Paraconsistent Annotated Logic
PAL2v – Paraconsistent Annotated Logic with annotation of two values
USCP – Unitary Square on the Cartesian Plane
μ1 – Favorable Degree of Evidence
λ – Unfavorable Degree of Evidence
DC – Calculated Degree of Certainty
DCR – Real Calculated Degree of Certainty
DCmaxt – Maximum Degree of Certainty tending to True
DCmaxF – Maximum Degree of Certainty tending to False
DCe – Resultant Degree of Certainty
DCest – Estimated Degree of Certainty
Dctr – Degree of Contradiction
μE – Resultant Degree of Evidence
μER – Resultant Real Degree of Evidence
μctr – Normalized Degree of Contradiction
φ – Interval of Certainty
φE – Interval of Evidence
φER – Real Interval of Evidence
T – Inconsistent
t – True
F – False
⊥ – Paracomplete or Indeterminate
T→t – Inconsistent tending to True
T→F – Inconsistent tending to False
⊥→t – Indeterminate tending to True
⊥→F – Indeterminate tending to False
Qt→T – Quasi True tending to Inconsistent
QF→T – Quasi False tending to Inconsistent
Qt→⊥ – Quasi True tending to Indeterminate
QF→⊥ – Quasi False tending to Indeterminate
F→⊥ – False tending to Indeterminate
t→⊥ – True tending to Indeterminate
F→T – False tending to Inconsistent
t→T – True tending to Inconsistent
Q-t – Quasi True
PLC – Paraconsistent Logic Controller (Para-Control)
Vccs – Certainty Control Superior Value
Vcci – Certainty Control Inferior Value
Vctcs – Contradiction Control Superior Value
Vctci – Contradiction Control Inferior Value
¬ – Negation Logic Connective
∧ – Conjunction Logic Connective, or "AND"
∨ – Disjunction Logic Connective, or "OR"
→ – Logic Implication
P – Proposition
PAN – Paraconsistent Analysis Node
PANet – Paraconsistent Analysis Network
PANC – Paraconsistent Artificial Neural Cell
PANU – Paraconsistent Artificial Neural Unit
PANS – Paraconsistent Artificial Neural System
PANNet – Paraconsistent Artificial Neural Network
CerTF – Certainty Tolerance Factor
CtrTF – Contradiction Tolerance Factor
CtrCSV – Contradiction Control Superior Value
CtrCIV – Contradiction Control Inferior Value
CerCSV – Certainty Control Superior Value
CerCIV – Certainty Control Inferior Value
DecTF – Decision Tolerance Factor
TLV – Truth Limit Value
FLV – Falsehood Limit Value
sPANC – Standard Paraconsistent Artificial Neural Cell
aPANC – Analytical Paraconsistent Artificial Neural Cell
RaPANC – Real Analytical Paraconsistent Artificial Neural Cell
PANCSiLC – Paraconsistent Artificial Neural Cell of Simple Logical Connection
PANCSeLC – Paraconsistent Artificial Neural Cell of Selective Logical Connection
cPANC – Crossing Paraconsistent Artificial Neural Cell
PANCC – Paraconsistent Artificial Neural Cell of Complementation
PANCED – Paraconsistent Artificial Neural Cell of Equality Detection
PANCD – Paraconsistent Artificial Neural Cell of Decision
cPANCD – Crossing Paraconsistent Artificial Neural Cell of Decision
lF – Learning Factor
ulF – Unlearning Factor
Contents

Preface ...... v
List of Symbols ...... vii

Initial Comments ...... 1
Introduction ...... 1
A.1. Objectives ...... 1
A.2. Organization of the book ...... 2
Part 1. Paraconsistent Annotated Logic (PAL)

Chapter 1. Basic Notions of Paraconsistent Annotated Logic (PAL) ...... 7
Introduction ...... 7
1.1. Logic ...... 7
1.2. The Non-Classical Logic ...... 9
1.3. Paraconsistent Logic ...... 10
1.3.1. Historical Aspects of Paraconsistent Logic ...... 10
1.3.2. Inconsistent Theories and Trivial Theories ...... 11
1.3.3. Conceptual Principles of Paraconsistent Logic ...... 12
1.4. Paraconsistent Annotated Logic ...... 12
1.4.1. Representation of Paraconsistent Annotated Logic (PAL) ...... 13
1.4.2. First Order Paraconsistent Annotated Logic Language ...... 14
1.4.3. A Single Valued Paraconsistent Annotated Logic ...... 17
1.5. Paraconsistent Annotated Logic with Annotation of Two Values (PAL2v) ...... 20
1.5.1. PAL2v Language Primitive Symbols ...... 21
1.5.2. Considerations on Lattice Associated to Paraconsistent Annotated Logic with Annotation of Two Values (PAL2v) ...... 22
1.5.3. The Logic Negation of PAL2v ...... 25
1.6. Final Remarks ...... 26
Exercises ...... 27

Chapter 2. Paraconsistent Annotated Logic Application Methodology ...... 29
Introduction ...... 29
2.1. Paraconsistent Logic in Uncertain Knowledge Treatment ...... 29
2.2. Algebraic Interpretations of PAL2v ...... 30
2.2.1. The Unitary Square on the Cartesian Plane (USCP) ...... 31
2.2.2. Algebraic Relations Between the USCP and the PAL2v Lattice ...... 31
2.2.3. Geometric Relations Between the USCP and the PAL2v Lattice ...... 38
2.3. The Para-Analyzer Algorithm ...... 42
2.3.1. Paraconsistent Annotated Logic with Annotation of Two Values "Para-Analyzer" Algorithm ...... 44
2.4. Para-Analyzer Algorithm Application ...... 45
2.5. Final Remarks ...... 48
Exercises ...... 49
Part 2. Paraconsistent Analysis Networks (PANet)
Chapter 3. Fundamentals of Paraconsistent Analysis Systems ...... 53
Introduction ...... 53
3.1. Uncertainty Treatment Systems for Decision Making ...... 53
3.2. Uncertainty Treatment Systems for Decision Making Using PAL2v ...... 55
3.2.1. Study on the Representation of the PAL2v Lattice for Uncertainty Treatment ...... 55
3.2.2. The Interval of Certainty φ ...... 58
3.2.3. Representation of the Resultant Degree of Certainty ...... 59
3.2.4. The Estimated Degree of Certainty ...... 61
3.2.5. Input Data Variations in Relation to the Estimated Degree of Certainty ...... 70
3.2.6. The Real Degree of Certainty ...... 70
3.2.7. The Influence of Contradiction on the Real Degree of Certainty ...... 73
3.2.8. Representation of the Real Resultant Interval of Certainty ...... 77
3.2.9. Recovering the Values of Degrees of Certainty and Contradiction ...... 78
3.3. Algorithms for Uncertainty Treatment Through Paraconsistent Analysis ...... 82
3.3.1. PAL2v Paraconsistent Analysis Algorithm with Resultant Degree of Certainty Output ...... 82
3.3.2. PAL2v Paraconsistent Analysis Algorithm to Estimate the Degrees of Certainty and Evidence Input Values ...... 83
3.3.3. PAL2v Paraconsistent System Algorithm with Feedback ...... 84
3.4. Final Remarks ...... 85
Exercises ...... 85

Chapter 4. Paraconsistent Analysis System Configurations ...... 88
Introduction ...... 88
4.1. Typical Paraconsistent Analysis Node (PAN) ...... 88
4.1.1. Paraconsistent Analysis Node (PAN) Rules ...... 89
4.1.2. Transformation of the Real Degree of Certainty into Resultant Degree of Evidence ...... 90
4.1.3. Resultant Real Degree of Evidence μER ...... 91
4.1.4. The Normalized Degree of Contradiction μctr ...... 92
4.1.5. The Resultant Interval of Evidence φE ...... 95
4.2. The Algorithms of the Paraconsistent Analysis Nodes (PANs) ...... 102
4.2.1. PAL2v Paraconsistent Analysis Algorithm with Resultant Real Degree of Evidence Output ...... 102
4.2.2. PAL2v Paraconsistent Analysis Algorithm with Calculus of the Normalized Degree of Contradiction and Interval of Evidence ...... 103
4.3. Final Remarks ...... 104
Exercises ...... 105

Chapter 5. Modeling of Paraconsistent Logical Signals ...... 107
Introduction ...... 107
5.1. Contradiction and Paraconsistent Logic ...... 107
5.1.1. PAL2v Annotation Modeling ...... 109
5.1.2. Applications of Models for the Mining of Degrees of Evidence ...... 113
5.2. Treatment of Contradiction in the Modeling of the Evidence Signal ...... 120
5.3. Final Remarks ...... 122
Exercises ...... 123
Chapter 6. Paraconsistent Analysis Network for Uncertainty Treatment ...... 127
Introduction ...... 127
6.1. Paraconsistent Analysis Network (PANet) ...... 127
6.1.1. Rules for Paraconsistent Analysis Network ...... 128
6.1.2. Basic Configuration of a Paraconsistent Analysis Network ...... 128
6.1.3. Paraconsistent Analysis Networks Algorithms and Topologies ...... 130
6.1.4. PAL2v Paraconsistent Analysis Algorithm with the Disabling of the PAN Due to Indefinition ...... 135
6.2. Three-Dimensional Paraconsistent Analysis Network ...... 136
6.2.1. Paraconsistent Analyzer Cube ...... 137
6.2.2. Construction of a Paraconsistent Analyzer Cube ...... 137
6.3. Algorithms of the Paraconsistent Analyzer Cube ...... 144
6.3.1. Modeling of the Paraconsistent Analyzer Cube with the Value of the External Interval of Evidence ...... 144
6.3.2. Paraconsistent Analyzer Cube Algorithm Modeled with Interval of Evidence ...... 148
6.3.3. Modeling of a Paraconsistent Analyzer Cube with the Value of the External Degree of Contradiction ...... 149
6.3.4. Paraconsistent Analyzer Cube Algorithm with the External Degree of Contradiction ...... 153
6.4. Paraconsistent Analysis Network Topologies with Analyzer Cubes ...... 154
6.4.1. Paraconsistent Analysis Network with PAN and One Paraconsistent Analyzer Cube ...... 154
6.4.2. Analysis Network with Inconsistency Filter Composed of Paraconsistent Analyzer Cubes ...... 155
6.5. Final Remarks ...... 156
Exercises ...... 156

Part 3. Paraconsistent Artificial Neural Networks (PANNets)

Chapter 7. Paraconsistent Artificial Neural Cell ...... 163
Introduction ...... 163
7.1. Neural Computation and Paraconsistent Logic ...... 163
7.1.1. A Basic Paraconsistent Artificial Cell (bPAC) ...... 165
7.2. The Standard Paraconsistent Artificial Neural Cell (sPANC) ...... 167
7.2.1. sPANC Fundamental Concepts ...... 168
7.3. Composition of the Standard Paraconsistent Artificial Neural Cell (sPANC) ...... 183
7.3.1. Algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC) ...... 185
7.4. Final Remarks ...... 187
Exercises ...... 188

Chapter 8. Paraconsistent Artificial Neural Cell Family ...... 191
Introduction ...... 191
8.1. Family of Paraconsistent Artificial Neural Cells ...... 191
8.2. The Analytical Paraconsistent Artificial Neural Cell (aPANC) ...... 192
8.2.1. Algorithm of the Analytical Paraconsistent Artificial Neural Cell (aPANC) ...... 194
8.3. The Real Analytical Paraconsistent Artificial Neural Cell (RaPANC) ...... 194
8.3.1. Algorithm of the Real Analytical Paraconsistent Artificial Neural Cell (RaPANC) ...... 196
8.4. The Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC) ...... 197
8.4.1. Algorithm of the Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC) ...... 198
8.5. The Paraconsistent Artificial Neural Cell of Selective Logical Connection (PANCSeLC) ...... 199
8.5.1. Algorithm of the Paraconsistent Artificial Neural Cell of Selective Logical Connection (PANCSeLC) ...... 200
8.6. Crossing Paraconsistent Artificial Neural Cell (cPANC) ...... 201
8.6.1. Algorithm of the Crossing Paraconsistent Artificial Neural Cell (cPANC) ...... 202
8.7. Paraconsistent Artificial Neural Cell of Complementation (PANCC) ...... 203
8.7.1. Algorithm of the Paraconsistent Artificial Neural Cell of Complementation (PANCC) ...... 204
8.8. Paraconsistent Artificial Neural Cell of Equality Detection (PANCED) ...... 204
8.8.1. Algorithm of the Paraconsistent Artificial Neural Cell of Equality Detection (PANCED) ...... 206
8.9. Paraconsistent Artificial Neural Cell of Decision (PANCD) ...... 206
8.9.1. Algorithm of the Paraconsistent Artificial Neural Cell of Decision (PANCD) ...... 208
8.10. Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD) ...... 208
8.10.1. Algorithm of the Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD) ...... 210
8.11. Final Remarks ...... 210
Exercises ...... 211

Chapter 9. Learning Paraconsistent Artificial Neural Cell ...... 213
Introduction ...... 213
9.1. Learning Paraconsistent Artificial Neural Cell (lPANC) ...... 213
9.1.1. Learning of a Paraconsistent Artificial Neural Cell ...... 215
9.1.2. Algorithm of the Learning Paraconsistent Artificial Neural Cell (lPANC) (for Truth Pattern) ...... 217
9.1.3. Algorithm of the Learning Paraconsistent Artificial Neural Cell (lPANC) (for Falsehood Pattern) ...... 219
9.1.4. Recognition of the Pattern to be Learned ...... 220
9.1.5. Unlearning of a Paraconsistent Artificial Neural Cell ...... 221
9.2. Studies on the Complete Algorithm of the lPANC with Learning and Unlearning ...... 222
9.2.1. Complete Algorithm of the Learning of the Paraconsistent Artificial Neural Cell (lPANC) ...... 224
9.3. Results Obtained in the Training of a Learning Paraconsistent Artificial Neural Cell (lPANC) ...... 225
9.4. Training of a lPANC with the Maximum Values of the Learning (lFT) and Unlearning (ulFT) Factors ...... 226
9.4.1. Simplified Representation ...... 227
9.4.2. lPANC Tests with Variations in the Values of the Learning lFT and Unlearning ulFT Factors ...... 228
9.4.3. lPANC Tests with Applications of Several Patterns of Different Values and Maximum Learning Factor ...... 229
9.5. Final Remarks ...... 232
Exercises ...... 232

Chapter 10. Paraconsistent Artificial Neural Units ...... 234
Introduction ...... 234
10.1. Para-Perceptron – The Paraconsistent Artificial Neuron ...... 234
10.2. The Biological Neuron ...... 235
10.3. The Artificial Neuron ...... 239
10.4. Composition of the Paraconsistent Artificial Neuron Para-Perceptron ...... 241
10.4.1. Learning Algorithm with the Inclusion of the Crossing Cell of Decision ...... 245
10.5. Para-Perceptron Models ...... 245
10.6. Test of a Typical Paraconsistent Artificial Neural Para-Perceptron ...... 247
10.7. Other Types of Paraconsistent Artificial Neural Units (PANUs) ...... 249
10.7.1. The Learning Paraconsistent Artificial Neural Unit with Activation Through Maximization (lPANUAM) ...... 249
10.7.2. Learning Paraconsistent Artificial Neural Unit of Control and Pattern Activation (lPANUCPA) ...... 250
10.7.3. Learning Paraconsistent Artificial Neural Unit with Instantaneous Analysis (lPANUIA) ...... 251
10.7.4. Learning Paraconsistent Artificial Neural Unit Through Pattern Equality (lPANUPE) ...... 251
10.7.5. Learning Paraconsistent Artificial Neural Unit Through Repetition of Pattern Pairs (lPANURPP) ...... 252
10.7.6. The Paraconsistent Artificial Neural Unit with Maximum Function (PANUmaxf) ...... 253
10.7.7. The Paraconsistent Artificial Neural Unit with Minimum Function (PANUminf) ...... 254
10.7.8. The Paraconsistent Artificial Neural Unit of Selective Competition (PANUSeC) ...... 255
10.7.9. The Paraconsistent Artificial Neural Unit of Pattern Activation (PANUPact) ...... 256
10.8. Final Remarks ...... 257
Exercises ...... 258

Chapter 11. Paraconsistent Artificial Neural Systems ...... 260
Introduction ...... 260
11.1. Paraconsistent Artificial Neural System of Conditioned Learning (PANSCL) ...... 260
11.1.1. Conditioned Learning ...... 261
11.2. Basic Configuration of the PANSCL ...... 263
11.2.1. Test with PANSCL ...... 269
11.3. Paraconsistent Artificial Neural System and Contradiction Treatment (PANSCT) ...... 270
11.3.1. Pattern Generator for the PANSCT ...... 271
11.3.2. PANSCT Block Diagram ...... 271
11.3.3. The Basic Configuration of the PANSCT ...... 272
11.3.4. Tests with PANSCT ...... 274
11.4. Final Remarks ...... 277
Exercises ...... 277

Chapter 12. Architecture of the Paraconsistent Artificial Neural Networks ...... 279
Introduction ...... 279
12.1. Proposal of the Paraconsistent Artificial Neural Networks Architecture ...... 280
12.1.1. Description of the PANNet Functioning ...... 282
12.2. Learning, Comparison, and Signal Analysis Modules of PANNet ...... 283
12.2.1. Paraconsistent Artificial Neural Unit of Primary Learning and Pattern Consultation ...... 283
12.2.2. Paraconsistent Artificial Neural Unit of Pattern Activation ...... 285
12.2.3. Paraconsistent Artificial Neural Unit of Selective Competition ...... 286
12.2.4. Paraconsistent Artificial Neural System of Knowledge Acquisition (PANSKA) ...... 288
12.3. Logical Reasoning Module for the Control of a PANNet ...... 290
12.3.1. The Paraconsistent Artificial Neural Network of Logical Reasoning (PANNLR) ...... 291
12.3.2. Configuration of the Paraconsistent Artificial Neural Network System of Logical Reasoning (PANSLR) ...... 292
12.3.3. Paraconsistent Artificial Neural System of Logical Reasoning of Minimization (PANSLRMin) ...... 293
12.3.4. Paraconsistent Artificial Neural System of Logical Reasoning of Maximization (PANSLRMax) ...... 296
12.3.5. Paraconsistent Artificial Neural System of Exclusive OR Logical Reasoning (PANSExORLR) ...... 298
12.3.6. Paraconsistent Artificial Neural System of Complete Logical Reasoning (PANSCLR) ...... 300
12.4. Final Remarks ...... 301
Exercises ...... 302

Final Comments ...... 304
Introduction ...... 304
E.1. Applications ...... 305
E.2. Final Remarks ...... 308
References ...... 309
Initial Comments
Introduction

Control Systems in the Automation and Robotics area, and the Expert Systems used in Artificial Intelligence, generally work with conventional, or Boolean, logic. In this logic, also known as Classical Logic, the world is described through only two states, which at times are inadequate to portray some real-world situations. We know from experience that in descriptions of the real world the appearance of inconsistencies and ambiguities is very common, and Classical Logic, with its law of the excluded middle, cannot be applied in these situations, at least not directly. Due to the binary structure of these Systems, reasoning is always done with some "simplifications": inconsistent facts or situations are disregarded, or roughly summarized, because a complete description working with only two states would take too long. Because of the need to design more efficient Expert Systems, capable of considering real situations that do not fit the binary forms of Classical Logic, various researchers have concentrated their efforts on finding applicable alternatives to Classical Logic, named Non-Classical Logics. The Non-Classical Logics investigate, among other things, regions excluded by Classical Logic, for example values that differ from "True" or "False", permitting a better framing of concepts like lack of definition, ambiguity, and inconsistency. Paraconsistent Logics belong to the Non-Classical Logic group and were built to give a non-trivial treatment to contradictory situations. In the main research centers worldwide, the theoretical structure of Paraconsistent Logic has been investigated in depth, and the results of this research and its possible applications have been presented in several papers, like those referred to in the bibliography. Essentially theoretical analyses, and some papers that permit programming using Paraconsistent Annotated Logic, demonstrate that Paraconsistent Logics are better at framing the problems caused by situations of contradiction, which appear when we deal with descriptions of the real world. Some work on the application of Paraconsistent Logics through computer programs, as well as logic-gate designs for hardware applications, has already been presented; however, there is still a need for new forms of adjustment for direct and improved applications.
A.1 Objectives

This book presents the methods that provide the means of application of Paraconsistent Logics, enabling a new, in-depth line of research in this area of Non-Classical Logics. We will see that the methods exposed bring very rewarding results in obtaining new technologies capable of promoting efficient ways of
treating information mined from uncertain and/or contradictory knowledge, which may be applied in diverse areas of Engineering and Artificial Intelligence, like Robotics, Automation, and Expert Systems. The objectives of this book may therefore be summarized as follows:
1- Show the application methods of Paraconsistent Logic based on its theoretical structure, aiming at practical implementations in Artificial Intelligence.
2- Construct algorithms derived from the theoretical basis of Paraconsistent Annotated Logic to be applied in computer programs of Expert and Control Systems.
3- Propose ways of applying the algorithms developed on Paraconsistent Annotated Logic in Expert Systems, Automation and Control Systems, and logic controllers for Robotics.
4- Study the implications, presenting proposals for Hybrid Control System designs that join Paraconsistent Annotated Logic and the theory of Artificial Neural Networks.
5- Implement designs of electronic circuits suited to accomplishing new forms of hardware control in Automation Systems, Artificial Intelligence, and Robotics, using the fundamentals of Paraconsistent Annotated Logic.
6- Contribute new results that will serve as reference for future applications of Paraconsistent Logics in computer and electronic Systems, thus offering new ways of treating signals originating from uncertain knowledge.
A.2 Organization of the book

This book was organized to facilitate the understanding of the theory and the development of the methods that enable the application of Paraconsistent Logic in several areas of knowledge. So that the subjects referring to Paraconsistent Logic (theory, interpretation, and application) are presented in a sequential fashion, the chapters are divided into three main parts:

PART 1: NOTIONS OF PARACONSISTENT ANNOTATED LOGIC (PAL)
The first part is composed of two chapters, which summarize the basic theory and fundamentals of Paraconsistent Logic. The studies of the interpretation of Paraconsistent Annotated Logic, which result in the methods of application, begin in this part. The results of the studies carried out on the theoretical structure of Paraconsistent Annotated Logic are presented, along with the first concepts of the methodology capable of finding the values that translate its theoretical fundamentals into practical ones.
Chapter 1 presents Paraconsistent Logic with some of its main concepts, which place it in the family of Non-Classical Logics. One of its classes, called Paraconsistent Annotated Logic (PAL), is also presented. These theoretical principles give birth to the methods and uncertainty treatment algorithms presented in later chapters.
Chapter 2 presents the method and interpretation procedures that enable the direct application of Paraconsistent Annotated Logic (PAL) in several fields of
Artificial Intelligence. The initial procedures studied permit the visualization of the functioning of the Systems, or Paraconsistent Analysis Nodes (PANs), which will be the components of Paraconsistent Networks for uncertainty treatment.

PART 2: PARACONSISTENT ANALYSIS NETWORKS (PANets)
The second part consists of four chapters and brings the methods of using Paraconsistent Logic in the formation of analysis networks capable of treating data representative of uncertain information.
Chapter 3 presents the main fundamentals for applying the Paraconsistent Annotated Logic with annotation of two values (PAL2v) analysis methodology in uncertainty treatment Systems. The algorithms specially constructed on the concepts of PAL2v are studied. These algorithms form the Paraconsistent Analysis Systems, which are processed in decision networks with the objective of giving adequate treatment to information originating from Uncertain Knowledge databases.
Chapter 4 presents the configurations of Systems, or Paraconsistent Analysis Nodes (PANs), which form networks capable of treating information originating from Uncertain Knowledge. The PANs are representations of algorithms obtained through the methods and interpretative procedures of the lattice associated with the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The PANs are processed to form networks configured for analysis and decision making from information that may be contradictory.
In Chapter 5 the techniques for modeling the evidence signals applied at the inputs of the Systems, or Paraconsistent Analysis Nodes (PANs), are studied. The PANs compose the Paraconsistent Analysis Networks for decision making. From expert knowledge, the feature values to be analyzed by the PANs are interpreted as Degrees of Evidence and receive modeling and treatment before being applied to the network.
Chapter 6 presents some Paraconsistent Analysis Network configurations, which use the PAL2v algorithms for the treatment of data originating from Uncertain Knowledge. Besides the configurations with the algorithms studied previously, a special configuration involving two propositions functioning in a three-dimensional way is presented. Constructed with a special kind of PAL2v algorithm, this configuration may be considered a Paraconsistent Analyzer Cube capable of modeling contradictions in a decision network.

PART 3: PARACONSISTENT ARTIFICIAL NEURAL NETWORKS (PANNets)
The third and last part is composed of six chapters and brings all the fundamentals of the Paraconsistent Artificial Neural Networks, with a suggested architecture for applications.
In Chapter 7 a comparative study is made between the analysis carried out by the Paraconsistent Analysis Nodes (PANs) and the action of the human brain. Starting from this study, a configuration named Paraconsistent Artificial Neural Cell (PANC) is constructed. This is an algorithm specially developed to be the component of a
Paraconsistent Analysis Network. All the cells that compose the PANC family originate from the algorithm of a standard PANC.
In Chapter 8, taking the Standard Paraconsistent Artificial Neural Cell (sPANC) as a base, a family of Paraconsistent Artificial Neural Cells is constituted. Each component of this PANC family has a different function in the analysis of the signals that traverse the network. The particular algorithm of each of these cells is specially configured to perform a determined function in the analysis of information originating from uncertain data, which may be contradictory.
Chapter 9 presents a detailed study of the Learning Paraconsistent Artificial Neural Cell (lPANC). Due to the importance of this cell, given its characteristics of learning and unlearning in a training process, the chapter is dedicated to the study of its functioning and behavior under certain conditions. The test results obtained with training algorithms are presented by means of value tables and graphs, showing the functional characteristics of the lPANC more clearly.
In Chapter 10, the Paraconsistent Artificial Neural Units (PANUs) are studied. The PANUs are groups of Paraconsistent Artificial Neural Cells (PANCs) properly interconnected, forming blocks with distinct configurations and defined functions. Several types of PANU are presented. Each one has its component cells connected so as to treat and calculate information signals by means of analysis based on the structured concepts of Paraconsistent Annotated Logic. The first and most important of them is the Paraconsistent Artificial Neuron called Para-Perceptron.
In Chapter 11, the Paraconsistent Artificial Neural Systems (PANSs) are studied. These are groups of Paraconsistent Artificial Neural Units (PANUs) properly interconnected. The different configurations of the PANUs form blocks with special functional characteristics, which are called Paraconsistent Artificial Neural Systems (PANSs). The PANSs compose different configurations that perform the treatment, analysis, and redirecting of signals; they are integral parts of the Paraconsistent Artificial Neural Network (PANNet).
Chapter 12 presents a proposal for a Paraconsistent Artificial Neural Network (PANNet) architecture capable of processing signals with procedures inspired by the functioning of the human brain. The proposed Paraconsistent Artificial Neural Network is built entirely with Paraconsistent Artificial Neural Cells, forming Units (PANUs) and Paraconsistent Systems (PANSs) that direct, treat, and analyze information signals.
Finally, the Final Comments chapter reviews the work already developed with applications of Paraconsistent Logic and the state of the art of this research.
Part 1
Paraconsistent Annotated Logic (PAL)
CHAPTER 1
Basic Notions of Paraconsistent Annotated Logic (PAL)

Introduction

Paraconsistent Logic is presented in this chapter as a Non-Classical Logic able to challenge the basic laws of Classical Logic. The main topics of the theoretical basis of Paraconsistent Logic are briefly exposed. Although abbreviated, this introduction presents enough equations and demonstrations to show that Paraconsistent Logic is a complete logic, able to support contradiction in its structure without becoming trivial. Paraconsistent Annotated Logic (PAL), a class of Paraconsistent Logic in which propositions are accompanied by annotations representing Degrees of Evidence or Belief, is also presented in this chapter. The evidence and theoretical procedures presented show that Paraconsistent Annotated Logic is complete and representative in its structure and theory.
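As a concrete preview of the annotation machinery developed in the following chapters, the short Python sketch below shows how a two-valued annotation can be handled in code. This is our own minimal illustration under stated assumptions, not an algorithm from the book: the formulas DC = μ1 − λ and Dct = μ1 + λ − 1 follow the standard PAL2v definitions of the Degrees of Certainty and Contradiction used later in the text, and all function and variable names are ours.

# Hedged sketch: a PAL2v annotation is a pair (mu, lam) of Favorable and
# Unfavorable Degrees of Evidence, both in [0, 1].
def pal2v_degrees(mu: float, lam: float) -> tuple[float, float]:
    dc = mu - lam         # Degree of Certainty
    dct = mu + lam - 1.0  # Degree of Contradiction
    return dc, dct

# The four extreme annotations of the lattice associated with PAL2v:
extremes = {
    "True (t)": (1.0, 0.0),
    "False (F)": (0.0, 1.0),
    "Inconsistent (T)": (1.0, 1.0),
    "Paracomplete (bottom)": (0.0, 0.0),
}
for label, (mu, lam) in extremes.items():
    dc, dct = pal2v_degrees(mu, lam)
    print(f"{label}: DC = {dc:+.1f}, Dct = {dct:+.1f}")

Running the sketch shows the intended reading: the true and false extremes yield maximum positive and negative certainty, while the inconsistent and paracomplete extremes yield zero certainty with maximum positive and negative contradiction, respectively.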
1.1 Logic

The science of Logic is the base and foundation of mathematics, and consequently of all technology as we know it today. Many of the scientific theories that created our modern science are based on classical logic. Furthermore, most electronic equipment and digital systems use the concepts of classical logic as the basis for their operation. Research shows that the study of logic began with the work of Aristotle (384–322 B.C.), a philosopher born in the city of Stagira, Macedonia. Aristotle searched for an instrument to understand a real and true world. Later, his work and that of his disciples were collected into a book called the Organon, where the essential part of his logic can be found in the chapter Analytica Priora. Aristotle designed a set of rigid rules so that conclusions could be accepted as logically valid. It was reasoning based on premises and conclusions, like: "every living being is mortal" (premise 1); it is verified that "a lion is a living being" (premise 2); therefore, "the lion is mortal" (conclusion). Seen this way, logic may be interpreted as "the study of the laws of valid reasoning", that is, the ways of thinking that result in correct and true conclusions. Therefore, from certain statements, there are ways to infer conclusions, reaching other statements that we can be sure are valid.
To investigate and establish relations among these statements, a language was created. In this language, statements are called sentences or propositions, and can only be qualified as false or true. Classical logical reasoning is based on four principles, often presented with the symbols employed in Classical Logic:
1- Principle of Identity: S = S. Every proposition or object is identical to itself.
2- Principle of Propositional Identity: S → S. Every proposition implies itself.
3- Principle of the Excluded Middle (Excluded Third): S ∨ ¬S. Of two contradictory propositions, that is, where one denies the other, one of them is true.
4- Principle of Non-Contradiction: ¬(S ∧ ¬S). Of two contradictory propositions, one of them is false.
Under this reasoning, classical logic is binary; a statement is either false or true. It does not admit being partially true and partially false at the same time. With this supposition and the law of non-contradiction, where one statement cannot contradict another, all possibilities were covered by the laws of Classical Logic, thus forming the base of western logical reasoning. This logical reasoning has been used for many centuries, and the progress achieved by mankind has been supported by these simple principles that rule classical logic. The formal language of classical logic is, in a certain way, adequate to represent knowledge. Its binary reasoning adapted well to the operation of electronic devices that can act as on-off switches, the most important being the transistor, developed in the 1950s. In digital computing systems, transistors work as on-off switches, which can easily and adequately represent the fundamentals of classical logic by means of electronic circuits. Even with all our technology, the laws of binary classical logic are still the fundamentals used in the operation of most instruments and machine control systems. However, the increasing demands of the technology market require production equipment able to process and control systems adequately under conditions never imagined before. The refinement and the greater amount of information about the environment and raw materials aim at increasing production, resulting in greater quality and accuracy. To make this possible, the real-world information used in decision making must be ever more meticulous and closer to reality. These demands limit, in some cases, the application of classical logic, constrained by its own rigid principles. In Artificial Intelligence Systems, Robotics, Automation, and Control, it is through the use of reliable information, portraying the world more precisely, that computational programs and electronic circuits are able to make more accurate decisions. It is at this point that systems using only binary classical logic are unable to give a good response to this necessity of the technological world.
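As a small sanity check, our own illustration rather than anything from the book, the Python fragment below enumerates the two classical truth values of a proposition S and confirms that the Principles of the Excluded Middle and of Non-Contradiction hold for every classical valuation.

# Enumerate the two classical truth values of S and verify both principles.
for S in (True, False):
    assert S or (not S)          # Excluded Middle:   S v ~S
    assert not (S and (not S))   # Non-Contradiction: ~(S ^ ~S)
print("Both principles hold for every classical valuation of S.")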
1.2 The Non-Classical Logic
Thorough studies have revealed that not all real-world situations may be classified simply as true or false. When precision is needed to describe something, it is hard to establish limits that allow us to make affirmative or negative statements concerning the quality of things. The limits between false and true are almost always indefinite, uncertain, ambiguous and even contradictory. It is clear that, using only binary classical logic, the available technological resources are unable to automate activities related to problems involving situations which were not considered in its groundwork.
Recent research in AI aims at incorporating features of human intelligence into analysis systems through algorithmic processing. In a number of human experiences concerning decision making, the information on which the decisions are based leads to complex problems, because it cannot be affirmed categorically that the information is "true" or "false", "yes" or "no", as demanded by the laws of classical logic. In practice, digital control systems whose processing uses classical logic only find it difficult to overcome the barriers imposed by rigid binary laws. High-complexity problems often show up: these are the ones where systems have to make decisions and solve problems when fed information carrying a great number of variables of different types. In some cases, the relations among the variables are non-linear, uncertain, inaccurate or inconsistent, and carry a large amount of data from which the most relevant must be chosen. When faced with uncertain or contradictory information, the system has little chance of treating the signals in a way that would allow a good decision to be made. We verify that, even when the contradiction does not break or stop data processing, the time spent analyzing contradictory situations reduces system efficiency considerably.
To give analysis systems quality, research has been developed on the application of different types of logics to substantiate alternative systems of uncertainty treatment and control. Several research efforts in Computer Science and Artificial Intelligence have been developed with the objective of overcoming these complex problems. Nowadays, large research centers study digital system designs able to work with new types of logics, whose basic theoretical concepts are more flexible and therefore adaptable to the complex problems found in AI. These studies opened the possibility of using different types of logic in decision-making system projects which, in a certain way, are not tied to the rigid laws of classical logic.
Non-classical logics were created to give satisfactory answers to these situations poorly treated by binary classical logic. Therefore, the non-classical logics are those that violate the binary suppositions. In these non-classical logics, it is established that the concept of duality must coexist with its opposite to obtain better accuracy in the conclusions for decision making. Roughly speaking, non-classical logics compose two large groups: 1) those that complement the object of classical logic, and 2) those that compete with classical logic.
The logics belonging to the first category are said to be complementary to classical logic and, as the name says, they complement aspects that classical logic is not able to express. They have classical logic as a base and widen its power of expression. They comprise, as examples, the epistemic logics (logic of beliefs, logic of
knowledge, logic of doubts, logic of justification, logic of preference, logic of decision, logic of acceptance, logic of confirmation, logic of opinion, deontic logics, etc.), the traditional modal logics (system T, system S4, system S5, multimodal systems, etc.), intensional logics, action logics (logic of imperatives, logic of decision, etc.), logics for physical applications (temporal logics (linear, non-linear, etc.), chronological logic, space logic, Leśniewski logic, etc.), combinatory logics (related to the λ-calculus), infinitary logics, conditional logics, etc.
In the second group we find the logics that rival classical logic (also called heterodox); they limit or change certain fundamental principles of traditional logic. Motivated mainly by the advances in Artificial Intelligence, a number of heterodox systems have recently been created: intuitionistic logics (intuitionistic logic without negation, Griss logic, etc.; such systems are well established: there is a constituted mathematics, and they yield interesting philosophical features), non-monotonic logics, linear logics, default logics, defeasible logics, abductive logics, and the multivalued (or multivalent) logics: Łukasiewicz logics, Post logics, Gödel logics, Kleene logics, Bochvar logics, etc., whose study is well advanced (there is a constituted mathematics and philosophical importance; they deal, for instance, with the subject of uncertainties, as does rough set theory). The group also includes the paracomplete logics (which restrict the Principle of the Excluded Third), the paraconsistent logics (which restrict the principle of non-contradiction: Cn systems, annotated logics, logic of paradox, discursive logic, dialectical logic, relevant logics, logic of inherent ambiguity, imaginary logics, etc.), the non-alethic logics (which are simultaneously paracomplete and paraconsistent), the non-reflexive logics (which restrict the principle of identity), self-referring logics, labeled logics, free logics, and quantum logics, among others.
All these studies confirm that the non-classical systems have a deep meaning, not only from the practical point of view but also from the theoretical one, breaking a paradigm of human reasoning that has ruled for over two thousand years.
1.3 Paraconsistent Logic
Among the number of ideas in non-classical logic, a family of logics has been developed whose main characteristic is the restriction of the principle of non-contradiction; this family was named Paraconsistent Logic. Therefore, paraconsistent logic is a non-classical logic which revokes the principle of non-contradiction and admits the treatment of contradictory signals in its theoretical structure.
1.3.1 Historical Aspects of Paraconsistent Logic
The forerunners of paraconsistent logic were the Polish logician J. Łukasiewicz and the Russian philosopher N. A. Vasilév who, around 1910, independently suggested the possibility of a logic that would restrict, for instance, the principle of non-contradiction. Vasilév actually succeeded in handling such a logic, naming it imaginary; however, neither of them had, at the time, the wide view of non-classical logic that we have today.
The first logician to frame a paraconsistent propositional calculus was the Pole S. Jaśkowski, a pupil of Łukasiewicz. In 1948, Jaśkowski published his ideas about logic and contradiction, showing how a paraconsistent sentential calculus could be constructed, given convenient motivation. Jaśkowski's system, which he named discursive logic, was later developed (from 1968 on) through the work of authors like J. Kotas, L. Furmanowski, L. Dubikajtis, N.C.A. da Costa and C. Pinter. Thus, a true discursive logic was built, comprising a first-order predicate calculus and higher-order logic.
The initial systems of paraconsistent logic covering all logical levels (propositional, predicate and description calculi, as well as higher-order logics) are attributed to N.C.A. da Costa (from 1954 on), independently of the work of the authors mentioned. Nowadays there are even paraconsistent set theories strictly stronger than the classical one, which they contain as a strict subsystem, as well as paraconsistent mathematics. The first algebraic versions of paraconsistent systems appeared around 1965 and are known as Curry algebras, named after the American logician H. Curry. The initial semantics of paraconsistent systems were investigated around 1976 and are known as valuation semantics.
The term paraconsistent logic was coined in 1976 by F. Miró Quesada, at a conference held during the III Latin American Symposium on Mathematical Logic, at Campinas State University, São Paulo, Brazil. Literally, "paraconsistent" means "beyond consistency". With all this work on the development of paraconsistent logic, it became possible to manipulate remarkably strong inconsistent information systems without the need to eliminate the contradictions and without falling into trivialization.
1.3.2 Inconsistent Theories and Trivial Theories
The most important reason for considering paraconsistent logic was to obtain theories in which inconsistencies are allowed without the risk of trivialization. In logics that are not conveniently distinguished from classical logic (for instance, regarding the concept of negation), the scheme A → (¬A → B) is in general valid (where 'A' and 'B' are formulas, '¬A' is the negation of 'A' and '→' is the implication symbol): ex falso sequitur quodlibet, that is, from a contradiction any formula may be deduced, or better, any formula becomes true. In fact, let us admit the contradictory formulas A and ¬A as premises. As observed above, A → (¬A → B) constitutes a valid scheme. Taking the premises into account, by the Modus Ponens deduction rule (from A and A → B we deduce B) we obtain ¬A → B. Applying the Modus Ponens rule once again to this last formula, we obtain B. However, formula B is arbitrary; thus, from contradictory formulas we may deduce any statement. This is the phenomenon of trivialization.
In the area of Artificial Intelligence, with the rise of non-classical logics, and now more specifically with paraconsistent logic, new concepts closer to reality have been considered and have contributed to the development of models and tools able to manipulate contradictions and ambiguities, giving room to the development of new technologies.
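The deduction above can be spelled out step by step. The sketch below (ours; the step labels are illustrative) prints the explosion derivation for an arbitrary goal formula:

```python
# Ex falso sequitur quodlibet, spelled out as a five-step derivation.
# The goal is a parameter: any formula whatsoever can be "proved".
def explosion_proof(goal):
    return [
        "1. A                  (premise)",
        "2. ¬A                 (premise)",
        f"3. A → (¬A → {goal})    (valid classical scheme)",
        f"4. ¬A → {goal}          (Modus Ponens on 1 and 3)",
        f"5. {goal}               (Modus Ponens on 2 and 4)",
    ]

for step in explosion_proof("B"):
    print(step)
# Replacing "B" with any other formula yields an equally valid proof:
# this arbitrariness is the trivialization that paraconsistency blocks.
```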
1.3.3 Conceptual Principles of Paraconsistent Logic
A summary of the theoretical principles that support paraconsistent logic may be stated as follows. It is known that the statements demonstrated as true in a theory are called theorems, and if all the sentences formulated in its language are theorems, the theory is said to be trivial. It is also known that a theory is consistent if among its theorems there are none that affirm the negation of another theorem of the same theory; in case this happens, the theory is called inconsistent. Given a (deductive) theory T, settled on a logic L, T is said to be consistent if there are no theorems such that one is the negation of the other; in the contrary hypothesis, T is denominated inconsistent. The theory T is called trivial if all the sentences (closed formulas) of its language are theorems; if this does not happen, T is non-trivial. If L is one of the common logics, like the classical one, the theory T is trivial if and only if it is inconsistent. In other words, logics like these do not separate the concepts of inconsistency and triviality, because according to classical logic an inconsistent theory is also trivial, and reciprocally: if a contradiction is accepted as valid, then any conclusion becomes derivable. As this is an undesired result, classical logic does not admit contradiction as an acceptable element without becoming trivial.
A logic L is called paraconsistent if it can work as the basis of inconsistent and non-trivial theories. This means that, except in certain specific circumstances that go beyond our study, paraconsistent logic is able to manipulate inconsistent information systems without the risk of trivialization. Another significant concept is that of paracomplete logic. A logic L is called paracomplete if it can be the logic subjacent to theories which violate the law of the Excluded Third (from two contradictory propositions, one of them is true); more precisely, L is said paracomplete if there are maximal non-trivial systems to which a given formula and its negation do not belong. Finally, a logic L is called non-alethic if L is paraconsistent and paracomplete.
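The separation between inconsistency and triviality can be made concrete with a toy check over finite sets of sentences; the '¬'-prefix encoding of negation below is our simplification for illustration:

```python
# Toy theories as finite sets of sentences; negation is a '¬' prefix.
def is_consistent(theorems):
    """Consistent: no sentence and its negation are both theorems."""
    return not any(("¬" + t) in theorems for t in theorems)

def is_trivial(theorems, language):
    """Trivial: every sentence of the language is a theorem."""
    return language <= theorems

language = {"p", "q", "¬p", "¬q"}
T = {"p", "¬p"}  # inconsistent...
print(is_consistent(T), is_trivial(T, language))  # False False: ...but non-trivial
# Under a classical base logic, T would explode into the whole language;
# a paraconsistent base logic is what keeps such a theory non-trivial.
```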
1.4 Paraconsistent Annotated Logic
The Paraconsistent Annotated logics are a family of non-classical logics, initially employed in logic programming by Subrahmanian (Subrahmanian, V.S., "On the semantics of quantitative logic programs", Proc. 4th IEEE Symposium on Logic Programming, Computer Society Press, Washington D.C., 1987). Later, Blair and Subrahmanian (Blair, H.A. and Subrahmanian, V.S., "Paraconsistent Foundations for Logic Programming", Journal of Non-Classical Logic, 5, 2, 45-73, 1988) built a general theory of annotated programming and reached applications in databases containing contradictions. Other researchers have extended the idea and used it to reason about inheritance networks. A study of the fundamentals of the logic subjacent to the investigated programming languages became convenient, due to the applications obtained. It was verified that this logic is paraconsistent and that, in some cases, it also has paracomplete and non-alethic characteristics. The first studies on the fundamentals of paraconsistent annotated logic were carried out by Da Costa, Subrahmanian and Vago (Da Costa, N.C.A., Subrahmanian,
V.S., and Vago, C., "The Paraconsistent Logic PT", Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, Vol. 37, pp. 139-148, 1991). Some interesting work in this area appeared in the 90s, such as that by Abe (Abe, J.M., "Some Aspects of Paraconsistent Systems and Applications", Logique et Analyse, 157 (1997), 83-96), where the logic of predicates, model theory, annotated set theory and some modal systems were studied. A systematic study of the fundamentals of the annotated theories pointed out in previous work was established. In particular, metatheorems of strong and weak completeness for a subclass of annotated logics were achieved in that work, together with a systematic study of annotated model theory, generalizing most of the standard results for annotated systems.
1.4.1 Representation of Paraconsistent Annotated Logic (PAL)
Knowledge of language theory is of great importance for the investigation of problems in science. A good solution to a question may often depend on choosing or finding a convenient language to adequately represent the concepts involved, as well as on making sensible inferences until satisfactory solutions are reached. Concerning applications, when we closely observe an information set on a certain topic we wish to analyze, such a set may bring contradictory information and present difficulty in describing vague concepts. In the case of contradiction, it is usually either removed artificially, to avoid contaminating the data set, or treated apart, by using extra-logical devices. However, contradiction most often contains decisive information, because it is like the meeting of two lines of opposite truth-values. Hence, neglecting contradictions is behaving in an anachronistic way, and we must search for languages which coexist with such contradictions without disturbing the remaining information. As to the concept of uncertainty, we should obviously think of a language able to capture 'the maximum information' from the concept. To achieve such a language, one must comprehend the concepts of uncertainty, inconsistency and paracompleteness in its linguistic structure and reason (mechanically) on them. With this representation, the language should reach, capture and better ponder the nuances of reality in ways other than the traditional one. Hence, we become equipped with a language and a deductive frame suitable for understanding problems under different angles, and this enables innovative solutions.
Following this thought, an analysis which uses the concepts of paraconsistent logic considers the existence of inconsistency and paracompleteness. Thus, along with the notions of truth and falsehood, we may think of four objects:
T – called Inconsistent
t – called True
F – called False
⊥ – called Paracomplete or Indeterminate
Such objects are also called constants of annotation. On the set of these objects, τ = {T, t, F, ⊥}, we place a mathematical structure: a lattice with operator, τ = ⟨|τ|, ≤, ∼⟩, which may be characterized by the Hasse diagram in Figure 1.1. The operator on τ is ∼: |τ| → |τ|, which operates, intuitively, as follows:
∼T = T (the 'negation' of an inconsistent proposition is inconsistent)
∼t = F (the 'negation' of a 'true' proposition is 'false')
∼F = t (the 'negation' of a 'false' proposition is 'true')
∼⊥ = ⊥ (the 'negation' of a 'paracomplete' proposition is 'paracomplete')
The operator ∼ will play the role of the negation connective of PAL, as will be seen ahead. The propositions of PAL are of the type Pμ, where P is a proposition in the common sense and μ is a constant of annotation.
[Figure 1.1: the four-vertex Hasse lattice, with T at the top, t and F at the sides, and ⊥ at the bottom.]
Among several intuitive readings, Pμ may be read: ‘I believe in proposition P with degree up to μ’ or ‘The favorable evidence expressed by proposition P is at most μ’.
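The finite structure just described is small enough to transcribe directly. The sketch below (ours, with 'B' standing in for ⊥) encodes the operator ∼ and the order depicted by the Hasse diagram of Figure 1.1:

```python
# The four constants of annotation and the epistemic negation ~ of PAL.
# 'B' (bottom) stands for the paracomplete/indeterminate constant ⊥.
NEG = {"T": "T", "t": "F", "F": "t", "B": "B"}

# Order read off the Hasse diagram: ⊥ below t and F, both below T.
ORDER = {("B", "t"), ("B", "F"), ("t", "T"), ("F", "T"), ("B", "T")}
def leq(x, y):
    return x == y or (x, y) in ORDER

print(NEG["t"], NEG["T"])            # F T : ~t = F and ~T = T, as above
print(leq("B", "T"), leq("t", "F"))  # True False: t and F are incomparable
```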
1.4.2 First-Order Paraconsistent Annotated Logic Language
In this section we present an axiomatization of annotated logics, extending the previous topic and now considering an arbitrary lattice. It is worth mentioning that annotated logics are paraconsistent and, in general, paracomplete and non-alethic, as shown ahead. Let τ = ⟨|τ|, ≤, ∼⟩ be a finite lattice with a fixed operator. Such a lattice is called a truth-values lattice, and the operator ∼ constitutes the "meaning" of the negation symbol ¬ of the logical system to be considered. We will use Lτ as the symbol of this language. We also have the following symbols associated to the lattice τ:
• T indicates the maximum of τ;
• ⊥ indicates the minimum of τ;
• sup indicates the Supremum operation with respect to subsets of τ;
• inf indicates the Infimum operation with respect to subsets of τ.
The language Lτ has the following primitive symbols:
1. Individual variables: a denumerable set of individual variables;
2. For every n, n-ary functional symbols. The 0-ary functional symbols are also called individual constants;
3. For every n, n-ary predicate symbols;
4. The equality symbol =;
5. Every member of τ as a constant of annotation;
6. The symbols ¬, ∧, ∨, →, ∃ and ∀;
7. Auxiliary symbols: (, ) and ,.
The terms of the language Lτ are defined as usual; a, b, c and d — with or without indexes — are used as meta-variables for terms.
Definition 1. [Formula] A basic formula is an expression of the kind p(a1, … , an), where p is an n-ary predicate symbol and a1, … , an are terms of Lτ. If p(a1, … , an) is a basic formula and μ ∈ τ is a constant of annotation, then pμ(a1, … , an) and a = b (where a and b are terms) are called atomic formulas. The formulas have the following generalized inductive definition:
1. An atomic formula is a formula;
2. If A is a formula, then ¬A is a formula;
3. If A and B are formulas, then A ∧ B, A ∨ B and A → B are formulas;
4. If A is a formula and x is an individual variable, then (∃x)A and (∀x)A are formulas;
5. An expression of Lτ constitutes a formula if and only if it is obtained by applying one of the previous rules 1 to 4.
The formula ¬A is read "the negation — or weak negation — of A"; A ∧ B, "the conjunction of A and B"; A ∨ B, "the disjunction of A and B"; A → B, "the implication of B by A"; (∃x)A, "an instantiation of A by x"; and (∀x)A, "a generalization of A by x". Some defined symbols are introduced:
Definition 2. [Equivalence and Strong Negation] Let A and B be any formulas of Lτ. We define: A ↔ B =def (A → B) ∧ (B → A) and ¬*A =def A → ((A → A) ∧ ¬(A → A)). The symbol ¬* is called strong negation; therefore, ¬*A must be read as the strong negation of A. The formula A ↔ B is read, as usual, as the equivalence of A and B.
Definition 3. Let A be a formula. Then ¬⁰A indicates A; ¬¹A indicates ¬A; and ¬ᵏA indicates ¬(¬ᵏ⁻¹A), (k ∈ N, k > 0). Also, if μ ∈ |τ|, it is agreed that ∼⁰μ indicates μ; ∼¹μ indicates ∼μ; and ∼ᵏμ indicates ∼(∼ᵏ⁻¹μ), (k ∈ N, k > 0).
Definition 4. [Literal] Let pμ(a1, … , an) be an atomic formula. Any formula of the kind ¬ᵏpμ(a1, … , an) (k ≥ 0) is called a hyper-literal formula or, simply, a literal. The other formulas are called complex formulas.
We will now provide a semantic description for the languages Lτ.
Definition 5. [Structure] A structure υ for a language Lτ consists of the following objects:
1. A nonempty set |υ|, denominated the universe of υ. The elements of |υ| are called individuals of υ.
2. For every n-ary function symbol f of Lτ, an n-ary operation fυ of |υ| in |υ|; in particular, for every individual constant e of Lτ, eυ is an individual of υ.
3. For every n-ary predicate symbol p of Lτ, a function pυ: |υ|ⁿ → |τ|.
Let υ be a structure for Lτ. The diagram language Lτ(υ) is obtained as usual. Given a variable-free term a of Lτ(υ), the individual υ(a) of υ is also defined as usual; i and j are used as meta-variables to denote names. We now define the truth value υ(A) of a closed formula A of Lτ(υ). The definition is obtained by induction on the length of A. Abusing the language, we use the same symbols of the diagram-language terms as meta-variables.
Definition 6. Let A be a closed formula and υ an interpretation for Lτ.
1. If A is atomic of the kind pμ(a1, … , an), then υ(A) = 1 if and only if pυ(υ(a1), …, υ(an)) ≥ μ; υ(A) = 0 if and only if it is not the case that pυ(υ(a1), …, υ(an)) ≥ μ.
2. If A is atomic of the kind a = b, then υ(A) = 1 if and only if υ(a) = υ(b); υ(A) = 0 if and only if υ(a) ≠ υ(b).
3. If A is of the kind ¬ᵏ(pμ(a1, … , an)) (k ≥ 1), then υ(A) = υ(¬ᵏ⁻¹(p∼μ(a1, … , an))).
4. Let A and B be any closed formulas. Then υ(A ∧ B) = 1 if and only if υ(A) = υ(B) = 1; υ(A ∨ B) = 1 if and only if υ(A) = 1 or υ(B) = 1; υ(A → B) = 1 if and only if υ(A) = 0 or υ(B) = 1.
5. If A is a complex closed formula, then υ(¬A) = 1 - υ(A).
6. If A is of the kind (∃x)B, then υ(A) = 1 if and only if υ(Bx[i]) = 1 for some i in Lτ(υ).
7. If A is of the kind (∀x)B, then υ(A) = 1 if and only if υ(Bx[i]) = 1 for every i in Lτ(υ).
Theorem 1. Let A, B, C be any formulas of Lτ(υ). The connectives →, ∧, ∨, ¬*, together with the quantifiers ∀ and ∃, have all the properties of classical implication, disjunction, conjunction and negation, as well as of the classical ∀ and ∃ quantifiers, respectively. For example (with the usual restrictions on free variables), we have:
1. ¬*∀xA ↔ ∃x¬*A
2. ∃xB ∨ C ↔ ∃x(B ∨ C)
3. ∃xB ∨ ∃xC ↔ ∃x(B ∨ C)
4. ∀xA ↔ ¬*∃x¬*A
5. ∃xA ↔ ¬*∀x¬*A
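For a quantifier-free fragment, Definition 6 can be turned into a short evaluator. In the sketch below (ours), annotations are reals in [0, 1] and we assume ∼μ = 1 − μ as the lattice operator, purely for illustration; the definition itself works for any lattice ⟨|τ|, ≤, ∼⟩:

```python
# A sketch of Definition 6 on a quantifier-free fragment.
# Formulas are tuples; interp maps predicate names to values in [0, 1].
def v(interp, f):
    kind = f[0]
    if kind == "atom":                      # ('atom', p, mu): clause 1
        _, p, mu = f
        return 1 if interp[p] >= mu else 0
    if kind == "neg_atom":                  # ¬^k p_mu: clause 3
        _, k, p, mu = f
        if k == 0:
            return v(interp, ("atom", p, mu))
        return v(interp, ("neg_atom", k - 1, p, 1.0 - mu))  # assume ~mu = 1 - mu
    if kind == "and":                       # clause 4
        return min(v(interp, f[1]), v(interp, f[2]))
    if kind == "or":
        return max(v(interp, f[1]), v(interp, f[2]))
    if kind == "imp":
        return max(1 - v(interp, f[1]), v(interp, f[2]))
    raise ValueError(f"unknown formula kind: {kind}")

I = {"p": 0.5}
print(v(I, ("atom", "p", 0.5)))         # 1
print(v(I, ("neg_atom", 1, "p", 0.5)))  # 1: P(0.5) and ¬P(0.5) both true
```

The last two lines reproduce, in code, the true contradiction discussed for P(0.5) in Section 1.4.3.1 below.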
The postulate system – axiom schemes and rules of inference – for Lτ(υ), presented next, will be denominated Aτ. A, B, C denote any formulas; F and G denote complex formulas; p denotes a propositional variable; μ and μj, 1 ≤ j ≤ n, denote constants of annotation; x, x1, …, xn, y1, …, yn are individual variables.
(→1) A → (B → A)
(→2) (A → (B → C)) → ((A → B) → (A → C))
(→3) ((A → B) → A) → A
(→4) A, A → B / B (Modus Ponens, or simply MP)
(∧1) (A ∧ B) → A
(∧2) (A ∧ B) → B
(∧3) A → (B → (A ∧ B))
(∨1) A → (A ∨ B)
(∨2) B → (A ∨ B)
(∨3) (A → C) → ((B → C) → ((A ∨ B) → C))
(¬1) (F → G) → ((F → ¬G) → ¬F)
(¬2) F → (¬F → A)
(¬3) F ∨ ¬F
(τ1) p⊥
(τ2) (¬ᵏpμ) ↔ (¬ᵏ⁻¹p∼μ), k ≥ 1
(τ3) pμ → pλ, where μ ≥ λ
(τ4) pμ1 ∧ pμ2 ∧ … ∧ pμn → pμ, where μ = sup{μj : j = 1, 2, … , n}
(∀1) B → A(x) / B → ∀xA(x)
(∀2) ∀xA(x) → A(t)
(∃1) A(t) → ∃xA(x)
(∃2) A(x) → B / ∃xA(x) → B
(=1) x = x
(=2) x1 = y1 → … → xn = yn → f(x1,…,xn) = f(y1, …, yn)
(=3) x1 = y1 → … → xn = yn → (pμ(x1,…,xn) → pμ(y1, …, yn))
with the usual restrictions.
Theorem 2. Aτ is paraconsistent if and only if #τ ≥ 2 (the symbol # indicates the cardinal number of τ).
Theorem 3. If Aτ is paracomplete, then #τ ≥ 2. If #τ ≥ 2, there are Aτ systems that are paracomplete and Aτ systems that are not.
Theorem 4. If Aτ is non-alethic, then #τ ≥ 2. If #τ ≥ 2, there are Aτ systems which are non-alethic and Aτ systems which are not.
Consequently, we see that the Aτ systems are, in general, paraconsistent, paracomplete and non-alethic.
Theorem 5. The calculus Aτ is non-trivial.
Further details may be found in the References, where correctness and completeness theorems are demonstrated for the calculus Aτ when the lattice is finite. (When the lattice is infinite, scheme (τ4) leads to an infinitary logic, which remains to be investigated.)
1.4.3 A Single-Valued Paraconsistent Annotated Logic
Let us begin with some definitions for the single-valued paraconsistent annotated logic.
1.4.3.1 Definitions
Definition 1. [Strong Negation and Equivalence] Let A and B be any formulas. We define: A ↔ B =def (A → B) ∧ (B → A) and ╖A =def A → ((A → A) ∧ ¬(A → A)). The symbol ╖ is called strong negation; therefore, ╖A must be read as the strong negation of A. The formula A ↔ B is read, as usual, as the equivalence of A and B.
Definition 2. Let A be a formula. Then ¬⁰A indicates A; ¬¹A indicates ¬A; and ¬ᵏA indicates ¬(¬ᵏ⁻¹A), (k ∈ ℕ, k > 0), where ℕ = {0, 1, 2, ...} indicates the set of natural numbers. Also, if μ ∈ τ, we have: ∼⁰μ indicates μ; ∼¹μ indicates ∼μ; and ∼ᵏμ indicates ∼(∼ᵏ⁻¹μ), (k ∈ ℕ, k > 0).
Definition 3. [Literal] If P is a propositional symbol and λ is a constant of annotation, then a formula of the kind ¬¬…¬Pλ, with k occurrences of ¬ (abbreviated ¬ᵏPλ, k ≥ 0), is
called a hyper-literal (or simply a literal); the other formulas are called complex. We will now introduce the interpretation concept for PAL.
Definition 4. [Interpretation] Let P be the set of propositional symbols. An interpretation for PAL is a function I: P → |τ|. Given an interpretation I, we associate a valuation VI: F → {0, 1} defined as follows:
1. If p ∈ P and μ ∈ |τ|, then VI(pμ) = 1 if and only if I(p) ≥ μ, and VI(pμ) = 0 if and only if it is not the case that I(p) ≥ μ.
2. If A is of the kind ¬ᵏpμ (k ≥ 1), then VI(¬ᵏ(pμ)) = VI(¬ᵏ⁻¹(p∼μ)).
Let A and B be any formulas. Then,
3. VI(A ∧ B) = 1 if and only if VI(A) = VI(B) = 1.
4. VI(A ∨ B) = 1 if and only if VI(A) = 1 or VI(B) = 1.
5. VI(A → B) = 1 if and only if VI(A) = 0 or VI(B) = 1.
If A is a complex formula, then
6. VI(¬A) = 1 - VI(A).
From condition 1 we have VI(Pμ) = 1 if and only if I(P) ≥ μ; or better, Pμ is true according to interpretation I if the interpretation given to P, I(P), is greater than or equal to "my belief value" μ with respect to proposition P. It is false otherwise. We can show that there are interpretations I and propositions Pμ such that VI(Pμ) = 1 and VI(¬Pμ) = 1; or better, we have true contradictions in this logic. This is intuitive if we consider propositions like P(0.5): its negation ¬P(0.5) is equivalent to P∼(0.5), which is again P(0.5). Now, if P(0.5) is true, then clearly its negation is also true; if it is false, its negation is also false.
1.4.3.2 Single-Valued Annotation Lattice Associated to Paraconsistent Annotated Logic (PAL1v)
As seen before, paraconsistent annotated logic may be represented in a particular way through a Hasse lattice. Intuitively, the constants of annotation on the vertices of the lattice denote extreme logical states of the propositions. The annotation may be composed of 1, 2 or n values, depending on the class of paraconsistent logic used. The lattice associated to Paraconsistent Annotated Logic has the following definition:
1. |τ| is a non-empty finite set.
2. ≤ is a relation of order on |τ|.
3. There is always a Supremum and an Infimum for any two elements of |τ|.
4. On |τ| an operation ∼ is defined, called epistemic negation, which has the same practical and intuitive meaning as the negation ¬.
As an example, let us take τ = ⟨|τ|, ≤⟩ as a fixed lattice, where:
1. |τ| = [0, 1] × [0, 1];
2. ≤ = {((μ1, ρ1), (μ2, ρ2)) ∈ ([0, 1] × [0, 1])² | μ1 ≤ μ2 and ρ1 ≤ ρ2} (where ≤ between real numbers indicates their usual order).
We may consider that every Degree of Evidence attributed to the proposition is a value contained in the set composed by the constants of annotation of the lattice {T, t, F, ⊥}, for which the following relation of order is defined: ⊥ < t, ⊥ < F, t < T and F < T. Therefore, the supremum is T and the infimum is ⊥. We also employ the other terminologies and symbols already seen. We saw that the annotation of paraconsistent annotated logic is defined through an intuitive analysis where the atomic formula Pμ is read as: "I believe in proposition P with Degree of Evidence at most μ, or up to μ (≤ μ)"; this leads us to consider the Degree of Evidence as a constant of annotation belonging to the lattice. In this case, every sentence annotated by the lattice has the following meaning:
P(t) ⇒ the sentence P is true;
P(F) ⇒ the sentence P is false;
P(T) ⇒ the sentence P is inconsistent;
P(⊥) ⇒ the sentence P is paracomplete or indeterminate.
The propositional sentence is accompanied by a Degree of Evidence that attributes the connotation of "Truth", "Falsehood", "Inconsistency" or "Indetermination" to the proposition. Therefore, a propositional sentence associated to the lattice of Paraconsistent Annotated Logic is read as follows:
PT ⇒ "The annotation or Degree of Evidence T denotes inconsistency to proposition P".
Pt ⇒ "The annotation or Degree of Evidence t denotes truth to proposition P".
PF ⇒ "The annotation or Degree of Evidence F denotes falsehood to proposition P".
P⊥ ⇒ "The annotation or Degree of Evidence ⊥ denotes indetermination to proposition P".
In Paraconsistent Annotated Logic (PAL) such a lattice is called a truth-value lattice. At each one of its vertices we consider one unique annotation, which will represent, in the paraconsistent analysis, the Degree of Evidence attributed to the proposition. With this consideration, we can study PAL represented by a four-vertex lattice, as in Figure 1.2. In this representation, the propositions are accompanied by annotations, which in turn attribute the corresponding Degree of Evidence to each propositional variable.
[Figure 1.2: Hasse finite lattice, with vertices T = Inconsistent, t = True, F = False, ⊥ = Paracomplete or Indeterminate.]
The operator on τ is ∼: |τ| → |τ|, defined as: ∼(1) = 0, ∼(0) = 1, ∼(T) = T, ∼(⊥) = ⊥. Intuitively, ∼ has the "meaning" of the negation of Paraconsistent Annotated Logic, and the annotations in this lattice are valued with a real number in the closed interval [0, 1], following the rules determined by the Hasse diagram.
1.5 Paraconsistent Annotated Logic with Annotation of Two Values (PAL2v)
The annotation may be composed of 1, 2 or n values. We may achieve a richer representation of how much the annotation, or evidence, expresses about the knowledge of proposition P if, instead of a single value, the annotation is formed by an ordered pair. Thus we may use a lattice formed by ordered pairs, such that: τ = {(μ, λ) | μ, λ ∈ [0, 1] ⊂ ℜ}. In this case, an operator ∼: |τ| → |τ| is fixed. In the same way, the operator ∼ constitutes the "meaning" of the logical negation symbol ¬ of the system to be considered, and the other values of the lattice are:
• ⊥ indicates the minimum of τ = (0, 0);
• T indicates the maximum of τ = (1, 1);
• sup indicates the supremum operation;
• inf indicates the infimum operation.
A four-vertex lattice associated to Paraconsistent Annotated Logic with annotation of two values (PAL2v) may be represented according to Figure 1.3. The first element of the ordered pair (μ) represents the Degree in which the favorable evidences support proposition P, and the second element (λ) represents the Degree in which the unfavorable or contrary evidences negate or reject proposition P.

[Figure 1.3: Four-vertex Hasse lattice for annotations P(μ, λ), with T = Inconsistent = P(1, 1), t = True = P(1, 0), F = False = P(0, 1), ⊥ = Paracomplete = P(0, 0).]
This way, the intuitive epistemological idea of associating an annotation (μ, λ) to a proposition P means that the favorable Degree of Evidence in P is μ, whereas the unfavorable or contrary Degree of Evidence is λ. For example, intuitively, in such a lattice we have the annotations:
(1, 0) indicating 'existence of total favorable evidence and null unfavorable evidence';
(0, 1) indicating 'existence of null favorable evidence and total unfavorable evidence';
(1, 1) indicating 'existence of total favorable evidence and total unfavorable evidence';
(0, 0) indicating 'existence of null favorable evidence and null unfavorable evidence'.
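Since the PAL2v annotations form the lattice [0, 1] × [0, 1] under the componentwise order given earlier, its basic operations are immediate to encode; the sketch below (ours) checks the extreme annotations just listed:

```python
# The PAL2v truth-value lattice: pairs (mu, lam) ordered componentwise.
def leq(a, b):
    return a[0] <= b[0] and a[1] <= b[1]

def sup(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]))

def inf(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]))

BOTTOM, TOP = (0.0, 0.0), (1.0, 1.0)   # ⊥ = P(0,0) and T = P(1,1)
TRUE, FALSE = (1.0, 0.0), (0.0, 1.0)   # t = P(1,0) and F = P(0,1)

print(sup(TRUE, FALSE) == TOP)     # True: the supremum of t and F is T
print(inf(TRUE, FALSE) == BOTTOM)  # True: their infimum is ⊥
print(leq(TRUE, FALSE))            # False: t and F are incomparable
```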
1.5.1 PAL2v Language Primitive Symbols
The primitive symbols of the PAL2v language are:
1. Propositional symbols: p, q, r, ...;
2. Connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunction) and → (implication);
3. Every member of τ as a constant of annotation: (μ1, λ1), (μ2, λ2), ...;
4. Auxiliary symbols: (, ), {, }, [ and ].
Definition 1. [Expression] An expression is any finite sequence of symbols of the vocabulary. For example, the following are expressions:
1. ¬∧))ppqp
2. →
3. (p(μ1, λ1) ∨ q(μ2, λ2))
Intuitively, the expressions in 1 and 2 are meaningless, whereas 3 "means something". We need, thus, to characterize the expressions relevant for our discourse. Such expressions compose the grammar of PAL2v.
Definition 2. [Formula] Formulas are obtained from the following generalized inductive definition:
1. If P is a propositional symbol and (μ, λ) ∈ τ is a constant of annotation, then P(μ, λ) is a formula (atomic).
2. If P is a propositional symbol and (λ, μ) ∈ τ is a constant of annotation, then P(λ, μ) is a formula (atomic).
3. If A and B are any formulas, then (¬A), (A ∧ B), (A ∨ B) and (A → B) are formulas.
4. An expression constitutes a formula if and only if it is obtained by the application of the previous rules.
Intuitively, the formula P(μ, λ) is read as: "I believe in P with favorable evidence up to μ and unfavorable evidence up to λ". The formula (¬A) is read "the negation — or weak negation — of A"; (A ∧ B), "the conjunction of A and B"; (A ∨ B), "the disjunction of A and B"; (A → B), "the implication of B by A".
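Definition 2 suggests a direct machine representation of PAL2v formulas; the sketch below (our encoding, not the book's) builds the syntax tree of a small formula and produces the intuitive reading of an atomic one:

```python
# PAL2v formulas as a small syntax tree, following Definition 2.
from dataclasses import dataclass

@dataclass
class Atom:                 # P(mu, lam): annotated propositional symbol
    name: str
    mu: float               # favorable Degree of Evidence
    lam: float              # unfavorable Degree of Evidence
    def reading(self):
        return (f"I believe in {self.name} with favorable evidence up to "
                f"{self.mu} and unfavorable evidence up to {self.lam}")

@dataclass
class Not:                  # (¬A), the weak negation
    sub: object

@dataclass
class And:                  # (A ∧ B); Or and Implies would be analogous
    left: object
    right: object

f = And(Atom("p", 0.9, 0.3), Not(Atom("q", 0.3, 1.0)))
print(f.left.reading())
```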
1.5.2 Considerations on the Lattice Associated to Paraconsistent Annotated Logic with Annotation of Two Values (PAL2v)
Consider, then, the Hasse lattice with two-valued annotation, where τ = {(μ, λ) | μ, λ ∈ [0, 1] ⊂ ℜ}. If P is a basic formula, the operator ∼: |τ| → |τ| is now defined as ∼[(μ, λ)] = (λ, μ), where μ, λ ∈ [0, 1] ⊂ ℜ, and we consider (μ, λ) as an annotation of P. PAL2v is associated to the four-vertex lattice as represented in Figure 1.4(a), and the corresponding annotation, composed of two values, is presented at the vertices of the lattice (Figure 1.4(b)).

[Figure 1.4: Representative lattice of Paraconsistent Annotated Logic with annotation of two values (PAL2v): (a) extreme values T, t, F, ⊥; (b) the corresponding annotations (1, 1), (1, 0), (0, 1), (0, 0), read as (favorable Degree of Evidence, unfavorable Degree of Evidence).]
We verify that at each one of its vertices there is a symbol corresponding to an extreme logical state, as studied previously. We may relate the extreme logical states represented in the four-vertex lattice to the values of the favorable and unfavorable Degrees of Evidence:
PT = P(1, 1) ⇒ the annotation, composed of favorable and unfavorable Degrees of Evidence, attributes to proposition P the intuitive reading that P is inconsistent.
Pt = P(1, 0) ⇒ the annotation attributes to proposition P the intuitive reading that P is true.
PF = P(0, 1) ⇒ the annotation attributes to proposition P the intuitive reading that P is false.
P⊥ = P(0, 0) ⇒ the annotation attributes to proposition P the intuitive reading that P is indeterminate.
Considering the values of the signals which now compose the annotations, an analysis leads us to the value of the resulting logical states.
---------------------------
Example 1.1 Consider the information, originated from the analysis of two specialists, on the health of a patient. It will compose the annotation of the proposition P ≡ "The patient is stricken by pneumonia". Assume that specialist 1 produces the value referring to the favorable Degree of Evidence and specialist 2 produces the value referring to the unfavorable Degree of Evidence. Comment on each one of the annotations built from the information given by the specialists:
Annotation 1 = (1, 0)
Annotation 2 = (0, 1)
Annotation 3 = (1, 1)
Annotation 4 = (0, 0)
Resolution: For annotation (1, 0), the intuitive reading will be "the patient is stricken by pneumonia, with total favorable evidence". Considering the proposition, this establishes a true logical state, since specialist 1 presents favorable Degree of Evidence μ = 1 and specialist 2 presents unfavorable Degree of Evidence λ = 0 to compose the annotation (μ, λ).
For annotation (0, 1), the intuitive reading will be "the patient is stricken by pneumonia, with total unfavorable evidence". Considering the proposition, this establishes a false logical state, since specialist 1 presents favorable Degree of Evidence μ = 0 and specialist 2 presents unfavorable Degree of Evidence λ = 1 to compose the annotation (μ, λ).
For annotation (1, 1), the intuitive reading will be "the patient is stricken by pneumonia, with totally contradictory favorable and unfavorable evidence values". Considering the proposition, this establishes an inconsistent logical state, since specialist 1 presents favorable Degree of Evidence μ = 1 and specialist 2 presents unfavorable Degree of Evidence λ = 1 to compose the annotation (μ, λ).
For annotation (0, 0), the intuitive reading will be "the patient is stricken by pneumonia, with null favorable and null unfavorable evidence". Considering the proposition, this establishes a Paracomplete or Indeterminate logical state, since specialist 1 presents favorable Degree of Evidence μ = 0 and specialist 2 presents unfavorable Degree of Evidence λ = 0 to compose the annotation (μ, λ).
---------------------------
Example 1.2 Let P ≡ "the weather will be rainy tomorrow" be a proposition. The information comes from two weather forecast Institutes. Assume Institute A as the generator of the favorable evidence, and comment on each one of the annotations generated by the information from the Institutes:
First day = (0.9, 0.3)
Second day = (0.3, 1.0)
Third day = (0.7, 0.8)
Resolution: For the first day, where we read P(0.9, 0.3), the description is: "I believe the weather will be rainy tomorrow with favorable evidence up to 90% and unfavorable evidence up to 30%". A certain Degree of Contradiction is present.
For the second day, where we read P(0.3, 1.0), the description is: "I believe the weather will be rainy tomorrow with favorable evidence up to 30% and unfavorable evidence up to 100%". A certain Degree of Contradiction, different from that of the previous day, is present.
For the third day, where we read P(0.7, 0.8), the description is: "I believe the weather will be rainy tomorrow with favorable evidence up to 70% and unfavorable evidence up to 80%". A certain Degree of Contradiction, different from those of the two previous days, is present.
---------------------------
We see that, in case we have a conflicting (inconsistent) belief, the values of the favorable and unfavorable Degrees of Evidence may vary in the closed interval between 0 and 1, and we may get different Degrees of Contradiction. This can happen if, for example, Institute A forecasts rainy weather for tomorrow, but Institute B, using other methods, predicts good weather for tomorrow.
---------------------------
Example 1.3 Consider the information from the two Institutes of the previous example referring to the proposition P ≡ "The weather will be rainy tomorrow", where we read P(0.5, 0.5). Describe the meaning of this annotation.
Resolution: The description will be: "I believe the weather will be rainy tomorrow with favorable evidence up to 50% and unfavorable evidence up to 50%". We can see that, in case we have an undefined belief, the values of the favorable and unfavorable Degrees of Evidence fail to express anything about the proposition. We call this a Degree of Indefinition.
---------------------------
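The Degrees of Contradiction and of Indefinition noted in Examples 1.2 and 1.3 can be quantified. The next chapter formalizes the equations; the sketch below takes the usual PAL2v formulas Gc = μ − λ (certainty) and Gct = μ + λ − 1 (contradiction) as working assumptions:

```python
# Quantifying the examples above, assuming the usual PAL2v degrees:
# certainty Gc = mu - lam and contradiction Gct = mu + lam - 1.
def degrees(mu, lam):
    gc = mu - lam        # +1 at t = (1,0), -1 at F = (0,1)
    gct = mu + lam - 1   # +1 at T = (1,1), -1 at ⊥ = (0,0)
    return gc, gct

for day, (mu, lam) in [("first", (0.9, 0.3)),
                       ("second", (0.3, 1.0)),
                       ("third", (0.7, 0.8))]:
    gc, gct = degrees(mu, lam)
    print(f"{day} day: certainty {gc:+.1f}, contradiction {gct:+.1f}")

print(degrees(0.5, 0.5))  # (0.0, 0.0): the Degree of Indefinition of Ex. 1.3
```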
1.5.3 The Logic Negation of PAL2v
Let P(μ, λ) be an annotated proposition; then ¬P(μ, λ) = P(λ, μ).
Take, for example, the proposition P ≡ "The student passed the exam". Suppose the annotation that expresses knowledge about this proposition comes with total favorable evidence and null unfavorable evidence, in the annotated form P(1, 0). Analyzing its logic negation, we notice that it is the same as saying "I believe the student passed the exam with null favorable evidence and total unfavorable evidence", that is: ¬P(1, 0) ↔ P∼(1, 0) ↔ P(0, 1). Thus, it is easy to notice that ¬P(μ, λ) is equivalent to P(λ, μ), which in turn is equivalent to P∼(μ, λ). Therefore, the negation of P(μ, λ) is the same proposition P with inverted Degrees of Evidence in the annotation: P(λ, μ).
---------------------------
Example 1.4 Let P be the proposition "The patient is stricken by the flu", whose annotation, originated from two specialists, is (0.7, 0.2). For this annotation, describe its meaning and its logic negation.
Resolution: We have the annotation and its logic negation:
Annotation = (0.7, 0.2)
Logic negation = (0.2, 0.7)
We read P(0.7, 0.2) ↔ ¬P(0.2, 0.7) as: "I believe the patient is stricken by the flu with favorable evidence up to 70% and unfavorable evidence up to 20%", which is equivalent to saying that it is not the case that "I believe the patient is stricken by the flu with favorable evidence up to 20% and unfavorable evidence up to 70%".
---------------------------
There is, thus, a natural operator defined on τ which plays the role of the annotated logic negation connective: ∼: |τ| → |τ|, ∼(μ, λ) = (λ, μ). This denotes an important property of PAL2v: we can consider the propositions ¬P(μ, λ) and P(λ, μ) equivalent or, in other terminology, ¬P(μ, λ) ↔ P∼(μ, λ).
---------------------------
Example 1.5 Let P ≡ "The student passed the exam" be a proposition whose annotation comes from the student's information and from his classmates. Describe the meaning of its logic negation for each of the following annotations:
(1, 0)
(1, 1)
(0, 0)
(0.5, 0.5)
Resolution: An intuitive reading of the negation of proposition P(1, 0) is "I believe the student passed the exam with null favorable evidence and total unfavorable evidence".
An intuitive reading of the negation of proposition P(1, 1) is "I believe the student passed the exam with total favorable evidence and total unfavorable evidence"; in this case, the negation of an inconsistent proposition continues to be inconsistent.
An intuitive reading of the negation of proposition P(0, 0) is "I believe the student passed the exam with null favorable evidence and null unfavorable evidence"; in this case, the negation of a paracomplete or indeterminate proposition is still paracomplete.
An intuitive reading of the negation of proposition P(0.5, 0.5) is "I believe the student passed the exam with favorable evidence up to 50% and unfavorable evidence up to 50%"; that is, it carries the same connotation as the proposition itself.
---------------------------
In this last case we have an indefinite belief, and its negation is the same indefinite proposition.
---------------------------
Example 1.6 Let P ≡ "The patient is stricken by pneumonia" be a proposition whose annotations come from specialists. Describe the meaning of its logic negation for each of the following annotations:
(0.9, 0.3)
(0, 1)
(1, 1)
Resolution: An intuitive reading of the negation of proposition P(0.9, 0.3) is "I believe the patient is stricken by pneumonia with favorable evidence up to 30% and unfavorable evidence up to 90%".
An intuitive reading of the negation of proposition P(0, 1) is "I believe the patient is stricken by pneumonia with total favorable evidence and null unfavorable evidence" (= I believe the patient is stricken by pneumonia).
An intuitive reading of the negation of proposition P(1, 1) is "I believe the patient is stricken by pneumonia with favorable evidence up to 100% and unfavorable evidence up to 100%". Thus, in this case we have a completely conflicting (inconsistent) belief, and its negation is the same inconsistent proposition.
---------------------------
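The logic negation just exemplified is a one-line operation on annotations; the following sketch (ours) reproduces readings from Examples 1.4 to 1.6:

```python
# PAL2v logic negation: swap the favorable and unfavorable evidences.
def neg(annotation):
    mu, lam = annotation
    return (lam, mu)

print(neg((0.7, 0.2)))  # (0.2, 0.7): the flu annotation of Example 1.4
print(neg((1.0, 1.0)))  # (1.0, 1.0): an inconsistent proposition stays
                        # inconsistent under negation (Example 1.6)
print(neg((0.5, 0.5)))  # (0.5, 0.5): an indefinite belief is its own
                        # negation (Example 1.5)
```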
1.6 Final Remarks
In this chapter we have presented the main concepts of Paraconsistent Annotated Logic, which are fundamental for the construction of Paraconsistent Analysis Systems for Uncertainty Treatment and of Paraconsistent Artificial Neural Networks. In the brief description of the history of Paraconsistent Logic, we saw that Non-Classical Logics are characterized by widening, in some way, traditional logic, or by violating or limiting its principles or basic fundamentals.
The representation of Paraconsistent Annotated Logic (PAL) associated to a lattice allows the annotations on its vertices to establish the logical states, which can thus be valued and equated. This kind of PAL representation in the associated lattice yields an interpretation, and so, in an intuitive way, we may define operators like the logic negation studied in this chapter.
In the next chapter we will present the analysis method which enables the interpretation and valuation of the logical states represented in the two-valued annotation lattice of Paraconsistent Annotated Logic. This way, we may associate regions of the lattice with logical states and extract equations. These equations provide values for decision making. These procedures will allow the construction of an algorithm, based on the concepts of Paraconsistent Logic, called "Para-Analyzer". Using the concepts of
PAL2v, we can build a computational model able to treat information signals which may come impregnated with contradictions. The algorithms studied in the following chapters will give rise to Paraconsistent Analysis Systems or Nodes (PANs), which will compose the uncertainty treatment networks. The concepts obtained with the methodology that composes the PANs will build the Paraconsistent Artificial Neural Cells (PANCs), which will be used to compose Paraconsistent Artificial Neural Networks (PANnets).
Exercises
1.1 Where and when did Classical Logic appear?
1.2 What are the basic principles of Classical Logic?
1.3 Why is the formal language of Classical Logic, despite its binary reasoning, inadequate to represent certain kinds of knowledge?
1.4 Describe the situations found in the real world that make interpretation difficult for Control Systems and Expert Systems that use Classical Logic in decision making.
1.5 Describe why there are difficulties for an expert system to adequately treat information originated from Uncertain Knowledge when it uses Classical Logic.
1.6 What is understood by trivialization? Give an example.
1.7 Give a definition of Paraconsistent Logic.
1.8 What is a Paracomplete Logic?
1.9 Why is it important to consider contradiction when treating Uncertain Knowledge?
1.10 Describe the basic characteristics of the Paraconsistent Annotated Logic with annotation of two values (PAL2v).
1.11 What does "Degree of Evidence" mean in PAL2v?
1.12 How are the annotations represented in the lattice associated to PAL2v?
1.13 What does the term "annotation" mean in PAL2v? Give an example of an annotation in PAL2v.
1.14 Consider that exams done on a patient x, required by doctor M1, make you affirm that "Patient x has 72% probability of being stricken by a serious disease y". Describe the proposition and its annotation using the symbols of Paraconsistent Annotated Logic with single annotation.
1.15 Suppose that the exams done on patient x, required by doctor M1, make you affirm that "Patient x has 50% probability of being stricken by a serious disease y". Describe the proposition and its annotation using the symbols of Paraconsistent Annotated Logic with single annotation.
1.16 Consider that the exams done on patient x through a device P1 provide the doctor with information that makes him affirm that "Patient x has 68% probability of being stricken by a serious disease y". Describe the proposition and its annotation using the symbols of Paraconsistent Annotated Logic with single annotation.
1.17 Suppose that a patient receives the diagnosis from doctor M1 as follows: "The results of the exams lead me to affirm that you have 68% probability of being stricken by pneumonia". Not satisfied with the diagnosis, the patient looks for doctor M2, who, after analyzing the exams, affirms: "The exam results lead me to affirm that you have 43% probability of being stricken by pneumonia". Describe a proposition and the annotation of this analysis using the symbols of Paraconsistent Annotated Logic with annotation of two values (PAL2v).
1.18 Consider that a patient receives the diagnosis from doctor M1 as follows: "The results of the required exams lead me to affirm that you have 80% probability of being stricken by pneumonia". Not satisfied with the diagnosis, the patient looks for doctor M2, who, after analyzing it, affirms: "The results of the exams lead me to affirm that you have 80% probability of being stricken by pneumonia". Describe a proposition and the annotation of this analysis using the symbols of Paraconsistent Annotated Logic with annotation of two values (PAL2v).
1.19 Consider that a patient receives the diagnosis from doctor M1 as follows: "The results of the exams lead me to affirm that you have 50% probability of being stricken by pneumonia". Not satisfied with the diagnosis, the patient looks for doctor M2, who, after analyzing the exams, affirms: "The results lead me to affirm that you have 50% probability of being stricken by pneumonia". Describe a proposition and the annotation of the analysis using the symbols of Paraconsistent Annotated Logic with annotation of two values (PAL2v).
1.20 Wishing to express his knowledge about the possibility of an epidemic of a particular disease in a certain region, doctor M1 says: "There's 50% evidence of it occurring". a) Consider this statement and describe the proposition and annotation according to the concepts of PAL2v. b) State whether or not there is a contradiction in the doctor's statement.
1.21 Regarding the previous item, consider that doctor M1 affirms: "There's 80% evidence that an epidemic may happen in this region". a) Describe the proposition with an annotation according to the basic fundamentals of PAL2v. b) State whether or not there is a contradiction in the doctor's statement.
1.22 Wishing some other source of information about the probable epidemic, consider that another doctor, M2, has affirmed: "There's 75% evidence of an epidemic happening in this region". a) Describe the proposition with the annotation according to the basic fundamentals of PAL2v. b) State whether or not there is a contradiction in the doctors' statements.
1.23 Consider that exams done on a patient x through device P1 provided the doctor with information to affirm that "Patient x has 56% probability of being stricken by a serious disease y". Not satisfied with the reading of device P1, the doctor required new exams from device P2, which gave the information: "Patient x has 86% probability of being stricken by a serious disease y". Describe the proposition and its annotation using the symbols of PAL2v.
CHAPTER 2
Paraconsistent Annotated Logic Application Methodology

Introduction
In this chapter we present the analysis method that enables the interpretation, and the consequent valuation, of logical states represented in the lattice of Paraconsistent Annotated Logic with annotation of two values (PAL2v). The interpretation and analysis procedures shown in this chapter will permit the construction of algorithms able to treat signals extracted from an Uncertain Knowledge database. In these first considerations, the interpretation methodology is utilized for the development of an algorithm we call Para-Analyzer. The algorithm receives information in the form of Degrees of Evidence to make the paraconsistent analysis. These pieces of information, which may or may not be contradictory, are mapped onto the representative lattice of PAL2v, delimiting regions symbolized by their proximity, or lack of it, to the four vertices of the lattice. The values found are interpolated in the lattice, identifying the region in which the points resulting from the analysis for decision making are found.
2.1 Paraconsistent Logic in Uncertain Knowledge Treatment
In the area of Artificial Intelligence, to construct control or expert systems that make decisions by observing the environment, one must investigate real-world phenomena. The pieces of information extracted from these investigations are used to make predictions about their behavior, and thus the systems are set to verify the truth or falsehood of the premises. When control systems are forced to describe real-world situations, due to a number of factors, all the information needed for the analysis comes impregnated with noise, which gives it a certain degree of uncertainty. In the analyses carried out, based on information obtained in non-ideal conditions, we say that the systems deal with Uncertain Knowledge. Accordingly, the specialized literature defines Uncertain Knowledge as knowledge that is debatable. Some measures of uncertainty are associated with this knowledge; these measures describe beliefs for which there are certain supporting evidences. The characteristics of an evidential logic are suitable for treating Uncertain Knowledge, mainly because, in an analysis, the argumentations are restrained to assert that the premises constitute only partial evidence for their conclusions. The degree of credibility, or belief, that the premises grant the conclusion is considered in the analysis. In practice, it is the job of scientific research to determine the premises, while the validity or non-validity of the argumentation is determined by a logical study.
The application method of paraconsistent logic aims at implementing Logical Systems of Decision through computational programs that manipulate, and reason with, signals representative of Uncertain Knowledge information, which may be inconsistent. The methodology presented here allows applications of Paraconsistent Annotated Logic, and will first explore the fact that it has the characteristics of an Evidential Logic. Thus, the annotation will be regarded as Degrees of Evidence, and the analysis will be done considering the value of the information coming from real, uncertain sources. With the interpretation and the methods utilized in this chapter, we will be able to apply Paraconsistent Annotated Logic with annotation of two values (PAL2v) using an algorithm called Para-Analyzer. This algorithm will delimit several regions in the PAL2v lattice and determine: four extreme logical states (established in the previous chapter as True, False, Inconsistent and Indeterminate), an internal logical state called Indefinite, and other intermediary states whose quantity will depend on the desired accuracy of the analysis.
2.2 Algebraic interpretations of PAL2v
According to the previous chapter, the signals and information for the application of Paraconsistent Annotated Logic (PAL) come as annotations or Degrees of Evidence related to a given proposition. We have also seen that Paraconsistent Annotated Logic (PAL) is framed in propositional formulas which are accompanied by annotations. In its representation, each annotation belongs to a lattice τ and attributes values to its corresponding propositional formula. The Favorable Degree of Evidence is symbolized by μ and the Unfavorable Degree of Evidence by λ. In the representative lattice, shown in Figs 2.1(a) and 2.1(b), the symbol T (Inconsistent) is associated with the upper vertex (1, 1), the symbol ⊥ (Paracomplete or Indeterminate) is associated with the lower vertex (0, 0), the letter F (False) is associated with the left-hand vertex (0, 1), and the letter t (True) is associated with the right-hand vertex (1, 0).

Figure 2.1 Paraconsistent Annotated Logic representative lattices: (a) logical extreme values; (b) annotations (μ, λ).
In the study of Paraconsistent Annotated Logic with annotation of two values (PAL2v), the annotation is composed of an ordered pair where one value represents evidence favorable to proposition P and the second value represents the contrary, or unfavorable, evidence to proposition P.
2.2.1 The Unitary Square on the Cartesian Plane (USCP)
Some algebraic interpretations are made for a better representation of an annotation in PAL2v, and to find an interpretation methodology in its representative lattice τ which allows the use of Paraconsistent Logic in the treatment of uncertainties. These studies involve a Unitary Square on the Cartesian Plane (USCP) and a representative lattice of PAL2v. Initially, a Cartesian coordinate system for the plane is adopted, and thus the annotation of a given proposition will be represented by points in the plane. We call Unitary Square on the Cartesian Plane (USCP) the lattice τ with the coordinate system proposed in Fig 2.2.

Figure 2.2 Unitary Square on the Cartesian Plane (USCP).
The values of the Favorable Degree of Evidence μ are displayed on the x-axis, and the values of the Unfavorable Degree of Evidence λ on the y-axis. For each coordinate system, the annotations of τ (Favorable Degree of Evidence μ, Unfavorable Degree of Evidence λ) are identified with different points in the plane. Thus, we associate T to (1, 1), ⊥ to (0, 0), F to (0, 1) and t to (1, 0).
2.2.2 Algebraic Relations between the USCP and the PAL2v Lattice
In the system of Fig 2.2, the annotation (μ, λ) may be identified with a point of the plane in another system. As another coordinate system may be established for τ, we define transformations between the USCP and the lattice τ expressed in this new coordinate system. Just as was done in the USCP, in this new system we may associate T to (0, 1), ⊥ to (0, -1), F to (-1, 0) and t to (1, 0). Thus, the intended lattice will have the coordinate system shown in Fig 2.3.
Figure 2.3 Lattice τ in a new coordinate system, with vertices T = (0, 1), ⊥ = (0, -1), F = (-1, 0) and t = (1, 0).
For each coordinate system adopted, we can see that the annotations (μ, λ) of τ are identified with different points in the plane. We may, then, consider one more coordinate system established for τ, and define transformations between the USCP and the lattice in this new coordinate system. The lattice may be obtained from the USCP through three phases: rescaling, rotation, and translation, as follows:

1. Rescaling by $\sqrt{2}$ (according to Fig 2.4)

Figure 2.4 Rescaling of the USCP by $\sqrt{2}$.

This increase is given by the linear transformation $T_1(x, y) = (\sqrt{2}\,x, \sqrt{2}\,y)$, whose matrix is $\begin{bmatrix} \sqrt{2} & 0 \\ 0 & \sqrt{2} \end{bmatrix}$.
2. 45° rotation in relation to the origin (according to Fig 2.5)

Figure 2.5 45° rotation in relation to the origin.

This rotation in relation to the origin is given by the linear transformation $T_2(x, y) = \left(\frac{\sqrt{2}}{2}x - \frac{\sqrt{2}}{2}y,\ \frac{\sqrt{2}}{2}x + \frac{\sqrt{2}}{2}y\right)$, whose matrix is $\begin{bmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix}$.

3. Translation given by the transformation $T_3(x, y) = (x, y - 1)$ (according to Fig 2.6)

Fig 2.6 Translation of values between the USCP and the PAL2v lattice.
When we make the composition T3 ∘ T2 ∘ T1, we obtain the transformation represented by the following equation:

T(x, y) = (x - y, x + y - 1)    (2.1)
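As a quick numerical check of this composition, the following Python sketch (our own transcription; function names are illustrative) implements T1, T2 and T3 and verifies that their composition matches equation (2.1):

```python
import math

def t1(x, y):
    """Rescaling by sqrt(2)."""
    return (math.sqrt(2) * x, math.sqrt(2) * y)

def t2(x, y):
    """45-degree rotation in relation to the origin."""
    s = math.sqrt(2) / 2
    return (s * x - s * y, s * x + s * y)

def t3(x, y):
    """Translation one unit down."""
    return (x, y - 1.0)

def t(x, y):
    """Composition T3 ∘ T2 ∘ T1, equation (2.1): (x - y, x + y - 1)."""
    return t3(*t2(*t1(x, y)))

x, y = 0.85, 0.4
print(t(x, y))                # ≈ (0.45, 0.25)
print((x - y, x + y - 1.0))   # same pair, as equation (2.1) predicts
```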
2.2.2.1 The Degree of Certainty DC and the Degree of Contradiction Dct
Having the transformation equation (2.1), T(x, y) = (x - y, x + y - 1), we can convert points of the USCP, which represent annotations of τ, into points of the lattice, which also represent annotations of τ. Relating the transformation components T(x, y) to the usual nomenclature of PAL2v, we have: T(x, y) = T(μ, λ).
From the first term obtained in the ordered pair of the transformation equation (2.1) we have: x - y = μ - λ. We call it the Degree of Certainty DC. Therefore, the Degree of Certainty is obtained by:

DC = μ - λ    (2.2)

where: μ = Favorable Degree of Evidence; λ = Unfavorable Degree of Evidence.
The values of the Degree of Certainty DC belong to the set ℜ, vary in the closed interval between -1 and +1, and lie on the horizontal axis of the lattice, which we call the “Degrees of Certainty Axis”. When DC results in +1, the resulting logical state of the paraconsistent analysis is True; when DC results in -1, the resulting logical state of the analysis is False.
From the second term obtained in the ordered pair of equation (2.1) we have: x + y - 1 = μ + λ - 1. We call it the Degree of Contradiction Dct. Therefore, the Degree of Contradiction is obtained by:

Dct = μ + λ - 1    (2.3)

where: μ = Favorable Degree of Evidence; λ = Unfavorable Degree of Evidence.
The values of the Degree of Contradiction Dct belong to the set ℜ, vary in the closed interval between -1 and +1, and lie on the vertical axis, which we call the “Degrees of Contradiction Axis”. When Dct results in +1, the resulting logical state of the paraconsistent analysis is Inconsistent; when Dct results in -1, the resulting logical state of the analysis is Indeterminate.
---------------------------
Example 2.1 We know that, according to the fundamentals of PAL2v, the Favorable Degree of Evidence μ and the Unfavorable Degree of Evidence λ are values that belong to the set of real numbers and lie in the interval [0, 1]. When this maximum variation occurs, determine the maximum value of the Degree of Certainty DC and the maximum value of the Degree of Contradiction Dct.
Resolution: μ may vary from 0 to 1; λ may vary from 0 to 1.
Utilizing equation (2.2) we have:
For μ = 0 and λ = 0: DC = 0 - 0 ⇒ DC = 0
For μ = 0 and λ = 1: DC = 0 - 1 ⇒ DC = -1
For μ = 1 and λ = 0: DC = 1 - 0 ⇒ DC = +1
For μ = 1 and λ = 1: DC = 1 - 1 ⇒ DC = 0
Utilizing equation (2.3) we have:
For μ = 0 and λ = 0: Dct = 0 + 0 - 1 ⇒ Dct = -1
For μ = 0 and λ = 1: Dct = 0 + 1 - 1 ⇒ Dct = 0
For μ = 1 and λ = 0: Dct = 1 + 0 - 1 ⇒ Dct = 0
For μ = 1 and λ = 1: Dct = 1 + 1 - 1 ⇒ Dct = +1
---------------------------
Example 2.2 Consider two values of Degrees of Evidence represented in the USCP: the first value is the Favorable Degree of Evidence μ, equal to 0.85, and the second is the Unfavorable Degree of Evidence λ, equal to 0.4. Bearing in mind that they are values belonging to the set of real numbers in the interval [0, 1], determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the PAL2v representative lattice.
Resolution:
Utilizing equation (2.2) we calculate the Degree of Certainty DC:
For μ = 0.85 and λ = 0.4: DC = 0.85 - 0.4 ⇒ DC = 0.45
Utilizing equation (2.3) we calculate the Degree of Contradiction Dct:
For μ = 0.85 and λ = 0.4: Dct = 0.85 + 0.4 - 1 ⇒ Dct = 0.25
Therefore: DC = 0.45 and Dct = 0.25 will be in the PAL2v representative lattice.
---------------------------
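The two degree equations translate directly into code. A minimal Python sketch (function names are ours), reproducing the values of Example 2.2:

```python
def certainty_degree(mu: float, lam: float) -> float:
    """Degree of Certainty, equation (2.2): DC = mu - lambda."""
    return mu - lam

def contradiction_degree(mu: float, lam: float) -> float:
    """Degree of Contradiction, equation (2.3): Dct = mu + lambda - 1."""
    return mu + lam - 1.0

# Example 2.2: mu = 0.85 and lambda = 0.4
print(certainty_degree(0.85, 0.4))      # 0.45
print(contradiction_degree(0.85, 0.4))  # ≈ 0.25
```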
2.2.2.2 Obtaining the Degrees of Evidence from DC and Dct
With the Degree of Certainty DC and the Degree of Contradiction Dct we can obtain the values of the annotation, represented by the Favorable Degree of Evidence μ and the Unfavorable Degree of Evidence λ. Let the linear transformations F1, F2 and F3 be the respective inverses of T1, T2 and T3:

1. $F_1(x, y) = \left(\frac{\sqrt{2}}{2}x, \frac{\sqrt{2}}{2}y\right)$, which is linear, and whose matrix is $\begin{bmatrix} \frac{\sqrt{2}}{2} & 0 \\ 0 & \frac{\sqrt{2}}{2} \end{bmatrix}$.

2. $F_2(x, y) = \left(\frac{\sqrt{2}}{2}x + \frac{\sqrt{2}}{2}y,\ -\frac{\sqrt{2}}{2}x + \frac{\sqrt{2}}{2}y\right)$, which is also linear, and whose matrix is $\begin{bmatrix} \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \\ -\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{bmatrix}$.

3. $F_3(x, y) = (x, y + 1)$, the inverse of the translation T3.

We get the inverse transformation by making the composition F1 ∘ F2 ∘ F3:

$F(x, y) = \left(\tfrac{1}{2}x + \tfrac{1}{2}y + \tfrac{1}{2},\ -\tfrac{1}{2}x + \tfrac{1}{2}y + \tfrac{1}{2}\right)$

Replacing x by DC and y by Dct, the inverse transformation F is:

$F(\mu, \lambda) = \left(\tfrac{1}{2}D_C + \tfrac{1}{2}D_{ct} + \tfrac{1}{2},\ -\tfrac{1}{2}D_C + \tfrac{1}{2}D_{ct} + \tfrac{1}{2}\right)$    (2.4)
The point T(μ, λ) obtained by transformation T is now represented by the Degrees of Certainty and of Contradiction in what may be considered a lattice associated to PAL2v, according to the fundamentals of PAL2v seen previously. Therefore: T(μ, λ) = (DC, Dct) and F(DC, Dct) = (μ, λ). We see that, by having T and F, we may work either in the USCP or in the lattice, since we can find the values of the lattice from the USCP, and vice-versa, through T and F, according to the following figure:
Fig 2.7 Conversion of values between the USCP and the PAL2v representative lattice: T(x, y) = (μ - λ, μ + λ - 1) maps USCP points into the lattice, and F(μ, λ) = (½DC + ½Dct + ½, -½DC + ½Dct + ½) maps lattice points back into the USCP.
Through the transformations T and F we can work either in the Unitary Square on the Cartesian Plane or in the lattice τ, for the values represented in the former can be represented in the latter, and vice-versa.
---------------------------
Example 2.3 Consider that two values, corresponding to a Degree of Certainty equal to 0.4 and a Degree of Contradiction equal to 0.2, are indicated in the PAL2v representative lattice. Determine the values of the Degrees of Evidence that generated these results.
Resolution: Knowing that DC = 0.4 and Dct = 0.2, the values of the Degrees of Evidence are obtained through equation (2.4):
F(μ, λ) = (½ × 0.4 + ½ × 0.2 + ½, -½ × 0.4 + ½ × 0.2 + ½)
F(μ, λ) = (0.8, 0.4)
Therefore, the values that generated the Degrees of Certainty and Contradiction were:
Favorable Degree of Evidence μ = 0.8
Unfavorable Degree of Evidence λ = 0.4
---------------------------
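A minimal Python sketch of the inverse transformation (2.4), with names of our choosing, checked against Example 2.3:

```python
def evidences_from_lattice(dc: float, dct: float):
    """Inverse transformation F, equation (2.4): recovers the annotation
    (mu, lambda) from the Degrees of Certainty and Contradiction."""
    mu = 0.5 * dc + 0.5 * dct + 0.5
    lam = -0.5 * dc + 0.5 * dct + 0.5
    return mu, lam

# Example 2.3: DC = 0.4 and Dct = 0.2
print(evidences_from_lattice(0.4, 0.2))  # ≈ (0.8, 0.4)
```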
2.2.2.3 Logical Operations in the Unitary Square on the Cartesian Plane (USCP) and in the Lattice of PAL2v
By definition, in PAL2v, a paraconsistent value is represented by an atomic formula P(μ, λ), where the annotation is given by the Favorable μ and Unfavorable λ Degrees of Evidence with respect to proposition P. Logical operations are defined on the paraconsistent values in the USCP as follows. Given two paraconsistent values P1 and P2 in the USCP, with coordinates (x1, y1) and (x2, y2), we define ¬P1 (negation) as the coordinate point (y1, x1). We define P1 ∨ P2 (disjunction) as the paraconsistent value of coordinates (max{x1, x2}, min{y1, y2}), and P1 ∧ P2 (conjunction) as the value of coordinates (min{x1, x2}, max{y1, y2}).
Once we define these operations in the USCP, through T we can rewrite them directly in τ. Given two paraconsistent values P1 and P2 in τ, we have:
a) ¬τ P1 = T(¬F(x1, y1)) = T(b, a) = (b - a, b + a - 1);
b) P1 ∨τ P2 = T(F(P1) ∨ F(P2)) = T((a, b) ∨ (c, d)) = T(max{a, c}, min{b, d}) = (max{a, c} - min{b, d}, max{a, c} + min{b, d} - 1);
c) P1 ∧τ P2 = T(F(P1) ∧ F(P2)) = T((a, b) ∧ (c, d)) = T(min{a, c}, max{b, d}) = (min{a, c} - max{b, d}, min{a, c} + max{b, d} - 1);
with:
a = ½x1 + ½y1 + ½;  b = -½x1 + ½y1 + ½;  c = ½x2 + ½y2 + ½;  d = -½x2 + ½y2 + ½.
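A small Python sketch of the three USCP operations and the map into τ (our transcription; tuples hold the pair (μ, λ)):

```python
def p_not(p):
    """Negation in the USCP: swap favorable and unfavorable evidence."""
    x, y = p
    return (y, x)

def p_or(p1, p2):
    """Disjunction: max of favorable, min of unfavorable evidence."""
    return (max(p1[0], p2[0]), min(p1[1], p2[1]))

def p_and(p1, p2):
    """Conjunction: min of favorable, max of unfavorable evidence."""
    return (min(p1[0], p2[0]), max(p1[1], p2[1]))

def to_lattice(p):
    """Transformation T: carries a USCP value to (DC, Dct) in τ."""
    x, y = p
    return (x - y, x + y - 1.0)

p1, p2 = (0.9, 0.2), (0.6, 0.7)
print(p_or(p1, p2), p_and(p1, p2))   # (0.9, 0.2) (0.6, 0.7)
print(to_lattice(p_or(p1, p2)))      # ≈ (0.7, 0.1)
```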
Thus, we can work with the paraconsistent values and these three operations either in USCP or in τ. Next, we define one more operation with paraconsistent values.
Let us consider a collection of n paraconsistent values a1, a2, ..., an with the respective coordinates (x1, y1), (x2, y2), ..., (xn, yn) in the USCP. We define a sum ⊕ in the following way:

$a_1 \oplus a_2 \oplus \cdots \oplus a_n = \left(\dfrac{\sum_{i=1}^{n} x_i}{n},\ \dfrac{\sum_{i=1}^{n} y_i}{n}\right)$

It is easy to see that a1 ⊕ a2 ⊕ ... ⊕ an is a value in the USCP, that is, the operation is closed in the USCP. We can define a new operation ⊕τ in τ for a1, a2, ..., an in the following way. First, let us take a'i as being F(ai) for every i ∈ {1, ..., n}. This way, the sum ⊕τ may be defined as follows. Let a1, a2, ..., an be paraconsistent values in τ. We define a1 ⊕τ a2 ⊕τ ... ⊕τ an = T(a'1 ⊕ ... ⊕ a'n) and, by solving it, we have:

$a_1 \oplus_\tau a_2 \oplus_\tau \cdots \oplus_\tau a_n = \left(\dfrac{\sum_{i=1}^{n} x_i}{n},\ \dfrac{\sum_{i=1}^{n} (y_i + 1)}{n} - 1\right)$    (2.5)
We find here a set of operations in USCP and τ, where the operations in τ were naturally defined through the transformation F and T.
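A minimal Python sketch of the two sums (function names are ours); as the comments note, equation (2.5) algebraically reduces to the coordinate-wise average as well:

```python
def psum_uscp(values):
    """Sum ⊕ in the USCP: coordinate-wise average of (mu, lambda) pairs."""
    n = len(values)
    return (sum(x for x, _ in values) / n, sum(y for _, y in values) / n)

def psum_lattice(values):
    """Sum ⊕τ in the lattice, equation (2.5), on (DC, Dct) pairs.
    Note sum(y + 1)/n - 1 equals the plain average of the y values."""
    n = len(values)
    return (sum(x for x, _ in values) / n,
            sum(y + 1.0 for _, y in values) / n - 1.0)

print(psum_uscp([(0.9, 0.2), (0.5, 0.6)]))      # (0.7, 0.4)
print(psum_lattice([(0.4, 0.2), (0.0, -0.4)]))  # ≈ (0.2, -0.1)
```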
2.2.3 Geometric Relations between the USCP and the PAL2v Lattice
The representation of the values of the Degrees of Evidence in the Unitary Square on the Cartesian Plane allows an easy visualization of the Degrees of Contradiction Dct and of Certainty DC in the PAL2v representative lattice. Therefore, from the values of the Favorable and Unfavorable Degrees of Evidence on the x and y axes, respectively, we may do a geometric analysis to obtain the Degrees of Certainty DC and of Contradiction Dct. Figure 2.8 shows this representation. According to the mathematical analysis, from the Unitary Square on the Cartesian Plane (USCP) we can calculate the Degree of Certainty DC through equation (2.2): DC = μ - λ. In the same way, we can calculate the Degree of Contradiction Dct through equation (2.3): Dct = μ + λ - 1. According to this geometric interpretation, in a Paraconsistent Analysis System which utilizes PAL2v, the values of the Degrees of Contradiction and of Certainty are related to the distances from the interpolation point of the Favorable and Unfavorable Degrees of Evidence (μ, λ) to the line segments BD and AC in the USCP. In a decision-making model in AI whose inputs are the Degrees of Evidence, it is through the values obtained for the Degrees of Contradiction and of Certainty that the system will make decisions. As the Degrees of Evidence only take values in the closed interval between 0 and 1, it follows from the equations that the Degree of Contradiction will vary from -1 to +1. These results may be confirmed by a simple inspection of the USCP in Figure 2.8, where we verify that the value of Dct corresponds to the distance from the interpolation point of the Favorable and Unfavorable Degrees of Evidence (μ, λ) to the line segment that connects point D = (1, 0), True, to point B = (0, 1), False.
Fig 2.8 Geometric representation of the Unitary Square on the Cartesian Plane and the PAL2v lattice. In the USCP, the Favorable Degree of Evidence μ is on the x-axis and the Unfavorable Degree of Evidence λ is on the y-axis, with vertices A = (0, 0) Indeterminate ⊥, B = (0, 1) False F, C = (1, 1) Inconsistent T and D = (1, 0) True t; in the lattice, DC = ±1 and Dct = ±1 mark the extreme vertices.
The value Dct = -1, which happens at point A = (0, 0), represents a maximum negative contradiction, and the value Dct = +1, which happens at point C = (1, 1), means we have a maximum positive contradiction. In practice, when the sensors that portray real situations of Uncertain Knowledge present values that result in these Degrees of Contradiction, they are bringing completely contradictory information. The closer the interpolation point of the favorable and unfavorable degrees of evidence (μn, λn) gets to the line segment BD, the smaller the Degree of Contradiction will be. This reduction of Dct represents a lesser contradiction among the input information. It is also seen in the equation that when the sum of the Favorable and Unfavorable Degrees of Evidence (μ + λ) equals 1, the Degree of Contradiction is zero, and the interpolation point (μ, λ) will be on line BD. In this case, as the Degree of Contradiction Dct is equal to 0, there is no contradiction among the input signals; it indicates that the evidences concerning the analyzed proposition do not contradict each other. A simple look at the USCP shows that the Degree of Certainty DC corresponds to the distance from the interpolation point of the Favorable and Unfavorable Degrees of Evidence to the line segment that connects point A = (0, 0), Indeterminate, to point C = (1, 1), Inconsistent. The value DC = -1, which corresponds to point B = (0, 1), means intuitively that we have maximum certainty in the negation of the Proposition. On the other hand, the value DC = +1, which corresponds to D = (1, 0), means intuitively that we have maximum certainty in the affirmation of the proposition. The closer the interpolation point of the Favorable and Unfavorable Degrees of Evidence (μn, λn) gets to the line segment AC, represented in the USCP, the smaller the Degree of Certainty will be. This reduction of DC represents a lesser certainty about the input information, because it means a greater coincidence between the Favorable and Unfavorable Degrees of Evidence with respect to the proposition. According to equation (2.2) we see that the value of the Degree of Certainty DC may vary from -1 to +1. In any situation,
when the Favorable Degree of Evidence is equal to the Unfavorable Degree of Evidence (μ = λ), the result is a Degree of Certainty of zero, and the interpolation point will be on the line segment AC. In these cases, due to the existing contradiction, there will be no certainty, but only Indefinition, among the signals. In practice, when the sensors feeding the paraconsistent analysis present values that result in these Degrees of Certainty, they bring inconclusive information about the analyzed proposition. The values of the Degrees of Certainty DC and of Contradiction Dct express how close the interpolation points are to the vertices of the PAL2v lattice. In an AI analysis project, these values offer conditions for more precise and accurate decisions.
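The geometric reading above can be checked numerically. In the sketch below (our own; the 1/√2 scale factor comes from the rescaling of the lattice by √2), the distance from an evidence point (μ, λ) to line BD equals |Dct|/√2, and its distance to line AC equals |DC|/√2:

```python
import math

def dist_to_line(px, py, a, b, c):
    """Euclidean distance from point (px, py) to the line a*x + b*y + c = 0."""
    return abs(a * px + b * py + c) / math.hypot(a, b)

mu, lam = 0.85, 0.4
dc, dct = mu - lam, mu + lam - 1.0

# Line BD joins B = (0, 1) False and D = (1, 0) True:                 x + y - 1 = 0
# Line AC joins A = (0, 0) Indeterminate and C = (1, 1) Inconsistent: x - y = 0
print(math.isclose(dist_to_line(mu, lam, 1, 1, -1), abs(dct) / math.sqrt(2)))  # True
print(math.isclose(dist_to_line(mu, lam, 1, -1, 0), abs(dc) / math.sqrt(2)))   # True
```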
2.2.3.1 Representation of the lattice associated to PAL2v constructed with values of Degrees of Contradiction and Certainty
We saw that the Degrees of Certainty DC and of Contradiction Dct may be calculated through the two equations (2.2) and (2.3), originated from the mathematical analysis of the Degrees of Evidence displayed in the Unitary Square on the Cartesian Plane (USCP).
Fig 2.9 Representation of the certainty and contradiction axes of the PAL2v lattice with values: the horizontal Degree of Certainty axis, DC = μ - λ, runs from F at -1 to t at +1, and the vertical Degree of Contradiction axis, Dct = μ + λ - 1, runs from ⊥ at -1 to T at +1.
For all the possible values of the Degrees of Evidence, the resulting values of the Degree of Certainty DC lie on the horizontal line segment of the lattice associated to Paraconsistent Annotated Logic. These values, displayed horizontally, compose the axis we call the Degree of Certainty axis. In the same way, for all the possible values of the Degrees of
Evidence, the resulting values of the Degree of Contradiction Dct compose the axis we call the Degree of Contradiction axis. We can now make some considerations about the results of a paraconsistent analysis with the values of the Degrees of Contradiction and of Certainty which form the representation of the lattice associated to PAL2v. First, let us take from the horizontal, or certainty, axis two arbitrary external limit values called:
VCCS = Certainty Control Superior Value
VCCI = Certainty Control Inferior Value
These two values determine when the resulting Degree of Certainty is high enough for the analyzed proposition to be considered completely True or completely False. The decision making in a Paraconsistent Analysis System related to the certainty axis will be considered under the following verifications:
a) The Certainty Control Superior Value VCCS will give the bearable minimum positive measure of the resulting logical state True.
b) The Certainty Control Inferior Value VCCI will give the bearable minimum negative measure of the resulting logical state False.
c) The values between the positive measure of the Superior Certainty Control and the negative measure of the Inferior Certainty Control will be considered Indefinite.
Likewise, let us take from the vertical, or Contradiction, axis two arbitrary external limit values we call:
VctCS = Contradiction Control Superior Value
VctCI = Contradiction Control Inferior Value
These two values determine when the Degree of Contradiction is so high that we must consider the proposition completely Inconsistent or completely Indeterminate. The decision making in a Paraconsistent Analysis System related to the Contradiction axis will be considered under the following verifications:
a) The Contradiction Control Superior Value VctCS will give the bearable maximum positive measure of the resulting logical state Inconsistent.
b) The Contradiction Control Inferior Value VctCI will give the bearable maximum negative measure of the resulting logical state Indeterminate.
c) The values below the maximum positive measure of the Contradiction Control Superior Value and above the maximum negative measure of the Contradiction Control Inferior Value will be considered Indefinite.
Figure 2.10 shows these considerations. With the procedures done in the USCP and, lastly, in the lattice, we can make an interpretation of the Paraconsistent Annotated Logic by means of value equations. These procedures enable the construction of algorithms and the creation of Paraconsistent Analysis Systems through computational processing.
Figure 2.10 Representation of the PAL2v lattice with adjustable limit control values indicated on the axes: VCCS (Certainty Control Superior Value) and VCCI (Certainty Control Inferior Value) on the Degree of Certainty axis DC = μ - λ, and VctCS (Contradiction Control Superior Value) and VctCI (Contradiction Control Inferior Value) on the Degree of Contradiction axis Dct = μ + λ - 1.
2.3 The Para-Analyzer algorithm
The PAL2v representative lattice can be divided, or delimited, internally into several regions of different sizes and shapes by calculating values on the axes that compose it, thus obtaining a discretization. These delimited regions are related to the resulting logical states, which in turn are obtained by the interpolation of the Degree of Certainty DC and the Degree of Contradiction Dct. Hence, for every interpolation point between the Degrees of Certainty and of Contradiction there will be a unique delimited region in the lattice, equivalent to a logical state resulting from the analysis. The number of delimited regions into which the lattice is divided depends on the intended precision of the analysis. As an example, figure 2.11 shows a representation of the PAL2v lattice constructed with values of Degrees of Certainty and of Contradiction, sectioned into 12 regions. Thus, in the analysis, we will get one of 12 possible resulting logical states as an answer for decision making. In this representation, we can verify that, besides the known logical states situated in the four vertices of the lattice, which are called extreme logical states, each of the eight internal (or non-extreme) logical states receives a name and a symbol according to its proximity to the extreme states of the corresponding vertices. Bearing this figure in mind, we can make a description of the lattice, thus obtaining an algorithm of PAL2v we call “Para-Analyzer”. The Para-Analyzer Algorithm describes the delimited regions through the values of the Degrees obtained by the equations, and compares them with the limit values in order to analyze information which may be inconsistent.
Fig 2.11 Representation of the PAL2v lattice sectioned into 12 delimited regions, originating 12 resulting logical states. The limit values VCCS = C1, VCCI = C2, VctCS = C3 and VctCI = C4 separate the four extreme regions (t, F, T, ⊥) from the eight internal regions (T→F, T→t, QF→T, Qt→T, QF→⊥, Qt→⊥, ⊥→F, ⊥→t).
The logical states represented by the regions occupying the vertices of the lattice (True, False, Inconsistent and Indeterminate) are called Extreme Logical States. The logical states represented by internal regions of the lattice, not close to its vertices, are called Non-Extreme Logical States. We have, then, a representation of the four Extreme Logical States and the eight Non-Extreme Logical States that compose the lattice, with their corresponding denominations.
The extreme logical states are:
T ⇒ Inconsistent
F ⇒ False
⊥ ⇒ Indeterminate
t ⇒ True
And the non-extreme logical states:
⊥→F ⇒ Indeterminate tending to False
⊥→t ⇒ Indeterminate tending to True
T→F ⇒ Inconsistent tending to False
T→t ⇒ Inconsistent tending to True
Qt→T ⇒ Quasi-true tending to Inconsistent
QF→T ⇒ Quasi-false tending to Inconsistent
QF→⊥ ⇒ Quasi-false tending to Indeterminate
Qt→⊥ ⇒ Quasi-true tending to Indeterminate
The input variable values are represented by:
μ ⇒ Favorable Degree of Evidence
λ ⇒ Unfavorable Degree of Evidence
and the related values by:
Dct ⇒ Degree of Contradiction, where Dct = μ + λ - 1, with 0 ≤ μ ≤ 1 and 0 ≤ λ ≤ 1
DC ⇒ Degree of Certainty, where DC = μ - λ, with 0 ≤ μ ≤ 1 and 0 ≤ λ ≤ 1
The control variables for optimization resources are:
VCCS ⇒ Certainty Control Superior Value
VctCS ⇒ Contradiction Control Superior Value
VCCI ⇒ Certainty Control Inferior Value
VctCI ⇒ Contradiction Control Inferior Value
A description is made of the inputs and outputs involved in the analysis process, with all the variables and values related to the PAL2v lattice. As a result of the various descriptive sentences, a Para-Analyzer algorithm is presented for implementation in computational programs.
2.3.1 The Paraconsistent Annotated Logic with annotation of two values “Para-Analyzer” algorithm

*/ Definitions of the values */
VCCS = C1    */ Definition of the Certainty Control Superior Value */
VCCI = C2    */ Definition of the Certainty Control Inferior Value */
VctCS = C3   */ Definition of the Contradiction Control Superior Value */
VctCI = C4   */ Definition of the Contradiction Control Inferior Value */
*/ Input variables */
μ, λ
*/ Output variables */
Discrete output = S1
Analogical output = S2a
Analogical output = S2b
*/ Mathematical expressions */
being: 0 ≤ μ ≤ 1 and 0 ≤ λ ≤ 1
Dct = μ + λ - 1
DC = μ - λ
*/ Determination of the extreme logical states */
For DC ≥ C1 then S1 = t
For DC ≤ C2 then S1 = F
For Dct ≥ C3 then S1 = T
For Dct ≤ C4 then S1 = ⊥
*/ Determination of the non-extreme logical states */
For 0 ≤ DC < C1 and 0 ≤ Dct < C3:
    if DC ≥ Dct then S1 = Qt→T else S1 = T→t
For 0 ≤ DC < C1 and C4 < Dct ≤ 0:
    if DC ≥ |Dct| then S1 = Qt→⊥ else S1 = ⊥→t
For C2 < DC ≤ 0 and C4 < Dct ≤ 0:
    if |DC| ≥ |Dct| then S1 = QF→⊥ else S1 = ⊥→F
For C2 < DC ≤ 0 and 0 ≤ Dct < C3:
    if |DC| ≥ Dct then S1 = QF→T else S1 = T→F
S2a = Dct
S2b = DC
*/ END */
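A runnable transcription of the Para-Analyzer in Python follows (a sketch of ours: the state labels are strings, and the default limit values C1 to C4 of ±0.5 are merely illustrative, since the limits are left to external adjustments):

```python
def para_analyzer(mu, lam, c1=0.5, c2=-0.5, c3=0.5, c4=-0.5):
    """Paraconsistent analysis of an annotation (mu, lam), 0 <= mu, lam <= 1.
    Returns (S1, S2a, S2b): resulting logical state, Degree of Contradiction
    and Degree of Certainty."""
    dc = mu - lam          # Degree of Certainty, equation (2.2)
    dct = mu + lam - 1.0   # Degree of Contradiction, equation (2.3)

    # Extreme logical states
    if dc >= c1:
        s1 = "t (True)"
    elif dc <= c2:
        s1 = "F (False)"
    elif dct >= c3:
        s1 = "T (Inconsistent)"
    elif dct <= c4:
        s1 = "⊥ (Indeterminate)"
    # Non-extreme logical states
    elif dc >= 0 and dct >= 0:
        s1 = "Qt→T" if dc >= dct else "T→t"
    elif dc >= 0:                 # dct < 0
        s1 = "Qt→⊥" if dc >= abs(dct) else "⊥→t"
    elif dct < 0:                 # dc < 0
        s1 = "QF→⊥" if abs(dc) >= abs(dct) else "⊥→F"
    else:                         # dc < 0 and dct >= 0
        s1 = "QF→T" if abs(dc) >= dct else "T→F"
    return s1, dct, dc

print(para_analyzer(0.85, 0.4))  # ('Qt→T', ≈0.25, 0.45)
```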
2.4 Para-Analyzer Algorithm Application
In a Paraconsistent Analysis System, the attribution of values to the Favorable and Unfavorable Degrees of Evidence aims at supplying an answer to the problem of contradictory signals. This is done by collecting evidences and analyzing them with the Para-Analyzer Algorithm. The system will try to change its behavior so that the intensity of the contradictions diminishes. As the Favorable and Unfavorable Degrees of Evidence vary between 0.0 and 1.0, we may obtain the values of the Degrees of Contradiction and of Certainty as an answer at any time. Through the extent of these values, considered as outputs, we will know the certainty about the proposition and whether there is contradiction or not.
Fig 2.12 Paraconsistent Analysis Basic System: the inputs are the Favorable Degree of Evidence μ and the Unfavorable Degree of Evidence λ; the PAL2v paraconsistent analysis produces the resulting logical state as output.
Utilizing the Para-Analyzer Algorithm, the system may also generate a decision based on one of the 12 logical states obtained as output by comparing the control values with the values of the Degrees of Certainty and of Contradiction.
In the practical procedure of PAL2v application, the Favorable and Unfavorable Degrees of Evidence are considered as system input information, and the logical states represented internally and in the vertices of the lattice are the outputs resulting from the paraconsistent analysis. Roughly speaking, a Paraconsistent Control System that uses a Para-Analyzer Algorithm works in three phases:
1- The system receives the information. Generally these values come from sensors or from experts, after undergoing a normalization process. The pieces of information are two independent, variable values:
a) the Favorable Degree of Evidence, which is a real value between 0.0 and 1.0;
b) the Unfavorable Degree of Evidence, which is a real value between 0.0 and 1.0.
2- The system does the processing, utilizing the equations:
a) Dct = μ + λ - 1, to find the Degree of Contradiction value;
b) DC = μ - λ, to find the Degree of Certainty value.
3- The system concludes, utilizing the conditionals:
a) if there is a high Degree of Contradiction, then there is no certainty about the decision, and new evidences must be searched for;
b) if there is a low Degree of Contradiction, then we can formulate a conclusion, since there is a high Degree of Certainty.
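These three phases translate into a short Python sketch (ours; the threshold `high` stands in for the externally adjusted limit values discussed below):

```python
def analyze(mu: float, lam: float, high: float = 0.6):
    """Three-phase paraconsistent analysis: receive, process, conclude."""
    # Phase 2: processing with equations (2.2) and (2.3)
    dct = mu + lam - 1.0
    dc = mu - lam
    # Phase 3: conclusion (degrees compared in modulus against the limit)
    if abs(dct) >= high:
        return dc, dct, "high contradiction: search for new evidence"
    if abs(dc) >= high:
        return dc, dct, "high certainty: conclude"
    return dc, dct, "indefinite"

print(analyze(0.9, 0.1))  # (0.8, 0.0, 'high certainty: conclude')
print(analyze(0.9, 0.9))  # (0.0, 0.8, 'high contradiction: search for new evidence')
```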
Fig 2.13 Representation of a basic paraconsistent analysis system using the PAL2v lattice sectioned into 12 delimited regions: the inputs μ and λ feed the Para-Analyzer Algorithm, which, with the limit values VCCS, VCCI, VctCS and VctCI, produces the Degree of Certainty DC, the Degree of Contradiction Dct and the resulting logical states for the conditional analysis and decision.
We should bear in mind that these high Degrees of Contradiction and of Certainty may be positive or negative; that is, these values must be considered in modulus, and the limits that define what is high or low depend exclusively on the limit values established by external adjustments.
The Para-Analyzer Algorithm was first used in the Paraconsistent Logic Controller of the mobile autonomous robot Emmy. In this project, the paraconsistent analysis generated the conditions for decision making concerning the avoidance of obstacles while moving in non-structured environments. For the Robot's paraconsistent system to make an analysis, it receives two values, the Favorable Degree of Evidence and the Unfavorable Degree of Evidence, with which it calculates the values of the Degree of Certainty DC and the Degree of Contradiction Dct. From the results obtained with these two values, the Paraconsistent Controller determines the logical states represented by the 12 regions of the lattice.
Figure 2.14 Mobile Robot Emmy, featuring its main parts: obstacle-detecting ultrasonic sensors (Parasonic System); Paraconsistent Logical Controller (Para-Control); external adjustment to optimize the lattice regions; performance microprocessor; power circuits and decoders; and power supply.
In this application, for the Paraconsistent Controller to capture information about the presence of obstacles in its course, it uses a circuit that transforms distance measurements into voltage values by means of two ultrasound sensors synchronized by a microprocessor. The sensor circuit captures and presents, at the output, two voltage signals that vary from 0 to 5 volts. The signal that represents the Favorable Degree of Evidence μ varies its amplitude proportionally to the distance between the Robot and the obstacle, and the signal that represents the Unfavorable Degree of Evidence λ varies its amplitude inversely proportionally. Therefore, the two signals represent the Favorable Degree of Evidence and the Unfavorable Degree of Evidence which refer to the proposition “There is an obstacle ahead”. In the Paraconsistent Logic Controller, the values of μ and λ are considered as inputs. They are put through the equations, resulting in DC and Dct, which are obtained as analogical
values. A binary word composed of 12 digits is also generated; in this binary word, each active digit corresponds to the output resulting logical state. With the values of the Degrees of Certainty and of Contradiction calculated, the Controller selects one of the 12 logical states of the lattice as output for decision making. The decision making for swerving away from an obstacle is based on the results obtained by the Para-Analyzer Algorithm. For a Degree of Certainty close to +1 and a Degree of Contradiction close to 0, the point interpolated by the two values is in the region located close to the vertex that represents the logical state True. Therefore, the analysis affirms that there is an obstacle ahead, confirming the Proposition; in this case, the decision is to perform a deviation. For a Degree of Certainty close to -1 and a Degree of Contradiction close to 0, the point interpolated by the two values will be in the region located close to the vertex that represents the logical state False. Therefore, the analysis affirms there is no obstacle ahead, refuting the Proposition; in this case, the decision is to allow the robot to go on ahead. When, in both situations described above, the Degree of Contradiction presents values close to +1 or -1, the interpolation point between DC and Dct will be located in regions distant from the states True and False; the decision is then to try to reduce the contradictions, making the robot go ahead more slowly or swerve from obstacles at different angles. Other applications of the Para-Analyzer Algorithm were carried out in several knowledge fields, with significant results. More details can be found in the references.
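In the spirit of the decision rules just described, a hypothetical controller routine could look as follows (a sketch of ours: the threshold and action names are illustrative, not the robot's actual parameters):

```python
def emmy_decision(dc: float, dct: float, limit: float = 0.7):
    """Illustrative obstacle-avoidance decision from (DC, Dct)."""
    if abs(dct) >= limit:
        return "contradictory readings: slow down and re-analyze"
    if dc >= limit:
        return "obstacle ahead confirmed: swerve"
    if dc <= -limit:
        return "no obstacle: go ahead"
    return "indefinite: proceed cautiously"

print(emmy_decision(0.9, 0.05))  # obstacle ahead confirmed: swerve
print(emmy_decision(-0.8, 0.0))  # no obstacle: go ahead
print(emmy_decision(0.1, 0.9))   # contradictory readings: slow down and re-analyze
```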
2.5 Final Remarks
In this chapter we studied an interpretation method for the PAL2v lattice. This methodology is expressed by values of the Degrees of Certainty and of Contradiction, and it transforms the lattice from a theoretical, representative symbol of the Logic into a mathematical tool which allows comparisons and calculations even with uncertain information. This enabled the creation of the Para-Analyzer Algorithm, which carries out the paraconsistent analysis by converting the values of the Favorable and Unfavorable Degrees of Evidence into values of the Degrees of Contradiction and of Certainty. Thus, this methodology can be utilized to create systems able to treat uncertainties, whether through a programming language or through hardware. In the description of the figure that represents the PAL2v lattice, the input signals are analyzed by the Para-Analyzer Algorithm, resulting in 12 logical states which, in practice, may be considered as a binary word of 12 digits. At the end of the analysis, only one of the 12 logical states will result; this means that only one bit of the word will be active, thus signaling a decision. The Para-Analyzer Algorithm may be considered as a Paraconsistent Analysis System whose inputs are the Favorable and Unfavorable Degrees of Evidence. In this Paraconsistent Analysis System, the values of the Degrees of Certainty DC and of Contradiction Dct will be the outputs, which may be utilized in a continuous control. Moreover, the whole analysis process may be optimized through the limit values that determine the shape of the regions representing logical states in the lattice. The application of Paraconsistent Annotated Logic with a lattice composed of two-valued annotations provides conditions for new control resources, with better computational possibilities, improving significantly the performance of the
computational program. Thus, paraconsistent analysis algorithms may be developed and applied to AI Systems. Using the paraconsistent analysis with the methodology studied, where the annotation may be interpreted as evidences, the contradictory information which was rejected before now has an important role in decision making. In the next chapter we will see how the representative figure of the lattice associated to PAL2v may be interpreted to form new, more precise algorithms, able to give uncertain information a suitable treatment by means of Paraconsistent Analysis Networks.
Exercises
2.1 How is uncertain knowledge defined?
2.2 Why does Evidential Logic seem to be more appropriate to treat Uncertain Knowledge?
2.3 How will PAL2v act to analyze information extracted from Uncertain Knowledge?
2.4 In the PAL2v methodology, the Favorable Degree of Evidence μ and the Unfavorable Degree of Evidence λ are values on the Unitary Square on the Cartesian Plane belonging to the set of real numbers within the interval [0,1]. Determine the maximum values obtained for the Degrees of Certainty DC and Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.5 Suppose that in the Unitary Square on the Cartesian Plane there are two values of Degrees of Evidence where the Favorable Degree of Evidence μ is equal to 0.75 and the Unfavorable Degree of Evidence λ is equal to 0.45. Bearing in mind that they belong to the set of real numbers and are in the interval [0,1], determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.6 Suppose that in the Unitary Square on the Cartesian Plane there are two values of Degrees of Evidence where the favorable Evidence μ is equal to 0.25 and the unfavorable Evidence λ is equal to 0.85. Bearing in mind that they belong to the set of real numbers and are in the interval [0,1], determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.7 Suppose that in the Unitary Square on the Cartesian Plane there are two values of Degrees of Evidence where the favorable Evidence μ is equal to 0.5 and the unfavorable Evidence λ is equal to 0.5. Bearing in mind that they belong to the set of real numbers and are in the interval [0,1], determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.8 Suppose a patient receives the diagnosis from doctor M1 as follows: “The results of the required exams lead me to affirm you have 82% probability of having pneumonia”. Dissatisfied with the diagnosis, the patient looks for doctor M2, who, after analyzing the exams, affirms: “The results of the required exams lead me to affirm you have 82% probability of not having pneumonia”. Consider the information from doctor M1 as favorable Evidence on the USCP and the information from doctor M2 as unfavorable evidence on the USCP for the Proposition “The patient has pneumonia”, and determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.9 Suppose a patient receives a diagnosis from doctor M1 as follows: “The results of the required exams lead me to affirm you have 72% probability of not having pneumonia”. Dissatisfied with the diagnosis, the patient looks for doctor M2, who, after analyzing the exams, affirms: “The results of the required exams lead me to affirm you have 76% probability of having pneumonia”. Consider the information from doctor M1 as unfavorable Evidence on the USCP and the information from doctor M2 as favorable evidence on the USCP for the proposition “The patient has pneumonia”, and determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.10 Suppose that a patient receives a diagnosis from doctor M1 as follows: “The results of the required exams lead me to affirm you have 57% probability of having pneumonia”. Dissatisfied with the diagnosis, the patient looks for doctor M2, who, after analyzing the exams, affirms: “The results of the required exams lead me to affirm you have 96% probability of not having pneumonia”. Consider the information from doctor M1 as unfavorable Evidence on the USCP and the information from doctor M2 as favorable evidence on the USCP for the proposition “The patient doesn’t have pneumonia”, and determine the corresponding Degrees of Certainty DC and of Contradiction Dct in the representative figure of the lattice associated to PAL2v.
2.11 Suppose that in the representative figure of the lattice associated to PAL2v there are two values: Degree of Certainty equal to 0.5 and Degree of Contradiction equal to 0.1. Determine the values of the Degrees of Evidence that generated these results.
2.12 In a paraconsistent analysis utilizing the representation of the PAL2v lattice with delimited regions, describe each one of the following values, what they mean, and what they are used for:
VCCS ⇒ Certainty Control Superior Value
VctCS ⇒ Contradiction Control Superior Value
VCCI ⇒ Certainty Control Inferior Value
VctCI ⇒ Contradiction Control Inferior Value
2.13 Enumerate the differences between an Analysis System designed with the fundamentals of Classical Logic and a System based on PAL2v.
2.14 In the Para-Analyzer Algorithm, what are the outputs that may be used for decision making?
2.15 What are Extreme logical states? What are Non-extreme logical states?
2.16 What is the basic operation of a System that utilizes PAL2v for the analysis and treatment of signals?
2.17 Develop, in language C or another common computational language, a program that does paraconsistent analysis of two input signals through the Para-Analyzer Algorithm.
Part 2
Paraconsistent Analysis Networks (PANet)
CHAPTER 3
Fundamentals of Paraconsistent Analysis Systems
Introduction
With the technological advance, it has become impossible to solve the problems of inconsistencies by simply ignoring them, or by considering them refuted as false or confirmed as true, as is done by Classical Logic. We know the real world is not like that, because physical statements, for example, may be true in one situation and false in another. In scientific research, we sometimes abandon “right logical truths” which, when brought to a more precise Reality, do not correspond to the facts. This brings the idea that truth is something cumulative; therefore, truth and falsehood may be marked by Degrees of Evidence. There will be cases in which the proposition may be true and the “inferences” are illegitimate. Therefore, valid arguments may have true or false “conclusions”; the validity of an argument does not guarantee the truth of its conclusion. With these considerations, the logical rational process must not ignore the contradictions, but try to withdraw from them information which may be relevant for decision making. Thus, Paraconsistent Annotated Logic may be a good tool to treat data originated from Uncertain Knowledge. In this chapter we will present new algorithms obtained from the methodology exposed in the previous chapter, with which we can construct systems, or Paraconsistent Analysis Nodes (PANs), for decision making. Each algorithm represents a PAN, which may reveal a different approach for treating uncertain signals. They will be utilized, according to the designed configurations, in paraconsistent decision and control networks.
3.1 Uncertainty Treatment Systems for decision making
Decision-making systems that deal with uncertain knowledge must be able to represent, manipulate, and communicate data regarded as imprecise, inconsistent, partially ignored, and even incomplete information. The presence of uncertain data in a knowledge-based system may be caused by the many sources of information. Among them, we may cite those known to display partial reliability, those that present imprecision in the representation language in which the information is expressed, those that do not offer complete information, and those that join or summarize information from several sources. Knowledge-based systems are often faced with uncertain knowledge because the databases they deal with are rarely complete or exact; therefore, the information treatment project must be prepared to deal with such adverse situations.
In the area of treating signals originated from uncertain knowledge, there are several formal models available for the treatment of uncertainties. However, in many cases, these processes have been carried out through approaches based on representations and combinations of rules which are supported neither by a well-founded theory nor by well-defined semantics. Among the most traditional existing approaches for modeling and treating uncertainties, we find:
1. Bayes rule;
2. Modified Bayes rule;
3. Certainty factor, based on the confirmation theory;
4. Dempster-Shafer Theory;
5. Possibility Theory;
6. Default Reasoning;
7. Endorsements Theory;
8. Rough Set Theory.
A decision-making system must be robust and well-founded enough to respond to theoretical criteria. It must be supported by an adequate theory of uncertainties that enables verification within determined limits, regardless of the application domain. Based on these considerations, an uncertainty evaluation system must follow a few criteria, which can be classified as follows:
1- The system must be able to generate results which allow a good interpretation. The results of uncertainty treatment must be significant, clear, and precise enough to justify the conclusions. Therefore, the results must be exposed clearly and precisely, so that the system can conclude and set off the corresponding actions. Clearness and precision will allow the system to combine the results and update values.
2- The system must be able to deal with imprecision. Uncertainty treatment must be able to model partial or incomplete ignorance of limited or conflicting information, as well as imprecise statements of uncertainty.
3- The system must enable calculation with uncertain values. In the computation of uncertainty treatment results, there must be rules to combine values, update them in the light of new evidence, and use them to calculate other uncertainties, allowing conclusions capable of offering subsidies for decision making.
4- The system must be able to supply consistent results. In uncertainty treatment, the system must supply methods that verify the consistency of all the uncertainty statements and of all default suppositions. The calculation rules must guarantee that all the conclusions are consistent with all the statements and suppositions supported by the uncertainty treatment method used.
5- The system must present good computability of the data involved. In uncertainty treatment, the values must be computable so that the systems can create inference rules and obtain conclusions. In the treatment of data with uncertain values, the system must enable the combination of qualitative evaluation with quantitative values of uncertainty.
In this chapter we will present and discuss the main methodological elements of uncertainty treatment using Paraconsistent Logic. The methods presented here utilize the concepts and fundamentals of Paraconsistent Logic, making a quantitative analysis through its representative lattice. The applications and results are obtained based on the fundamentals of Paraconsistent Annotated Logic with annotation of two values (PAL2v).
3.2 Uncertainty Treatment System for Decision Making Using PAL2v

In a Paraconsistent Analysis System for decision making the inputs μ and λ are real values contained in the closed interval between 0 and 1. These two values come from two or more information sources, which search for evidence favorable or contrary to the same proposition P. Since they originate from different sources, these values may be equal, thus representing consistency, or different, thus representing a contradiction. As they vary between 0.0 and 1.0 and come from different sources, the Degrees of Evidence produce the Degree of Certainty DC and the Degree of Contradiction Dct. These, with values between +1 and -1, qualify, respectively, how much certainty the two evidence values offer and how much inconsistency there is between them. The equations and the analysis in the PAL2v representative lattice provide these Degrees of Certainty and Contradiction; their values express how close to or distant from the vertices of the lattice the analysis point is. A typical system for uncertainty treatment with Paraconsistent Annotated Logic with annotation of two values (PAL2v) may be seen in Fig. 3.1.
Figure 3.1 Typical system for Paraconsistent Analysis of two inputs.
3.2.1 Study on the Representation of the PAL2v Lattice for Uncertainty Treatment

In the application of PAL2v, the Degrees of Evidence that feed the uncertainty treatment systems are valued information originated from several sources, or from different experts. Let us consider two information sources that send evidence signals concerning a certain proposition P1 to an analysis and decision-making system. These signals are defined as:

μ1 - signal sent by information source 1;
μ2 - signal sent by information source 2.

For the paraconsistent analysis one joins these two pieces of information, considering them as the annotation of a propositional formula. This transforms the information from both sources, together with proposition P, into a paraconsistent signal of the kind P(μ, λ), where:

μ = μ1, the Favorable Degree of Evidence to proposition P;
λ = Unfavorable Degree of Evidence to proposition P, calculated as the complement of the Favorable Degree of Evidence from information source 2:

λ = 1 - μ2

--------------------------
Example 3.1 Suppose that information source 1 presents a signal to the system valued at 0.9 and information source 2 presents a signal valued at 0.4. Represent the annotation and the paraconsistent signal, considering information source 2 as unfavorable evidence.
Resolution
Consider μ1 = 0.9 and μ2 = 0.4
The complement of μ2 is calculated to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.4 = 0.6
The annotation (μ, λ) is represented as: (0.9, 0.6)
Therefore, the paraconsistent signal is represented as: P(0.9, 0.6)
--------------------------
Example 3.2 For the paraconsistent signal of the previous example, determine the values of the Degree of Certainty and the Degree of Contradiction.
Resolution
From equation (2.2) we determine the Degree of Certainty DC:
DC = 0.9 - 0.6    Therefore: DC = 0.3
From equation (2.3) we determine the Degree of Contradiction Dct:
Dct = 0.9 + 0.6 - 1    Therefore: Dct = 0.5
--------------------------
Example 3.3 Suppose that information source 1 presents a signal to the system valued at 0.5 and information source 2 presents a signal valued at 0.5.
a) Represent the annotation and the paraconsistent signal, considering source 2 as unfavorable evidence.
b) For the obtained paraconsistent signal, determine the values of the Degrees of Certainty and of Contradiction.
Resolution
a) Consider μ1 = 0.5 and μ2 = 0.5
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.5 = 0.5
The annotation (μ, λ) is represented as: (0.5, 0.5)
The paraconsistent signal is represented as: P(0.5, 0.5)
b) From equation (2.2) we determine the Degree of Certainty DC:
DC = 0.5 - 0.5    Therefore: DC = 0.0
From equation (2.3) we determine the Degree of Contradiction Dct:
Dct = 0.5 + 0.5 - 1    Therefore: Dct = 0.0
--------------------------
We verify from the results obtained in Example 3.3 that Degrees of Evidence in the USCP with values equal to 0.5 always result in a null Degree of Certainty (DC = 0) and a null Degree of Contradiction (Dct = 0) in the representation of the PAL2v lattice. When the paraconsistent analysis results in a null Degree of Certainty, it means that the information sources do not have sufficient evidence to support an affirmation or a refutation of the proposition. Because of this, each information source sends indefinite values, at 0.5, to the analysis system. This means that the experts, not having enough information, assign 50% of Favorable Evidence and
50% of Unfavorable Evidence, leading the system to a null Degree of Certainty with respect to the analyzed proposition.
Figure 3.2 Null Degree of Certainty and of Contradiction obtained from the analysis of indefinite Degrees of Evidence with values 0.5.
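As a concrete illustration of these calculations, the following is a minimal sketch in Python (ours, not part of the original text; the function names are hypothetical). It builds the annotation from the two source signals and computes the Degree of Certainty and the Degree of Contradiction according to equations (2.2) and (2.3):

```python
def pal2v_annotation(mu1: float, mu2: float) -> tuple[float, float]:
    """Build the annotation (mu, lambda): source 2 is taken as
    unfavorable evidence, so lambda is the complement of mu2."""
    return mu1, 1.0 - mu2

def degrees(mu: float, lam: float) -> tuple[float, float]:
    """Degree of Certainty (eq. 2.2) and Degree of Contradiction (eq. 2.3)."""
    dc = mu - lam           # DC = mu - lambda
    dct = (mu + lam) - 1.0  # Dct = (mu + lambda) - 1
    return dc, dct

# Examples 3.1 and 3.2: mu1 = 0.9, mu2 = 0.4 -> P(0.9, 0.6), DC = 0.3, Dct = 0.5
mu, lam = pal2v_annotation(0.9, 0.4)
print(degrees(mu, lam))  # (0.3, 0.5), up to floating-point rounding
```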
Careful verification shows that when the paraconsistent analysis presents a low Degree of Certainty as a result, this may mean one of two situations:
1- The sources are bringing low-intensity evidence to the analysis to affirm or negate the proposition. In this case, the sources or experts that supply the evidence lack information to feed the analysis system, and for this reason they present information signals with indefinite values, close to 0.5. Thus an information source (expert, sensor, etc.), when it is not active, presents the value μ = 0.5, and the interpolation point between the values of DC and Dct in the lattice lies close to the origin of the two axes. This condition is seen in figure 3.3.
Figure 3.3 Representation of low intensity of the evidences with a low value of Degree of Contradiction.
2- The sources are bringing high-intensity inconsistent evidence to the analysis. In this case the information sources are sending values strong enough for the system to evaluate, affirming or negating the proposition; however, despite all the information available for the analysis, the contradiction between the sources is so high that it limits the value of the Degree of Certainty. In this case the interpolation point between the values of DC and Dct lies on the boundary line of the lattice. From this point the Degree of Certainty DC can only advance toward one of the maximum certainty values, True or False, if there is a reduction in the value of the Degree of Contradiction. This condition, in which the Degree of Certainty is limited by a positive Degree of Contradiction indicating the existence of inconsistency, is seen in figure 3.4. The same condition may happen for a Degree of Certainty limited by a negative Degree of Contradiction, which indicates the existence of indetermination in the analysis.
Figure 3.4 Maximum Degree of Certainty limited by positive Degree of Contradiction.
3.2.2 The Interval of Certainty φ

In the Paraconsistent Analysis System for decision making all the information, whether incomplete, indefinite, or inconsistent, is considered in the treatment of uncertainties. When the Degree of Certainty is low due to insufficient information, and not because of a high Degree of Contradiction, the system is in a condition to receive more information through the Degrees of Evidence. In this case the evidence must be further processed until the Degree of Certainty reaches a maximum value appropriate for a decision to be made. We know that after a paraconsistent analysis two values are found: one corresponds to the Degree of Certainty DC and the other to the Degree of Contradiction Dct. If the value of the Degree of Contradiction allows a variation in the Degree of Certainty, we can affirm that on the Degree of Certainty axis there is a Degree of Certainty of maximum value toward the extreme condition of the vertex that represents the True logical state, and another of maximum value toward the extreme condition that represents the False logical state.
The maximum value that tends to the extreme condition of the True vertex is named the Maximum Degree of Certainty tending to True, DCmax t. The maximum value that tends to the extreme condition of the False vertex is named the Maximum Degree of Certainty tending to False, DCmax F. We may represent an interval of certainty values within which the Degree of Certainty may vary without being limited by the Degree of Contradiction. This interval, represented by φ, may be calculated by:

φ = 1 - |Dct|    (3.1)

This condition may be visualized in the representation of the lattice in figure 3.5.
Figure 3.5 Representation of the Maximum Values of the Degrees of Certainty with constant Degree of Contradiction in the Lattice.
The Maximum Degree of Certainty tending to True is the positive value of the Interval of Certainty itself, therefore:

DCmax t = +φ    (3.2)

The Maximum Degree of Certainty tending to False is the negative value of the Interval of Certainty itself, therefore:

DCmax F = -φ    (3.3)

We verify that the value of the Interval of Certainty does not change when the Degree of Certainty is limited by a negative Degree of Contradiction, which indicates an indetermination.
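As a quick numeric check (our sketch), equations (3.1) to (3.3) translate directly into code:

```python
def certainty_interval(dct: float) -> float:
    """Interval of Certainty, eq. (3.1): phi = 1 - |Dct|."""
    return 1.0 - abs(dct)

dct = 0.4
phi = certainty_interval(dct)
dc_max_true, dc_max_false = +phi, -phi  # eqs. (3.2) and (3.3)
print(phi, dc_max_true, dc_max_false)   # 0.6 0.6 -0.6
```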
3.2.3 Representation of the Resultant Degree of Certainty

From these observations we can define a representation of a Resultant Degree of Certainty by applying the concepts of PAL2v. The Resultant Degree of Certainty in the analysis of these pieces of information, represented by their Degrees of Evidence, indicates how much the evidence may still be refined to increase the certainty with respect to proposition P. The Degree of Certainty is then represented by its calculated value DC accompanied by another value, the Interval of Certainty, whose symbol is φ. Therefore, the signal resulting from the analysis is represented as:
DCr = [DC; φ]    (3.4)
where:
DCr = Resultant Degree of Certainty;
DC = Calculated Degree of Certainty, obtained through DC = μ - λ;
φ = Interval of Certainty, obtained through φ = 1 - |Dct|.

The Calculated Degree of Certainty DC informs, after the analysis of the Degrees of Evidence presented as input values, how much certainty the system attributed to the proposition. The Interval of Certainty φ indicates how far new evidence can still vary the certainty about the proposition, considering the inconsistency level among the information signals presented as input values. The Interval of Certainty thus bounds the maximum values available to affirm or to refute the proposition: its value informs what maximum negative Degree of Certainty can be obtained by reducing the affirmation (favorable evidence) and increasing the negation (unfavorable evidence), bringing the analysis closer to the state considered False; in the same way, it indicates what maximum positive Degree of Certainty can be obtained by increasing the affirmation and reducing the negation, bringing it closer to the state True. In these cases the variations indicated by the Interval of Certainty are allowed without changes in the value of the Degree of Contradiction, which remains constant.

In the representation of the Interval of Certainty φ, a positive (+) or negative (-) sign is added to its symbol. This indicates whether its absolute value originated from a positive Degree of Contradiction, tending to Inconsistent, or from a negative Degree of Contradiction, tending to Indeterminate. Hence, the output result after the paraconsistent analysis is represented as:

DCr = [DC; φ(±)]    (3.5)
where:
DCr = Resultant Degree of Certainty;
DC = Calculated Degree of Certainty, obtained through DC = μ - λ;
φ = Signaled Interval of Certainty, obtained through φ = 1 - |Dct|, with:
φ = φ(+) if Dct > 0
φ = φ(-) if Dct < 0

--------------------------
Example 3.4 Suppose that a Paraconsistent Analysis System is receiving information from two sources with the values:
Information source 1: μ1 = 0.85
Information source 2: μ2 = 0.45
a) Determine the Degree of Certainty DC and the Degree of Contradiction Dct of the analysis.
b) Determine the Interval of Certainty φ of the analysis.
c) Determine the value of the maximum Degree of Certainty tending to True, DCmax t, and the value of the maximum Degree of Certainty tending to False, DCmax F.
d) Present the result of this analysis through the Resultant Degree of Certainty and its signaled Interval of Certainty φ(±).
Resolution:
a) Consider μ1 = 0.85 and μ2 = 0.45
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.45 = 0.55
We represent the annotation (μ, λ) as: (0.85, 0.55)
The paraconsistent signal is represented as: P(0.85, 0.55)
From equation (2.2) we determine the Degree of Certainty: DC = 0.85 - 0.55 → DC = 0.3
From equation (2.3) we determine the Degree of Contradiction: Dct = 0.85 + 0.55 - 1 → Dct = 0.4
b) From equation (3.1) we calculate the Interval of Certainty: φ = 1 - |0.4| → φ = 0.6
c) According to equations (3.2) and (3.3): DCmax t = 0.6 and DCmax F = -0.6
d) According to (3.5), since the Degree of Contradiction is positive (Dct > 0), we have φ = φ(+). Therefore:
DCr = [0.3; 0.6(+)]
--------------------------
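Example 3.4 can be reproduced in a few lines; the sketch below (ours, with hypothetical names) returns the Calculated Degree of Certainty, the Interval of Certainty, and the sign that accompanies it, following equation (3.5):

```python
def resultant_certainty(mu1: float, mu2: float):
    """DCr of eq. (3.5): DC with the signaled Interval of Certainty.
    Source 2 is used as unfavorable evidence (lambda = 1 - mu2)."""
    lam = 1.0 - mu2
    dc = mu1 - lam                  # eq. (2.2)
    dct = (mu1 + lam) - 1.0         # eq. (2.3)
    phi = 1.0 - abs(dct)            # eq. (3.1)
    sign = '+' if dct > 0 else '-' if dct < 0 else '0'
    return dc, phi, sign

# Example 3.4: mu1 = 0.85, mu2 = 0.45 -> DCr = [0.3; 0.6(+)]
print(resultant_certainty(0.85, 0.45))
```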
3.2.4 The Estimated Degree of Certainty

We have seen that the result of an analysis with PAL2v presents the Calculated Degree of Certainty DC and the Interval of Certainty φ, which are obtained from the valued signals that represent the evidence about what is being analyzed. With these two values it is possible to make decisions, because after the paraconsistent analysis is executed, the value of the Interval of Certainty φ informs the system how much the evidence should be reduced or increased to obtain maximum certainty. From the value of the Interval of Certainty one can estimate which evidence signal, favorable or unfavorable, must be varied, and by how much, to achieve the desired Degree of Certainty. We know that the Degree of Contradiction is obtained through (2.3), Dct = (μ + λ) - 1, and the Interval of Certainty through (3.1), φ = 1 - |Dct|. From these equations we verify that when there is a lot of inconsistency among the information, the value of the Degree of Contradiction is high and the Interval of Certainty decreases. For a Degree of Contradiction equal to 1 the value of φ is null, and in these conditions the value of the Degree of Certainty is zero. This means that the contradiction between the pieces of evidence is so high that nothing can be affirmed or refuted about the analyzed proposition. With these considerations it is possible to obtain the value of the Estimated Degree of Certainty from an analysis of the lattice, as follows:
Consider a paraconsistent analysis in which the calculation of the Degrees of Certainty DC and of Contradiction Dct results in an internal interpolation point (DC, Dct) of the lattice, according to figure 3.6. We draw a line parallel to the Certainty axis that goes through the interpolation point (DC, Dct); it crosses the vertical axis at the value Dct and meets point B on the boundary line of the figure. Next, the segment r, which goes from the vertex that represents the maximum value of certainty to point B, is transferred to the internal interpolation point (DC, Dct), meeting a point on the Certainty axis. We call this value the Estimated Degree of Certainty, DCest.
Fig 3.6 Representation of the Estimated Degree of Certainty in the Lattice for a condition of null Degree of Contradiction.
By overlapping the axes of the Favorable (μ) and Unfavorable (λ) Degrees of Evidence on the lattice, we verify in figure 3.6 that the point indicated as DCest is the Degree of Certainty value obtained with a null Degree of Contradiction while maintaining the same value of the Favorable Degree of Evidence μ. In figure 3.7, the distance along line r between point B and the maximum certainty vertex t of the lattice is calculated by:

d² = (1 - φ)² + Dct²
d = √[(1 - φ)² + Dct²]    (3.6)

In figure 3.7 we verify that the Degree of Certainty added (DCAdd) by the variation of the unfavorable evidence is calculated by:

DCAdd = √(d² - Dct²)

so that

(DCAdd)² = d² - Dct² = (1 - φ)² + Dct² - Dct²

This results in:

DCAdd = 1 - φ    (3.7)
Figure 3.7 Representation of the values of Evidence axes in the PAL2v Lattice with a variation of λ to obtain the Estimated Degree of Certainty.
The value of the Estimated Degree of Certainty is found by DCest = DC + DCAdd. Therefore:

DCest = DC + (1 - φ)    (3.8)
We verify in this case that the Calculated Degree of Certainty DC is positive. Since the Degree of Contradiction Dct is also positive, tending to Inconsistent, the Interval of Certainty is represented by φ(+). As seen in the figure of the lattice, when DC and Dct are both positive, the variation that takes the Calculated Degree of Certainty DC to the Estimated Degree of Certainty DCest is done through the Unfavorable Degree of Evidence:

∆λ = λi - λf

From the figure of the lattice we have λf = λi - d, where the value of distance d is found through equation (3.6): d = √[(1 - φ)² + Dct²]. Since in this condition 1 - φ = Dct, we verify similarly that:

d = √(Dct² + Dct²)
d = Dct√2

In the analysis done in the previous chapter, following the interpretation of the Favorable and Unfavorable Degrees of Evidence in the Unitary Square of the Cartesian Plane (USCP) for the PAL2v lattice, a rescaling was carried out by multiplying their values by √2. To compensate for this rescaling, the value of distance d relative to these values must be divided by √2, resulting in d = Dct. In this way, the final value of the Unfavorable Degree of Evidence is calculated by:
λf = λi - Dct    (3.9)
This value may also be obtained from equation (2.2). In this case we have DCest = μi - λf, so through the value of the Estimated Degree of Certainty we may obtain the final value of the Unfavorable Degree of Evidence by:

λf = μi - DCest    (3.10)

--------------------------
Example 3.5 Suppose that two information sources send the following values:
μ1 = 0.75 Degree of Evidence supplied by Source 1
μ2 = 0.35 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the above item.
d) Calculate the value of the Degree of Evidence that yields the Estimated Degree of Certainty.
Resolution:
a) Consider μ1 = 0.75 and μ2 = 0.35
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.35 = 0.65
We represent the annotation (μ, λ) as: (0.75, 0.65)
The paraconsistent signal is represented as: P(0.75, 0.65)
From equation (2.2) we determine the Degree of Certainty: DC = 0.75 - 0.65 → DC = 0.1
From equation (2.3) we determine the Degree of Contradiction: Dct = 0.75 + 0.65 - 1 → Dct = 0.4
b) In these conditions we have a positive Degree of Certainty DC and an also positive Degree of Contradiction Dct; from equation (3.1) we calculate the Interval of Certainty: φ = 1 - |0.4| → φ = 0.6
Through equation (3.8) we calculate the Estimated Degree of Certainty:
DCest = 0.1 + (1 - 0.6) → DCest = 0.5
c) Since the Degree of Contradiction is positive, the Interval of Certainty is signaled positively, φ(+). We verify in the representation of the lattice that we must reduce the Unfavorable Evidence value λ to obtain a reduction of the Degree of Contradiction.
d) From equation (3.9) we calculate the final value of the Unfavorable Degree of Evidence: λf = 0.65 - 0.4 → λf = 0.25
The same value is found through equation (3.10): λf = 0.75 - 0.5 → λf = 0.25
--------------------------
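A small sketch (ours) of this estimation for the case DC > 0 and Dct > 0, following equations (3.8) and (3.9):

```python
def estimate_certainty_pos(mu: float, lam: float):
    """For DC > 0 and Dct > 0: DCest = DC + (1 - phi), eq. (3.8),
    and the final unfavorable evidence lam_f = lam - Dct, eq. (3.9)."""
    dc = mu - lam
    dct = (mu + lam) - 1.0
    phi = 1.0 - abs(dct)
    dc_est = dc + (1.0 - phi)
    lam_f = lam - dct
    return dc_est, lam_f

# Example 3.5: P(0.75, 0.65) -> DCest = 0.5, lam_f = 0.25
print(estimate_certainty_pos(0.75, 0.65))
```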
When the Degree of Certainty DC has a positive sign and the Degree of Contradiction Dct has a negative sign, the internal interpolation point (DC, -Dct) is located to the right of the Contradiction axis and below the Certainty axis. This condition is presented in figure 3.8.
Figure 3.8 Representation of the Evidence Values axes in the PAL2v lattice to estimate values with positive DC and negative Dct.
The value of the Estimated Degree of Certainty is found by DCest = DC + DCAdd; therefore, by the same equation (3.8): DCest = DC + (1 - φ). In this case, where DC is positive and Dct is negative, tending to Indetermination, the Interval of Certainty is represented by φ(-). To reach a null Degree of Contradiction it is the Favorable Degree of Evidence μ that should be varied, therefore: ∆μ = μf - μi. From the figure we have μf = μi + d, so:

μf = μi + |Dct|    (3.11)
It may also be calculated from equation (2.2): DCest = μf - λi, whence, through the value of the Estimated Degree of Certainty, we obtain the final value of the Favorable Degree of Evidence:

μf = λi + DCest    (3.12)

--------------------------
Example 3.6 Suppose that two information sources send the following values:
μ1 = 0.8 Degree of Evidence supplied by Source 1
μ2 = 0.9 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the above item.
d) Calculate the value of the Degree of Evidence that yields the Estimated Degree of Certainty.
Resolution:
a) Consider μ1 = 0.8 and μ2 = 0.9
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.9 = 0.1
We represent the annotation (μ, λ) as: (0.8, 0.1)
The paraconsistent signal is represented as: P(0.8, 0.1)
From equation (2.2) we determine the Degree of Certainty: DC = 0.8 - 0.1 → DC = 0.7
From equation (2.3) we determine the Degree of Contradiction: Dct = 0.8 + 0.1 - 1 → Dct = -0.1
b) In these conditions we have a positive Degree of Certainty DC and a negative Degree of Contradiction Dct; from equation (3.1) we calculate the Interval of Certainty: φ = 1 - |0.1| → φ = 0.9
Through equation (3.8) we calculate the Estimated Degree of Certainty:
DCest = 0.7 + (1 - 0.9) → DCest = 0.8
c) Since the Degree of Contradiction is negative, the Interval of Certainty has a negative sign, φ(-). We verify in the representation of the lattice that, to obtain a reduction of the Degree of Contradiction, we must increase the value of the Favorable Evidence μ.
d) With Dct = -0.1 we have |Dct| = 0.1; therefore, through equation (3.11) we calculate the final value of the Favorable Degree of Evidence: μf = 0.8 + 0.1 → μf = 0.9
The same value is found through equation (3.12): μf = 0.1 + 0.8 → μf = 0.9
--------------------------

When the Degree of Certainty DC has a negative sign and the Degree of Contradiction Dct has a positive sign, the internal interpolation point (-DC, Dct) is located to the left of the Contradiction axis and above the Certainty axis. This condition is presented in figure 3.9. The value of the Degree of Certainty added (DCAdd) by the variation of the unfavorable evidence bears a negative sign:

DCAdd = -√(d² - Dct²)

or:

DCAdd = φ - 1    (3.13)

The value of the Estimated Degree of Certainty is found by DCest = DC + DCAdd. Therefore:

DCest = DC + (φ - 1)    (3.14)
Figure 3.9 Representation of the values of Evidence axes in the PAL2v lattice to estimate values with negative DC and positive Dct.
Since the Degree of Contradiction is positive, the Interval of Certainty is represented by φ(+). In this case, to reach a null Degree of Contradiction it is the Favorable Degree of Evidence μ that must be varied. Therefore: ∆μ = μf - μi. From the figure we have μf = μi - d, so:
μf = μi - Dct    (3.15)
This may also be calculated from equation (2.2): DCest = μf - λi. Through the value of the Estimated Degree of Certainty we obtain the final value of the Favorable Degree of Evidence:

μf = DCest + λi    (3.16)

--------------------------
Example 3.7 Suppose that two information sources send the following values:
μ1 = 0.4 Degree of Evidence supplied by Source 1
μ2 = 0.3 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that may be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the above item.
d) Calculate the value of the Degree of Evidence that yields the Estimated Degree of Certainty.
Resolution:
a) Consider μ1 = 0.4 and μ2 = 0.3
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.3 = 0.7
We represent the annotation (μ, λ) as: (0.4, 0.7)
The paraconsistent signal is represented as: P(0.4, 0.7)
From equation (2.2) we determine the Degree of Certainty DC:
DC = 0.4 - 0.7 → DC = -0.3
From equation (2.3) we determine the Degree of Contradiction Dct:
Dct = 0.4 + 0.7 - 1 → Dct = 0.1
b) In these conditions we have a negative Degree of Certainty DC and a positive Degree of Contradiction Dct; from equation (3.1) we calculate the Interval of Certainty: φ = 1 - |0.1| → φ = 0.9
Through equation (3.14) we calculate the Estimated Degree of Certainty:
DCest = -0.3 + (0.9 - 1) → DCest = -0.4
c) Since the Degree of Contradiction is positive, the Interval of Certainty bears a positive sign, φ(+). We verify in the representation of the lattice that we must reduce the value of the Favorable Evidence μ to obtain a reduction of the Degree of Contradiction, consistently with equation (3.15).
d) From equation (3.15) we calculate the final value of the Favorable Degree of Evidence: μf = 0.4 - 0.1 → μf = 0.3
It can also be calculated from equation (3.16): μf = -0.4 + 0.7 → μf = 0.3
--------------------------

When the Degree of Certainty DC and the Degree of Contradiction Dct both have negative signs, the internal interpolation point (-DC, -Dct) is located to the left of the Contradiction axis and below the Certainty axis. This condition is presented in figure 3.10.
Figure 3.10 Representation of the values of Evidence axes in the PAL2v lattice to estimate values with negative DC and negative Dct.
The value of the Degree of Certainty added (DCAdd) by the variation of the unfavorable evidence has a negative sign:

DCAdd = -√(d² - Dct²)

or, by equation (3.13): DCAdd = φ - 1. The value of the Estimated Degree of Certainty is found by:
DCest = DC + DCAdd. Therefore, from equation (3.14): DCest = DC + (φ - 1). Since the value of the Degree of Contradiction is negative, the Interval of Certainty is represented by φ(-). In this case, to reach a null Degree of Contradiction it is the Unfavorable Degree of Evidence λ that must be varied. Therefore: ∆λ = λf - λi. From the figure we have λf = λi + d, so:

λf = λi + |Dct|    (3.17)

It can also be calculated from equation (2.2): DCest = μi - λf. Through the value of the Estimated Degree of Certainty we may obtain the final value of the Unfavorable Degree of Evidence by:

λf = μi - DCest    (3.18)

--------------------------
Example 3.8 Suppose that two information sources send the following values:
μ1 = 0.25 Degree of Evidence supplied by Source 1
μ2 = 0.35 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the above item.
d) Calculate the value of the Degree of Evidence that yields the Estimated Degree of Certainty DCest.
Resolution:
a) Consider μ1 = 0.25 and μ2 = 0.35
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.35 = 0.65
We represent the annotation (μ, λ) as: (0.25, 0.65)
The paraconsistent signal is represented as: P(0.25, 0.65)
From equation (2.2) we determine the Degree of Certainty: DC = 0.25 - 0.65 → DC = -0.4
From equation (2.3) we determine the Degree of Contradiction: Dct = 0.25 + 0.65 - 1 → Dct = -0.1
b) In these conditions we have a negative Degree of Certainty DC and an also negative Degree of Contradiction Dct; from equation (3.1) we calculate the Interval of Certainty: φ = 1 - |0.1| → φ = 0.9
Through equation (3.14) we calculate the Estimated Degree of Certainty:
DCest = -0.4 + (0.9 - 1) → DCest = -0.5
c) Since the Degree of Contradiction is negative, the Interval of Certainty bears a negative sign, φ(-). We verify from the representation of the lattice that we must increase the Unfavorable Evidence value λ to obtain a reduction of the Degree of Contradiction.
d) From equation (3.17) we calculate the final value of the Unfavorable Degree of Evidence: λf = 0.65 + 0.1 → λf = 0.75
It can also be calculated from equation (3.18): λf = 0.25 - (-0.5) → λf = 0.75
--------------------------
3.2.5 Input Data Variations in Relation to the Estimated Degree of Certainty

The analysis of the representative figures of the PAL2v lattice shows that in uncertainty treatment systems it is possible to find the estimated values of the Degrees of Certainty for information with null contradiction. By approximating their values to the extreme states of certainty, one may establish the variations and thus estimate the necessary increase or reduction of the Degrees of Evidence of the information that reaches the system. For a decision to be made there must be conditions to distinguish which kind of evidence, favorable or unfavorable, must be varied to reduce the contradiction and thus obtain the Estimated Degree of Certainty. Based on the previous studies we may then establish some criteria to be used in the algorithm. To find the estimated value of the Degree of Certainty that yields a null Degree of Contradiction we have:

For DC > 0, calculate: DCest = DC + (1 - φ)
For DC < 0, calculate: DCest = DC + (φ - 1)
To obtain the final values of the input evidences while maintaining one of them constant, the system must act as follows:

If DC > 0 and φ = φ(+): reduce λ, maintain μ
If DC < 0 and φ = φ(+): reduce μ, maintain λ
If DC > 0 and φ = φ(-): increase μ, maintain λ
If DC < 0 and φ = φ(-): increase λ, maintain μ
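These criteria can be gathered into one routine; the sketch below (ours) computes the Estimated Degree of Certainty by equations (3.8) and (3.14) and reports which evidence to vary:

```python
def estimate_and_action(mu: float, lam: float):
    """Section 3.2.5 criteria: DCest for null contradiction plus the
    evidence variation that reduces the contradiction."""
    dc = mu - lam
    dct = (mu + lam) - 1.0
    phi = 1.0 - abs(dct)
    # eq. (3.8) for DC > 0, eq. (3.14) for DC < 0
    dc_est = dc + (1.0 - phi) if dc > 0 else dc + (phi - 1.0)
    if dct > 0:  # phi(+)
        action = 'reduce lambda, maintain mu' if dc > 0 else 'reduce mu, maintain lambda'
    else:        # phi(-)
        action = 'increase mu, maintain lambda' if dc > 0 else 'increase lambda, maintain mu'
    return dc_est, action

# Example 3.7: P(0.4, 0.7) -> (-0.4, 'reduce mu, maintain lambda')
print(estimate_and_action(0.4, 0.7))
```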
3.2.6 The Real Degree of Certainty

A decision system able to analyze data originated from uncertain knowledge has greater robustness when, at the end of the analysis, it presents a result that represents the value of pure certainty, that is, certainty not contaminated by uncertainty effects. Therefore, the value attributed to the effect of the inconsistencies originated from conflicting information must be subtracted from the final value. Thus an analysis of the representative lattice is carried out to obtain the value of the Real Degree of Certainty DCR after the treatment of information originated from an uncertain knowledge database. The value of the Real Degree of Certainty represents the Degree of Certainty free from the effect of contradiction; to obtain it, the value relative to the effect of the inconsistencies in the information is subtracted in the analysis process. Consider a paraconsistent analysis in which the calculation of the Degree of Certainty DC through (2.2) and of the Degree of Contradiction Dct through (2.3) results in positive values, interpolated in the lattice at an internal point (DC, Dct), according to figure 3.11.
Figure 3.11. Interpolation point (DC, Dct) and distance d.
Distance d of the line segment that goes from the point of maximum Degree of Certainty t, represented at the right-hand vertex of the lattice, to the interpolation point (DC, Dct) is calculated through:

d = √[(1 - |DC|)² + Dct²]    (3.19)

According to figure 3.12, by plotting distance d on the axis of certainty values we obtain a point whose value is considered the value of the Real Degree of Certainty DCR.
Figure 3.12. Determination of the Real Degree of Certainty DCR Value in the PAL2v lattice.
If the Calculated Degree of Certainty DC results in a negative value, distance d will be obtained from the Certainty point F, represented on the left-hand side vertex of
the lattice up to interpolation point (-DC, Dct). The interpolation point in these conditions is represented in figure 3.13.
Figure 3.13. Determination of the Real Degree of Certainty DCR in the PAL2v lattice when DC is negative and when Dct is positive.
Negative values of Dct do not change the way we obtain DCR. Therefore, the value of the Real Degree of Certainty DCR is obtained from the determination of distance d according to the conditions shown below:

For DC > 0: DCR = 1 - d, or:

DCR = 1 - √[(1 - |DC|)² + Dct²]    (3.20)

For DC < 0: DCR = d - 1, or:

DCR = √[(1 - |DC|)² + Dct²] - 1    (3.21)
--------------------------
Example 3.9 Consider that two information sources send the following values:
μ1 = 0.86 Degree of Evidence supplied by Source 1
μ2 = 0.72 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the Real Degree of Certainty DCR.
Resolution:
a) Consider μ1 = 0.86 and μ2 = 0.72
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.72 = 0.28
We represent the annotation (μ, λ) as: (0.86, 0.28)
The paraconsistent signal is represented as: P(0.86, 0.28)
From equation (2.2) we determine the Degree of Certainty: DC = 0.86 - 0.28 → DC = 0.58
From equation (2.3) we determine the Degree of Contradiction: Dct = 0.86 + 0.28 - 1 → Dct = 0.14
b) We calculate distance d from equation (3.19):
d = √[(1 - |0.58|)² + 0.14²] → d = 0.4427188
Since the Degree of Certainty DC is positive, we determine the Real Degree of Certainty from equation (3.20):
DCR = 1 - 0.4427188 → DCR = 0.5572812
--------------------------
Example 3.10 Suppose that two information sources send the following values:
μ1 = 0.18 Degree of Evidence supplied by Source 1
μ2 = 0.36 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the Real Degree of Certainty DCR.
Resolution:
a) Consider μ1 = 0.18 and μ2 = 0.36
We calculate the complement of μ2 to obtain the Unfavorable Degree of Evidence: λ = 1 - 0.36 = 0.64
We represent the annotation (μ, λ) as: (0.18, 0.64)
The paraconsistent signal is represented as: P(0.18, 0.64)
From equation (2.2) we determine the Degree of Certainty: DC = 0.18 - 0.64 → DC = -0.46
From equation (2.3) we determine the Degree of Contradiction: Dct = 0.18 + 0.64 - 1 → Dct = -0.18
b) We calculate distance d from equation (3.19):
d = √[(1 - |0.46|)² + 0.18²] → d = 0.56920997
Since the Degree of Certainty DC is negative, we determine the Real Degree of Certainty from equation (3.21):
DCR = 0.56920997 - 1 → DCR = -0.43079003
--------------------------
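A sketch (ours) of equations (3.19) to (3.21), checked against Examples 3.9 and 3.10:

```python
import math

def real_certainty(mu: float, lam: float) -> float:
    """Real Degree of Certainty DCR: the Calculated Degree of Certainty
    with the effect of contradiction subtracted (eqs. 3.19-3.21)."""
    dc = mu - lam
    dct = (mu + lam) - 1.0
    d = math.sqrt((1.0 - abs(dc)) ** 2 + dct ** 2)  # eq. (3.19)
    # eq. (3.20) for DC > 0; eq. (3.21) otherwise (DC = 0 falls in the latter)
    return 1.0 - d if dc > 0 else d - 1.0

# Example 3.9: P(0.86, 0.28) -> ~0.5572812
# Example 3.10: P(0.18, 0.64) -> ~-0.43079003
print(real_certainty(0.86, 0.28), real_certainty(0.18, 0.64))
```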
3.2.7 The Influence of Contradiction on the Real Degree of Certainty

In a paraconsistent analysis system, as the contradiction between the input evidence values strengthens, there is a reduction of the Real Degree of Certainty DCR.
The Contradiction is represented by the value of the Degree of Contradiction Dct. Therefore, for a high Degree of Contradiction value the Real Degree of Certainty will get close to zero.
3.2.7.1 Test

The action of the Degree of Contradiction Dct on the value of the Real Degree of Certainty DCR may be seen in the lattice of figure 3.14. In this test a constant value of the Calculated Degree of Certainty DC is maintained while the value of the Degree of Contradiction is increased gradually. We verify that there is a gradual reduction of the Real Degree of Certainty.
Test values (from figure 3.14), with DC held constant at 0.25:

Dct:  0.0    0.15     0.25    0.35     0.50     0.65      0.75
DCR:  0.25   0.2351   0.209   0.1723   0.0986   0.007528  0.0
∆DC:  0.0    0.01485  0.0456  0.07764  0.15138  0.242471  0.25
Figure 3.14. The influence of the Degree of Contradiction over the value of Real Degree of Certainty with constant Degree of Certainty of 0.25.
In the test of figure 3.15 a constant value of the Degree of Contradiction Dct is maintained while the value of the Degree of Certainty DC is increased gradually. We verify that the effect of contradiction on the Real Degree of Certainty DCR increases as the value of DC gets closer to the maximum certainty value at the vertex of the lattice. The limit of the influence of contradiction is the Interval of Certainty φ, because beyond this value further approximation only happens if the contradiction is reduced.
Test values (from figure 3.15), with Dct held constant at 0.25:

DC:   0.15   0.25   0.35    0.50   0.65    0.75
DCR:  0.114  0.209  0.305   0.441  0.5698  0.64644
∆DC:  0.036  0.040  0.0464  0.059  0.0801  0.1355
Figure 3.15. Influence of the Degree of Contradiction over the value of Real Degree of Certainty with a fixed Degree of Contradiction of 0.25.
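The two tests are easy to reproduce; the sweep below (our sketch) regenerates the table of figure 3.14 up to rounding. The value Dct = 0.75 is omitted because there d ≥ 1 and the result is invalidated, as discussed in section 3.2.7.2:

```python
import math

def real_certainty(dc: float, dct: float) -> float:
    """DCR directly from DC and Dct, eqs. (3.19)-(3.21)."""
    d = math.sqrt((1.0 - abs(dc)) ** 2 + dct ** 2)
    return 1.0 - d if dc > 0 else d - 1.0

# Test of figure 3.14: DC fixed at 0.25, Dct swept upward
for dct in (0.0, 0.15, 0.25, 0.35, 0.50, 0.65):
    dcr = real_certainty(0.25, dct)
    print(f"Dct={dct:.2f}  DCR={dcr:.4f}  dDC={0.25 - dcr:.5f}")
```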
3.2.7.2 Invalidation of the Real Degree of Certainty

According to equation (3.1), φ = 1 - |Dct|, when the value of the Degree of Contradiction Dct is 0.75 the value of the Interval of Certainty φ is 0.25. By the definition of the Interval of Certainty, in this condition the value of φ informs that the maximum Calculated Degree of Certainty DC is ±0.25, so the interpolation point between the Degrees of Certainty and of Contradiction is represented on the boundary line of the lattice. In the lattice of figure 3.16 the indicated interpolation point (DC, Dct) is marked by the letter B. Using equation (3.19) with these values of DC and Dct, we calculate the distance to the extreme right-hand vertex t:

d = √[(1 - |0.25|)² + 0.75²] = √1.125 ≅ 1.06, therefore d ≥ 1

Since the Degree of Certainty is positive, the Real Degree of Certainty calculated according to equation (3.20), DCR = 1 - d, would be less than or equal to zero. We verify that under these conditions the projection of distance d on the axis of certainty values results in a null Real Degree of Certainty.
Figure 3.16. Determination of the Real Degree of Certainty DCR in the PAL2v lattice when the Degree of Contradiction Dct is 0.75.
In the PAL2v representative figure, whether the Degree of Contradiction has a positive or a negative value of this magnitude, the resulting Real Degree of Certainty is null. This may be seen in figure 3.17, where the Degree of Certainty is positive.
Figure 3.17. Null Resultant Real Degree of Certainty DCR in the PAL2v lattice where Dct has a value of 0.75, positive or negative, with DC positive.
As verified in the previous figure, also for a negative Degree of Certainty, if the absolute value of the Degree of Contradiction reaches 0.75 the Real Degree of Certainty is null. This may be seen in figure 3.18.
Figure 3.18. Null Resultant Real Degree of Certainty DCR in the PAL2v lattice when Dct has a value of 0.75, positive or negative, with DC negative.
According to what is seen in the previous figures, in any quadrant of the lattice, when the absolute value of the Degree of Contradiction Dct is equal to 0.75 the Real Degree of Certainty DCR is zero. Therefore we may conclude that: "For a value of the Interval of Certainty φ equal to or smaller than 0.25, the Resultant Degree of Certainty DCR in the paraconsistent analysis will always be null."

3.2.8 Representation of the Real Resultant Interval of Certainty

After the Real Degree of Certainty is determined, the answer of a paraconsistent analysis should present a new value of the Interval of Certainty. Since the Real Degree of Certainty DCR is the value of the Calculated Degree of Certainty after the effect of contradiction has been subtracted, the representation of φ should be 1. However, in the representation the same value of φ is maintained, with its sign, so that the system can recover both the value of the Calculated Degree of Certainty and the value of the Degree of Contradiction. Thus the algorithm of the analysis system has enough data to establish which evidence may be varied, in case it is necessary. Therefore, after receiving its input evidence values, an uncertainty-treatment paraconsistent system presents the output signal:
DCrr = [DCR; φ(±)]

where:
DCrr = Resultant Real Degree of Certainty;
DCR = Real Degree of Certainty, calculated through equations (3.20) and (3.21):

DCR = 1 - √[(1 - |DC|)² + Dct²]    if DC > 0
DCR = √[(1 - |DC|)² + Dct²] - 1    if DC < 0

φ(±) = Signaled Interval of Certainty, obtained by φ = 1 - |Dct|, with:
φ = φ(+) if Dct > 0
φ = φ(-) if Dct < 0

3.2.9 Recovering the Values of the Degrees of Certainty and Contradiction

As seen before, a paraconsistent analysis system presents at its output the value of the Real Degree of Certainty DCR accompanied by the Signaled Interval of Certainty φ(±). From these two values the Calculated Degree of Certainty DC and the Degree of Contradiction Dct may be recovered, and it can also be determined which Degree of Evidence must be varied for the contradiction to be reduced. The procedure for recovering these values is as follows. Consider that a paraconsistent analysis system has input Degrees of Evidence μ and λ and presents as output a Resultant Real Degree of Certainty represented by the values:

DCR = Real Degree of Certainty
φ(±) = Signaled Interval of Certainty
Figure 3.19 Determination of the Resultant Real Degree of Certainty DCR in the PAL2v lattice.
Initially, from equation (3.1), we use the value of the Signaled Interval of Certainty to recover the value of the Degree of Contradiction:

Dct = 1 - φ(±)    (3.22)
On the Contradiction axis of the PAL2v representative lattice, the recovered Degree of Contradiction follows the sign of the Interval of Certainty. If the sign of φ is positive, φ(+), the Degree of Contradiction lies above the Certainty axis, so the values of the interval are positive; with Dct > 0 the values indicate a tendency to Inconsistency. If the sign of φ is negative, φ(-), the Degree of Contradiction lies below the Certainty axis, so the values are negative; with Dct < 0 the values indicate a tendency to Indetermination. With the value of the recovered Degree of Contradiction Dct and the value of the Real Degree of Certainty supplied by the system output, we can calculate distance d by:

d = √[Dct² + (1 - |DC|)²]    (3.23)
From equation (3.20), for positive Degrees of Certainty we have DCR = 1 - d, from which distance d may be obtained through:

d = 1 - DCR    (3.24)

Making equation (3.23) equal to (3.24):

√[Dct² + (1 - DC)²] = 1 - DCR
Dct² + (1 - DC)² = (1 - DCR)²

In this way, the value of the Degree of Certainty DC is recovered by:

DC = 1 - √[(1 - DCR)² - Dct²]    if DCR > 0    (3.25)

From equation (3.21), for negative Degrees of Certainty we have DCR = d - 1, from which distance d may be obtained through:

d = 1 + DCR    (3.26)
Making equation (3.23) equal to (3.26), noting that here |DC| = -DC since DC is negative:

√[Dct² + (1 + DC)²] = 1 + DCR
Dct² + (1 + DC)² = (DCR + 1)²

In this way, the value of the Degree of Certainty DC is recovered by:

DC = √[(DCR + 1)² - Dct²] - 1    if DCR < 0    (3.27)

--------------------------
Example 3.11 A paraconsistent analysis system presents an output signal of Resultant Real Degree of Certainty represented by the following values:
DCR = Real Degree of Certainty equal to 0.6
φ(±) = Signaled Interval of Certainty equal to 0.8, with positive sign.
Determine the Degree of Certainty DC and the Degree of Contradiction Dct of the system that supplies such output values.
Resolution
Equation (3.22) is used to determine the Degree of Contradiction: Dct = 1 - 0.8 → Dct = 0.2
Since the value of the Real Degree of Certainty is positive, equation (3.25) is used to determine the Degree of Certainty:
DC = 1 - √[(1 - 0.6)² - 0.2²]
DC = 1 - 0.346410161514
DC = 0.6535898384
--------------------------
Example 3.12 A paraconsistent analysis system presents an output signal of Resultant Real Degree of Certainty represented by the values:
DCR = Real Degree of Certainty equal to -0.68
φ(±) = Signaled Interval of Certainty equal to 0.82, with positive sign.
Determine the Degree of Certainty DC and the Degree of Contradiction Dct of the system that supplies such output values.
Resolution
Equation (3.22) is used to determine the Degree of Contradiction: Dct = 1 - 0.82 → Dct = 0.18
Since the value of the Real Degree of Certainty is negative, equation (3.27) is used to determine the Degree of Certainty:
DC = √[(-0.68 + 1)² - 0.18²] - 1
DC = 0.2645751311 - 1
DC = -0.735424868894
--------------------------
With the recovered values of the Degrees of Certainty and Contradiction we may find the input values of the Degrees of Evidence that generated the output values. Equation (2.4) is used to obtain the input evidence values from the Real Degree of Certainty and the signaled interval:

F(μ, λ) = ( ½DC + ½Dct + ½ , -½DC + ½Dct + ½ )

where the Favorable Degree of Evidence is calculated by:

μ = (DC + Dct + 1) / 2    (3.28)
And the Unfavorable Degree of Evidence is calculated by:

λ = (-DC + Dct + 1) / 2    (3.29)
--------------------------
Example 3.13 A paraconsistent analysis system presents an output signal of Resultant Real Degree of Certainty represented by the values:
DCR = Real Degree of Certainty equal to 0.68
φ(±) = Signaled Interval of Certainty equal to 0.87, with positive sign.
Determine the Favorable and Unfavorable Degrees of Evidence that generated these output values.
Resolution
We determine the Degree of Contradiction through equation (3.22): Dct = 1 - 0.87 → Dct = 0.13
Since the value of the Real Degree of Certainty is positive, equation (3.25) is used to determine the Degree of Certainty:
DC = 1 - √[(1 - 0.68)² - 0.13²]
DC = 1 - 0.29240383034
DC = 0.7075961969
From equation (3.28) we calculate the Favorable Degree of Evidence:
μ = (0.7075961969 + 0.13 + 1) / 2 → μ = 0.9187980
From equation (3.29) we calculate the Unfavorable Degree of Evidence:
λ = (-0.7075961969 + 0.13 + 1) / 2 → λ = 0.2112019151
--------------------------
Through the recovery of these values we can then identify which evidence at the input, favorable or unfavorable, must be strengthened or weakened to obtain a reduction of the contradiction. To feed back the paraconsistent decision system, the variation of the evidence must be done as follows:

For an Interval of Certainty with positive sign, φ = φ(+):
If DC > 0, reduce λ; else, reduce μ.
For an Interval of Certainty with negative sign, φ = φ(-):
If DC > 0, increase μ; else, increase λ.
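The recovery procedure of this section, equations (3.22) to (3.29), can be sketched as follows (our code; the sign of φ is passed separately):

```python
import math

def recover_from_output(dcr: float, phi: float, phi_positive: bool):
    """Recover DC, Dct and the input evidence (mu, lambda) from the
    output pair DCR and phi(+/-), section 3.2.9."""
    dct = (1.0 - phi) * (1.0 if phi_positive else -1.0)    # eq. (3.22) + sign rule
    if dcr > 0:
        dc = 1.0 - math.sqrt((1.0 - dcr) ** 2 - dct ** 2)  # eq. (3.25)
    else:
        dc = math.sqrt((dcr + 1.0) ** 2 - dct ** 2) - 1.0  # eq. (3.27)
    mu = (dc + dct + 1.0) / 2.0                            # eq. (3.28)
    lam = (-dc + dct + 1.0) / 2.0                          # eq. (3.29)
    return dc, dct, mu, lam

# Example 3.13: DCR = 0.68, phi = 0.87(+) -> DC ~ 0.7076, mu ~ 0.9188, lam ~ 0.2112
print(recover_from_output(0.68, 0.87, True))
```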
3.3 Algorithms for Uncertainty Treatment through Paraconsistent Analysis

With the considerations presented so far, we can compute values using the equations obtained and construct a Paraconsistent Analysis System able to offer satisfactory answers from information fetched from an uncertain knowledge database.
Figure 3.20 Paraconsistent Analysis System or Node for uncertainty treatment
The paraconsistent system for uncertainty treatment may be utilized in several fields of knowledge where contradictory and incomplete information receives an adequate treatment through the PAL2v equations discussed so far. We now present the Paraconsistent Analysis Algorithm that finds the value of the Real Degree of Certainty and of the Interval of Certainty:
3.3.1 PAL2v Paraconsistent Analysis Algorithm with Resultant Degree of Certainty output
(Block: inputs μ and λ → PAL2v Analysis → outputs DCR and φ(±))
1. Enter the input values:
   μ */ Favorable Degree of Evidence, 0 ≤ μ ≤ 1
   λ */ Unfavorable Degree of Evidence, 0 ≤ λ ≤ 1
2. Calculate the Degree of Contradiction: Dct = (μ + λ) - 1
3. Calculate the Interval of Certainty: φ = 1 - |Dct|
4. Calculate the Degree of Certainty: DC = μ - λ
5. Calculate distance d: d = √[(1 - |DC|)² + Dct²]
6. Determine the output signal:
   If φ ≤ 0.25 or d ≥ 1, then do S1 = 0.5 and S2 = φ: Indefinition, and go to step 10
   Else go to the next step
7. Determine the Real Degree of Certainty:
   If DC > 0: DCR = (1 - d)
   If DC < 0: DCR = (d - 1)
8. Determine the sign of the Interval of Certainty:
   If μ + λ > 1: positive sign, φ(±) = φ(+)
   If μ + λ < 1: negative sign, φ(±) = φ(-)
   If μ + λ = 1: zero sign, φ(±) = φ(0)
9. Present the outputs: do S1 = DCR and S2 = φ(±)
10. End
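A direct transcription of this algorithm (our sketch; the Indefinition output follows step 6):

```python
import math

def pal2v_analysis(mu: float, lam: float):
    """PAL2v Paraconsistent Analysis Algorithm, section 3.3.1.
    Returns (S1, S2): DCR and the signaled Interval of Certainty,
    or (0.5, phi) meaning Indefinition."""
    dct = (mu + lam) - 1.0                          # step 2
    phi = 1.0 - abs(dct)                            # step 3
    dc = mu - lam                                   # step 4
    d = math.sqrt((1.0 - abs(dc)) ** 2 + dct ** 2)  # step 5
    if phi <= 0.25 or d >= 1.0:                     # step 6: Indefinition
        return 0.5, phi
    dcr = (1.0 - d) if dc > 0 else (d - 1.0)        # step 7
    sign = '+' if mu + lam > 1 else '-' if mu + lam < 1 else '0'  # step 8
    return dcr, f"{phi:.4f}({sign})"                # step 9

# Example 3.9 revisited: P(0.86, 0.28) -> (~0.5573, '0.8600(+)')
print(pal2v_analysis(0.86, 0.28))
```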
The Paraconsistent Analysis Algorithm that determines the value of the Degree of Certainty and finds the variation of the evidences needed to obtain null Contradiction is presented as follows.
3.3.2 PAL2v Paraconsistent Analysis Algorithm to Estimate the Degrees of Certainty and Evidence input values
[Diagram: PAL2v Analysis node with inputs μ and λ and outputs DCest, φ(±), λf and μf.]
1. Enter the input values:
μ */ Favorable Degree of Evidence, 0 ≤ μ ≤ 1
λ */ Unfavorable Degree of Evidence, 0 ≤ λ ≤ 1
2. Calculate the Degree of Contradiction:
Dct = (µ + λ) − 1
3. Calculate the Interval of Certainty:
φ = 1 − |Dct|
4. Calculate the Degree of Certainty:
DC = µ − λ
5. Calculate the distance d:
d = √((1 − φ)² + Dct²)
6. Determine the output signal:
If φ ≤ 0.25 or d ≥ 1, then do S1(DCest) = 0.5 and S2 = φ: Indefinition, and S3(∆λ) = 0.5 and S4(∆µ) = 0.5, and go to item 9; else go to the next item.
7. Determine the value of the Estimated Degree of Certainty for a zero Degree of Contradiction:
DCAdd = √(d² − Dct²)
If DC > 0, calculate: DCest = DC + DCAdd
If DC < 0, calculate: DCest = DC − DCAdd
8. Determine the value of the Estimated Degree of Evidence needed to obtain the Estimated Degree of Certainty:
For DC > 0: if Dct > 0, calculate λf = μi − DCest; else calculate μf = λi + DCest
For DC < 0: if Dct > 0, calculate μf = λi + DCest; else calculate λf = μi − DCest
Do S1(DCest) = DCest and S2 = φ(±), S3(∆λ) = λf and S4(∆µ) = µf
9. End
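A corresponding Python sketch (cf. Exercise 3.19) is shown below. The function name and dictionary output are illustrative, and step 8 is coded with the evidence updates λf = μi − DCest and μf = λi + DCest stated above, which are exactly the adjustments that drive the recomputed Degree of Contradiction to zero.

    import math

    def pal2v_estimate(mu, lam):
        # PAL2v estimation of the Degree of Certainty for null contradiction.
        dct = (mu + lam) - 1.0                      # Degree of Contradiction
        phi = 1.0 - abs(dct)                        # Interval of Certainty
        dc = mu - lam                               # Degree of Certainty
        d = math.hypot(1.0 - phi, dct)              # step 5 uses (1 - phi) here
        if phi <= 0.25 or d >= 1.0:                 # Indefinition: neutral outputs
            return {'DCest': 0.5, 'phi': phi, 'lam_f': 0.5, 'mu_f': 0.5}
        dc_add = math.sqrt(d * d - dct * dct)       # DCAdd; equals |Dct| here
        dc_est = dc + dc_add if dc > 0 else dc - dc_add
        out = {'DCest': dc_est, 'phi': phi, 'lam_f': None, 'mu_f': None}
        # Adjust one evidence so the new pair has zero Degree of Contradiction.
        if (dc > 0 and dct > 0) or (dc < 0 and dct < 0):
            out['lam_f'] = mu - dc_est              # lam_f = mu_i - DCest
        else:
            out['mu_f'] = lam + dc_est              # mu_f = lam_i + DCest
        return out

For μ = 0.73 and λ = 0.32 it returns DCest = 0.46 and λf = 0.27; the adjusted pair (0.73, 0.27) indeed gives Dct = 0.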
We now present the Paraconsistent Analysis Algorithm that recovers the calculated Degree of Certainty DC and the evidence input values μ and λ, to indicate which of them must be varied to obtain a Degree of Contradiction of zero:
3.3.3 PAL2v Paraconsistent Analysis Algorithm with feedback
[Diagram: PAL2v Analysis node with inputs μ and λ and outputs DCR and φ(±).]
1. Verify the output values:
DCR */ Real Degree of Certainty, −1 ≤ DCR ≤ +1
φ(±) */ Signaled Interval of Certainty, 0 ≤ φ(±) ≤ 1
2. Calculate the Degree of Contradiction:
Dct = 1 − φ(±)
3. Calculate the Degree of Certainty:
If DCR > 0, do:
DC = 1 − √((1 − DCR)² − Dct²)
Else:
DC = √((DCR + 1)² − Dct²) − 1
4. Calculate the Degrees of Evidence:
μ = (DC + Dct + 1) / 2 and λ = (−DC + Dct + 1) / 2
5. Present the output according to the condition:
For φ = φ(+): if DC > 0, decrease λ; else decrease μ.
For φ = φ(-): if DC > 0, increase μ; else increase λ.
6. Execute a new Paraconsistent Analysis (PAL2v Paraconsistent Analysis Algorithm) and present the new results:
Do S1 = DCR and S2 = φ(±)
7. End
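A Python sketch of this feedback algorithm (cf. Exercise 3.20) follows. The function name, the string-valued sign argument, and the signed reading of Dct = 1 − φ(±) are our assumptions.

    import math

    def pal2v_feedback(dcr, phi, sign):
        # Recover DC, mu and lam from a PAN output (sign is '+' or '-').
        dct = (1.0 - phi) if sign == '+' else -(1.0 - phi)   # signed Dct (our reading)
        if dcr > 0:
            dc = 1.0 - math.sqrt((1.0 - dcr) ** 2 - dct ** 2)
        else:
            dc = math.sqrt((dcr + 1.0) ** 2 - dct ** 2) - 1.0
        mu = (dc + dct + 1.0) / 2.0                          # recovered Favorable Evidence
        lam = (-dc + dct + 1.0) / 2.0                        # recovered Unfavorable Evidence
        if sign == '+':
            action = 'decrease lam' if dc > 0 else 'decrease mu'
        else:
            action = 'increase mu' if dc > 0 else 'increase lam'
        return dc, mu, lam, action

pal2v_feedback(0.68, 0.87, '+') reproduces Example 3.13: DC ≈ 0.7076, μ ≈ 0.9188 and λ ≈ 0.2112, with the advice to decrease λ.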
3.4 Final Remarks
We have seen in this chapter that, through the interpretation of the lattice associated with PAL2v, it is possible to construct variations of the Para-Analyzer algorithm presented in the previous chapter. The Paraconsistent Analysis Systems built with the several kinds of PAL2v algorithms are able to accomplish a better treatment of the signals that represent contradictory information and, therefore, bring paraconsistent analyses closer to reality. This methodology, which is able to treat signals from uncertain knowledge databases, offers the possibility of designing projects and more powerful tools for the treatment of uncertain and contradictory information.
In the following chapters we will see Paraconsistent Analysis Systems or Nodes (PANs) with output signals represented by Degrees of Evidence, which enables them to be interconnected to form decision-making network configurations. PANs, composed of algorithms derived from the interpretation methods of the Paraconsistent Logic lattice, will analyze the propositions in the network. To make this possible, the signals that represent the evidences with respect to each analyzed Proposition will undergo a normalization process. In this way all the paraconsistent analysis processing will be done with input and output values in the closed real interval between 0 and 1.
Exercises
3.1 How can data considered imperfect be defined?
3.2 What data are considered imperfect?
3.3 What are the main characteristics of a system able to treat uncertain information?
3.4 What are the main criteria a decision-making system must follow?
3.5 Suppose that information source 1 presents a signal to the Paraconsistent Analysis System valued at 0.83, and information source 2 presents a signal valued at 0.61.
Represent the annotation and the Paraconsistent Signal considering source 2 as the one that supplies Unfavorable evidence.
3.6 Suppose that information source 1 presents to the Paraconsistent Analysis System a signal valued at 0.37, and information source 2 presents a signal valued at 0.91. Determine the values of the Degrees of Certainty and of Contradiction considering information source 2 as the one that supplies Unfavorable evidence.
3.7 Suppose that information source 1 presents a signal to the Paraconsistent Analysis System valued at 0.8, and information source 2 presents a signal valued at 0.8.
a) Represent the annotation and the Paraconsistent signal considering source 2 as the one that supplies Unfavorable evidence.
b) For the paraconsistent signal obtained, determine the values of the Degrees of Certainty and of Contradiction.
3.8 Suppose that a Paraconsistent Analysis System is receiving information from two sources whose values are:
Information source 1 equal to μ1 = 0.72
Information source 2 equal to μ2 = 0.43
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Determine the Interval of Certainty φ of the analysis.
c) Determine the maximum value of the Degree of Certainty tending to True, DCmaxt, and the maximum value of the Degree of Certainty tending to False, DCmaxF.
d) Present the results of this analysis through the Resultant Degree of Certainty and its Signaled Interval of Certainty φ(±).
3.9 Suppose that two information sources send the following values:
μ1 = 0.73 Degree of Evidence supplied by Source 1
μ2 = 0.32 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the previous item.
d) Calculate the value of the Degree of Evidence needed to obtain the Estimated Degree of Certainty DCest.
3.10 Suppose that two information sources send the following values:
μ1 = 0.79 Degree of Evidence supplied by Source 1
μ2 = 0.87 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the previous item.
d) Calculate the value of the Degree of Evidence needed to obtain the Estimated Degree of Certainty DCest.
3.11 Suppose that two information sources send the following values:
μ1 = 0.41 Degree of Evidence supplied by Source 1
μ2 = 0.29 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the previous item.
d) Calculate the value of the Degree of Evidence needed to obtain the Estimated Degree of Certainty DCest.
3.12 Suppose that two information sources send the following values:
μ1 = 0.24 Degree of Evidence supplied by Source 1
μ2 = 0.33 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Estimated Degree of Certainty DCest, that is, the maximum value that can be obtained with the reduction of the Degree of Contradiction to zero.
c) Determine which evidence must be varied to obtain the Estimated Degree of Certainty DCest of the previous item.
d) Calculate the value of the Degree of Evidence needed to obtain the Estimated Degree of Certainty DCest.
3.13 Suppose that two information sources send the following values:
μ1 = 0.83 Degree of Evidence supplied by Source 1
μ2 = 0.71 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Real Degree of Certainty DCR.
c) Calculate the Signaled Interval of Certainty.
3.14 Suppose that two information sources send the following values:
μ1 = 0.18 Degree of Evidence supplied by Source 1
μ2 = 0.36 Degree of Evidence supplied by Source 2
a) Determine the Degrees of Certainty DC and of Contradiction Dct of the analysis.
b) Calculate the value of the Real Degree of Certainty DCR.
c) Calculate the Signaled Interval of Certainty.
3.15 A paraconsistent analysis system presents an output signal of Resultant Real Degree of Certainty represented by the values:
DCR = Real Degree of Certainty equal to 0.57
φ(±) = Signaled Interval of Certainty equal to 0.83 with positive signal
Determine the Degrees of Certainty DC and of Contradiction Dct of the System under these conditions.
3.16 A paraconsistent analysis system presents an output signal of Resultant Real Degree of Certainty represented by the values:
DCR = Real Degree of Certainty equal to 0.67
φ(±) = Signaled Interval of Certainty equal to 0.91 with positive signal
Determine the Degrees of Certainty DC and of Contradiction Dct of the System under these conditions.
3.17 A paraconsistent analysis system presents an output signal of Resultant Real Degree of Certainty represented by the values:
DCR = Real Degree of Certainty equal to 0.66
φ(±) = Signaled Interval of Certainty equal to 0.93 with positive signal
Determine the Favorable μ and Unfavorable λ Degrees of Evidence applied as inputs that generated these output values.
3.18 Utilize a common programming language and develop a computational program with a PAL2v Paraconsistent Analysis Algorithm with Resultant Degree of Certainty output.
3.19 Utilize a common programming language and develop a computational program with a PAL2v Paraconsistent Analysis Algorithm to estimate the values of the Degrees of Certainty and of Evidence.
3.20 Utilize a common programming language and develop a computational program with a PAL2v Paraconsistent Analysis Algorithm with feedback.
CHAPTER 4
Paraconsistent Analysis Systems Configurations
Introduction
In this chapter we present several decision network configurations composed of algorithms derived from the interpretation method of PAL2v. We will see that the representative algorithms, obtained through the methodology and the interpretation of the PAL2v lattice seen in the previous chapters, are now considered Paraconsistent Analysis Systems or Nodes (PANs). The analysis networks formed with PANs are able to treat signals originating from uncertain knowledge databases, which may bring contradictory and incomplete information.
4.1 Typical Paraconsistent Analysis Node (PAN)
In the previous chapter we saw that through the PAL2v interpretation methodology we constructed algorithms able to produce signal treatment and control of uncertain and contradictory information. The Paraconsistent Algorithms, which were called Paraconsistent Analysis Nodes (PANs), will now be interconnected to compose decision-making analysis networks with different topologies.
In the Paraconsistent Analysis Networks, the PAN treats information signals in accordance with the fundamentals of Paraconsistent Logic. With input Degrees of Evidence extracted from uncertain knowledge databases, the PANs utilize the equations of the PAL2v methodology to obtain the Real Degrees of Certainty DCR accompanied by their respective Intervals of Certainty φ. This process enables conclusions with respect to the analyzed propositions.
According to the previous chapter, each system or Paraconsistent Analysis Node (PAN) is able to receive evidences and supply a certainty value accompanied by its Interval of Certainty. Therefore, we consider a Paraconsistent Analysis Node (PAN) as a Paraconsistent Analysis System which receives input Degrees of Evidence and supplies two values: one that represents the Real Degree of Certainty DCR and another, its Signaled Interval of Certainty φ(±). Thus, a typical Paraconsistent Analysis Node (PAN) is constructed with the "PAL2v Paraconsistent Analysis Algorithm", and may be represented by a block diagram as seen in figure 4.1. A PAN may contain all three algorithms studied in the previous chapter: the Paraconsistent Analysis Algorithm, the Algorithm of Estimation of Certainty and Evidence Values, and the Paraconsistent Analysis Algorithm with Feedback.
[Figure: n Favorable Evidences μn and n Unfavorable Evidences λn enter the PAL2v Paraconsistent Analysis Algorithm, which outputs DCR and φ(±).]
Figure 4.1 Representation in blocks of a typical Paraconsistent Analysis Node.
The use of one or another algorithm will depend on the following conditions:
1. The type or nature of the application;
2. The Proposition that will be analyzed;
3. The desired topology of the decision-making paraconsistent network.
The symbolic representation of a PAN is presented in figure 4.2, where we have two inputs, the Favorable μ and Unfavorable λ Degrees of Evidence with respect to the Proposition analyzed, and two result outputs: the Real Degree of Certainty DCR and the Interval of Certainty, symbolized by φ(±).
[Figure: inputs μ and λ enter the PAL2v block; outputs DCR and φ(±).]
Figure 4.2 Symbolic representation of a typical Paraconsistent Analysis Node.
4.1.1 Paraconsistent Analysis Node (PAN) Rules
According to what has been seen, each Paraconsistent Analysis Node (PAN) receives annotations in the form of values representing the Favorable μ and Unfavorable λ Degrees of Evidence for a certain proposition P. Therefore, each PAN analyzes just one proposition, and the evidence signals must be treated based on the concepts of PAL2v. Thus, we must consider the rules listed below:
1. In Paraconsistent Analysis Nodes (PANs) one must not add, subtract, nor take averages of: input Degrees of Evidence μ and λ, Degrees of Certainty DC, Real Degrees of Certainty DCR, or Degrees of Contradiction Dct.
2. One must not add, subtract, nor take averages of the values of the Interval of Certainty φ.
3. The Resultant Degree of Certainty values must only be strengthened or weakened through the input of new Evidences (new values).
4. The strengthening or weakening of the Resultant Degree of Certainty by means of complementary evidences should only be done up to the limit established by the Interval of Certainty. Beyond this value the Evidences must be adjusted with new values so that the contradiction may be reduced.
5. The values of the Degrees of Evidence may be adjusted simultaneously to increase or reduce the Resultant Degree of Certainty.
4.1.2 Transformation of the Real Degree of Certainty into Resultant Degree of Evidence
Since a Paraconsistent Analysis produces values of the Real Degree of Certainty in the closed interval between −1 and +1, to transform a Resultant Degree of Certainty from the analysis of one proposition into a Degree of Evidence for another proposition, the values are normalized in the following way. As the Degree of Certainty is calculated through equation (2.2), where DC = μ − λ, the Resultant Degree of Evidence may be obtained as:

µE = (DC + 1) / 2    (4.1)

or, equivalently:

µE = ((µ − λ) + 1) / 2    (4.2)

where:
µE = Resultant Degree of Evidence
μ = Favorable Degree of Evidence
λ = Unfavorable Degree of Evidence
The value of the Resultant Degree of Evidence obtained through equation (4.2) varies in the closed real interval between 0 and 1. Figure 4.3 shows the equivalence between the values of the Degree of Certainty DC and the Resultant Degree of Evidence µE obtained from equation (4.2).
[Figure: the DC scale from −1.0 to +1.0 mapped onto the µE scale from 0.0 to 1.0.]
Figure 4.3 Transformation of Degree of Certainty DC into Resultant Degree of Evidence μE.
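This transformation amounts to one line of code; a minimal Python helper (the function name is illustrative) is:

    def resultant_evidence(mu, lam):
        # Equation (4.2): normalize DC = mu - lam from [-1, +1] into [0, 1].
        return ((mu - lam) + 1.0) / 2.0

For example, resultant_evidence(1.0, 0.0) returns 1.0 and resultant_evidence(0.5, 0.5) returns 0.5, consistent with the interpretation discussed next.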
We verify that, after the transformation obtained by normalization, when the value of the Resultant Degree of Evidence is above 0.5 there is an affirmation with respect to the Proposition analyzed.
Therefore, the values of the Resultant Degree of Evidence above 0.5 and tending to 1.0 suggest a True logical state for the Proposition analyzed. When the value of the Resultant Degree of Evidence is equal to 1.0 there is a total affirmation of the Proposition; therefore, the logical state is totally True.
On the other hand, when the value of the Resultant Degree of Evidence is below 0.5, there is a refutation of the Proposition analyzed. Therefore, the values of the Resultant Degree of Evidence below 0.5 and tending to zero suggest a Falsehood logical state for the Proposition analyzed. When the value of the Resultant Degree of Evidence is equal to zero, there is a total negation of the Proposition; therefore, the logical state is totally False.
A Resultant Degree of Evidence equal to 0.5 indicates Indefinition with respect to the Proposition analyzed. This undefined condition indicates a Degree of Certainty of zero, and it may be caused by high contradiction. This situation may be verified through the value of the Interval of Certainty.
4.1.3 Resultant Real Degree of Evidence μER
As seen before, with the normalization, each Paraconsistent Analysis Node (PAN) for uncertainty treatment will produce an output signal of Resultant Degree of Evidence valued in the closed real interval between 0 and 1. The Resultant Real Degree of Evidence µER is the one considered when the calculated value of µE is attenuated, or free from the effects due to the existence of contradiction. The values of the Resultant Real Degree of Evidence are obtained by first calculating the value of the Real Degree of Certainty through equations (3.20) and (3.21), reproduced below:

For DC > 0:  DCR = 1 − √((1 − |DC|)² + Dct²)
For DC < 0:  DCR = √((1 − |DC|)² + Dct²) − 1
From these equations we determine the Resultant Real Degree of Evidence by:

µER = (DCR + 1) / 2    (4.3)

----------------------------
Example 4.1
Consider a Paraconsistent Analysis Node (PAN) which receives two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.79
Unfavorable Degree of Evidence λ = 0.28
Determine the Resultant Degree of Evidence and the Resultant Real Degree of Evidence of the analysis done by the PAN.
Resolution
From equation (4.2) we calculate the Degree of Evidence of the analysis:
µE = ((0.79 − 0.28) + 1) / 2
µE = 0.755
From equation (2.2) we calculate the Degree of Certainty:
DC = 0.79 − 0.28
DC = 0.51
From equation (2.3) we calculate the Degree of Contradiction:
Dct = (0.79 + 0.28) − 1
Dct = 0.07
Since DC > 0, we calculate the Real Degree of Certainty from equation (3.20):
DCR = 1 − √((1 − |0.51|)² + 0.07²)
DCR = 1 − 0.4949747
DCR = 0.5050253
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
µER = (0.5050253 + 1) / 2
µER = 0.7525126
----------------------------
Example 4.2
Consider a Paraconsistent Analysis Node (PAN) which receives two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.37
Unfavorable Degree of Evidence λ = 0.78
Determine the Resultant Degree of Evidence and the Resultant Real Degree of Evidence of the analysis done by the PAN.
Resolution
From equation (4.2) we calculate the Degree of Evidence of the analysis:
µE = ((0.37 − 0.78) + 1) / 2
µE = 0.295
From equation (2.2) we calculate the Degree of Certainty:
DC = 0.37 − 0.78
DC = −0.41
From equation (2.3) we calculate the Degree of Contradiction:
Dct = (0.37 + 0.78) − 1
Dct = 0.15
Since DC < 0, we calculate the Real Degree of Certainty through equation (3.21):
DCR = √((1 − |0.41|)² + 0.15²) − 1
DCR = 0.6087692 − 1
DCR = −0.3912307
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
µER = (−0.3912307 + 1) / 2
µER = 0.3043846
----------------------------
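A small Python helper that chains equations (2.2), (2.3), (3.20)/(3.21) and (4.3) can reproduce these results (the function name is an illustrative choice):

    import math

    def real_evidence(mu, lam):
        # Resultant Real Degree of Evidence muER.
        dc = mu - lam
        dct = (mu + lam) - 1.0
        d = math.hypot(1.0 - abs(dc), dct)
        dcr = (1.0 - d) if dc > 0 else (d - 1.0)   # Real Degree of Certainty
        return (dcr + 1.0) / 2.0                   # equation (4.3)

real_evidence(0.79, 0.28) ≈ 0.7525 and real_evidence(0.37, 0.78) ≈ 0.3044, reproducing Examples 4.1 and 4.2.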
4.1.4 The Normalized Degree of Contradiction µctr
To get a standardized response from a PAN, the Degree of Contradiction goes through normalization; thus, the resulting values will be in the closed real interval between 0 and 1.
From equation (2.3), Dct = (μ + λ) − 1, the normalization of the Degree of Contradiction is done by:

µctr = (Dct + 1) / 2    (4.4)

Substituting equation (2.3) into (4.4) gives:

µctr = ({(µ + λ) − 1} + 1) / 2

therefore:

µctr = (µ + λ) / 2    (4.5)

where:
µctr = Normalized Degree of Contradiction
μ = Favorable Degree of Evidence
λ = Unfavorable Degree of Evidence
Figure 4.4 shows the relation between the values of the Degree of Contradiction Dct and the values of the Normalized Degree of Contradiction µctr.
[Figure: the Dct scale from −1.0 to +1.0 mapped onto the µctr scale from 0.0 to 1.0.]
Figure 4.4 Transformation of the Degree of Contradiction Dct into Normalized Degree of Contradiction μctr.
According to the previous figure, we verify that after the normalization, when there is no contradiction between the input Degrees of Evidence, the value of the Normalized Degree of Contradiction is 0.5.
When the value of the Normalized Degree of Contradiction is above 0.5, it means that there is a contradiction between the input Degrees of Evidence with respect to the Proposition analyzed. Therefore, the values of the Normalized Degree of Contradiction above 0.5 and tending to 1.0 suggest an Inconsistent logical state for the Proposition analyzed. When the value of the Normalized Degree of Contradiction is equal to 1.0 there is a total contradiction; therefore, the logical state is totally Inconsistent.
On the other hand, when the value of the Normalized Degree of Contradiction is below 0.5, it means that there is also a contradiction for the Proposition analyzed. Therefore, the values of the Normalized Degree of Contradiction below 0.5 and tending to zero indicate that the evidences are contradictory, suggesting a logical state of Indetermination for the Proposition analyzed. When the value of the Normalized Degree of Contradiction is equal to zero there is a total contradiction; therefore, the logical state is totally Indeterminate.
----------------------------
Example 4.3
Consider a Paraconsistent Analysis Node (PAN) which receives two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.79
Unfavorable Degree of Evidence λ = 0.36
Determine the Resultant Degree of Evidence, the Resultant Real Degree of Evidence and the Normalized Degree of Contradiction of the analysis done by the PAN.
Resolution
We represent the annotation (μ, λ) as: (0.79, 0.36)
The Paraconsistent Signal is represented as follows: P(0.79, 0.36)
From equation (4.2), we calculate the Degree of Evidence of the analysis:
µE = ((0.79 − 0.36) + 1) / 2
µE = 0.715
From equation (2.2), we calculate the Degree of Certainty:
DC = 0.79 − 0.36
DC = 0.43
From equation (2.3), we calculate the Degree of Contradiction:
Dct = (0.79 + 0.36) − 1
Dct = 0.15
Since DC > 0, we calculate the Real Degree of Certainty through equation (3.20):
DCR = 1 − √((1 − |0.43|)² + 0.15²)
DCR = 1 − 0.5894065
DCR = 0.4105935
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
µER = (0.4105935 + 1) / 2
µER = 0.7052968
The Normalized Degree of Contradiction is calculated through equation (4.5):
µctr = (0.79 + 0.36) / 2
µctr = 0.575
----------------------------
4.1.5 The Resultant Interval of Evidence φE
According to what was seen, the value of the Degree of Contradiction of a Paraconsistent Analysis is calculated through equation (2.3), reproduced below:
Dct = (µ + λ) − 1
We also find the value of the Normalized Degree of Contradiction through equation (4.4):
µctr = (Dct + 1) / 2
From equation (4.4), we can find the relation:

Dct = 2μctr − 1    (4.6)

Since the determination of the Interval of Certainty value φ is done through equation (3.1), reproduced below:
φ = 1 − |Dct|
substituting (4.6) into (3.1), the value of the Interval of Evidence φE is determined by:

φE = 1 − |2μctr − 1|    (4.7)
In accordance with equation (4.7), when the Normalized Degree of Contradiction is equal to 1, indicating a high contradiction in the Paraconsistent Analysis with the Inconsistent logical state, the value of the Interval of Evidence is equal to zero. When the value of the Normalized Degree of Contradiction is 0, indicating a high contradiction in the Paraconsistent Analysis with the Indeterminate logical state, the value of the Interval of Evidence is also equal to zero. When the Normalized Degree of Contradiction is equal to 0.5, indicating that there is no contradiction in the Paraconsistent Analysis, the value of the Interval of Evidence is equal to 1.
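Equations (4.5) and (4.7) can be sketched in Python as follows (the function names are illustrative):

    def normalized_contradiction(mu, lam):
        # Equation (4.5): Normalized Degree of Contradiction in [0, 1].
        return (mu + lam) / 2.0

    def evidence_interval(muctr):
        # Equation (4.7): Resultant Interval of Evidence.
        return 1.0 - abs(2.0 * muctr - 1.0)

The extremes check out: evidence_interval(1.0) and evidence_interval(0.0) both return 0, while evidence_interval(0.5) returns 1.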
4.1.5.1 Relation between the Resultant Interval of Evidence φE and the Normalized Degree of Contradiction μctr
With the significant values for the Normalized Degree of Contradiction μctr, and utilizing the previous equations, the corresponding values for the Interval of Evidence φE are found. A test that relates the Interval of Evidence with the Normalized Degree of Contradiction is shown below.
With the obtained values shown in the following table, we plot the characteristic of φE in relation to the Normalized Degree of Contradiction, according to figure 4.5.

µctr:  0.000  0.125  0.250  0.375  0.500  0.625  0.750  0.875  1.000
φE:    0.00   0.25   0.50   0.75   1.00   0.75   0.50   0.25   0.00

[Figure: φE versus μctr; the characteristic rises linearly from ⊥ (μctr = 0, φE = 0) to the Non-Contradiction point (μctr = 0.5, φE = 1) and falls back to T (μctr = 1, φE = 0).]
Figure 4.5 Graph of the characteristics between the values of the Normalized Degree of Contradiction and the values of the Resultant Interval of Evidence.
We verify from the characteristic line of the graph that when the Resultant Interval of Evidence is maximum, φE = 1, the Normalized Degree of Contradiction μctr is 0.5, which corresponds to a null contradiction. Maximum contradictions, represented by a null Resultant Interval of Evidence, occur at two values of the Normalized Degree of Contradiction μctr:
a) when the Normalized Degree of Contradiction is equal to zero, indicating a maximum contradiction by Indetermination;
b) when the Normalized Degree of Contradiction is maximum, indicating a maximum contradiction by Inconsistency.
4.1.5.2 Relation between the Resultant Interval of Evidence φE and the Resultant Real Degree of Evidence μER According to what we saw in the equations, we verify that in extreme values, when the Normalized Degree of Contradiction is equal to 1, indicating a high Inconsistency, the Interval of Evidence is zero. This only happens when: µ=1 and λ=1, and in this case the Resultant Real Degree of Evidence will be 0.5.
When the Normalized Degree of Contradiction is equal to zero, indicating a high Indetermination, the value of the Resultant Interval of Evidence will also be zero. This only happens when µ = 0 and λ = 0, and in this case the Resultant Real Degree of Evidence will be 0.5.
This means that if the Resultant Interval of Evidence φE is equal to zero, in any situation the value of the Resultant Degree of Evidence will be equal to 0.5. Therefore, the analysis may only present an Indefinition.
When the Normalized Degree of Contradiction is equal to 0.5, the Resultant Interval of Evidence will be equal to 1. In this situation there will be two conditions:
a) if the Unfavorable Evidence λ is greater than the Favorable Evidence µ, the Resultant Degree of Evidence may present values lower than 0.5 and greater than or equal to zero;
b) if the Favorable Evidence µ is greater than the Unfavorable Evidence λ, the Resultant Degree of Evidence may present values greater than 0.5 and lower than or equal to 1.0.
This means that when the Resultant Interval of Evidence is equal to 1, the Resultant Degree of Evidence is free to vary from the value that represents maximum Falsehood, µER = 0, up to the value that represents maximum Truth, µER = 1.
We have demonstrated that the value of the Resultant Degree of Evidence will only be totally free when the Resultant Interval of Evidence φE is 1. Therefore, analysing the significant values of the Interval of Evidence φE against those obtained for the Resultant Real Degree of Evidence µER, we obtain a relation between these two features. Figure 4.6 shows the table and the graph with the characteristic line segment of the Resultant Interval of Evidence φE related to the values of µER.
µER:  0.000  0.125  0.250  0.375  0.500  0.625  0.750  0.875  1.000
φE:   1.00   0.75   0.50   0.25   0.00   0.25   0.50   0.75   1.00

[Figure: φE versus µER; the characteristic falls linearly from F (µER = 0, φE = 1) to the Indefinition point I (µER = 0.5, φE = 0) and rises back to t (µER = 1, φE = 1).]
Figure 4.6 Graph of the characteristics among values of the Resultant Real Degrees of Evidence related with the values of the Resultant Intervals of Evidence.
In the characteristic segment line we verify that the Resultant Interval of Evidence φE represents exactly the permitted range of variation of the Resultant Degree of Evidence. This variation is permitted in a situation of Contradiction represented by the value of the Normalized Degree of Contradiction. In an analysis network, the value of the Resultant Interval of Evidence in the Paraconsistent Analysis Node (PAN) is utilized to inform what maximum values of the output Degree of Evidence may be obtained with the level of contradiction present in the analysis.
From the value of the Interval of Evidence, the maximum Degree of Evidence tending to the logical state True is obtained through:

µEmaxt = (1 + φE) / 2    (4.8)

From the value of the Interval of Evidence, the maximum Degree of Evidence tending to the logical state False is obtained through:

µEmaxF = (1 − φE) / 2    (4.9)

Figure 4.7 shows the location of the maximum values of the output Degree of Evidence for certain values of the Interval of Evidence.
[Figure: the DC scale from −1.0 to +1.0 shown over the µE scale from 0.0 to 1.0; the Resultant Interval of Evidence φE marks the permitted variation of the Degrees of Evidence, bounded below by μEmaxF = (1 − φE)/2 (maximum value tending to False, which refutes Proposition P) and above by μEmaxt = (1 + φE)/2 (maximum value tending to True, which confirms Proposition P).]
Figure 4.7 Location of maximum values of the Degrees of Evidence μE for certain values of the Resultant Interval of Evidence φE.
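As a quick check of equations (4.8) and (4.9), a Python sketch (the function name is illustrative):

    def evidence_limits(phi_e):
        # Equations (4.8) and (4.9): bounds on the output Degree of Evidence.
        mu_max_true = (1.0 + phi_e) / 2.0    # maximum value tending to True
        mu_max_false = (1.0 - phi_e) / 2.0   # maximum value tending to False
        return mu_max_true, mu_max_false

evidence_limits(0.62) returns (0.81, 0.19) and evidence_limits(0.25) returns (0.625, 0.375), the values found in Examples 4.4 and 4.5 below.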
To use the Resultant Interval of Evidence in the recovery of the values, this Interval will also be signaled, being then represented by φE(±). The signal is obtained as follows:
If μctr > 0.5: φE(±) = φE(+)
If μctr < 0.5: φE(±) = φE(-)
Thus, the value of the Normalized Degree of Contradiction may be found from the Resultant Signaled Interval of Evidence through the equations:
If φE(±) = φE(+):

µctr = (1 + (1 − φE)) / 2    (4.10)

If φE(±) = φE(-):

µctr = (1 − (1 − φE)) / 2    (4.11)
----------------------------
Example 4.4
Consider a Paraconsistent Analysis Node (PAN) which is receiving two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.92
Unfavorable Degree of Evidence λ = 0.46
a) Determine the Resultant Degree of Evidence, the Resultant Real Degree of Evidence, the Normalized Degree of Contradiction and the Resultant Interval of Evidence of the analysis done by the PAN.
b) Determine the maximum values of the output Degree of Evidence when this level of contradiction is maintained in the input information.
c) Present the Resultant Signaled Interval of Evidence.
d) From the value of the Interval of Evidence obtained, determine the value of the Normalized Degree of Contradiction.
Resolution
a) We represent the annotation (μ, λ) as: (0.92, 0.46)
The Paraconsistent Signal is represented as follows: P(0.92, 0.46)
From equation (4.2), we calculate the Degree of Evidence of the analysis:
µE = ((0.92 − 0.46) + 1) / 2
µE = 0.73
From equation (2.2), we calculate the Degree of Certainty:
DC = 0.92 − 0.46
DC = 0.46
From equation (2.3), we calculate the Degree of Contradiction:
Dct = (0.92 + 0.46) − 1
Dct = 0.38
Since DC > 0, we calculate the Real Degree of Certainty through equation (3.20):
DCR = 1 − √((1 − |0.46|)² + 0.38²)
DCR = 1 − 0.6603030
DCR = 0.3396970
From the value of the Real Degree of Certainty, we calculate the value of the Real Degree of Evidence through equation (4.3):
µER = (0.3396970 + 1) / 2
µER = 0.6698485
The value of the Normalized Degree of Contradiction is calculated through equation (4.5):
µctr = (0.92 + 0.46) / 2
µctr = 0.69
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 − |2 × 0.69 − 1|
φE = 0.62
b) Through equation (4.8) we calculate the maximum Degree of Evidence tending to the logical state True:
µEmaxt = (1 + 0.62) / 2
μEmaxt = 0.81
Through equation (4.9) we calculate the maximum Degree of Evidence tending to the logical state False:
µEmaxF = (1 − 0.62) / 2
μEmaxF = 0.19
c) As the Normalized Degree of Contradiction is greater than 0.5, the Interval of Evidence is signaled positively: φE(±) = φE(+) = 0.62 (+)
d) As the Interval of Evidence is signaled positively, the value of the Normalized Degree of Contradiction is calculated through equation (4.10):
µctr = (1 + (1 − 0.62)) / 2
μctr = 0.69
----------------------------
Example 4.5
We know that in a Paraconsistent Analysis Node (PAN), when the Degree of Contradiction is above 0.75 the Real Degree of Certainty will be null. Suppose a Paraconsistent Analysis Node is receiving two input Degrees of Evidence of contradictory values, producing a Degree of Contradiction equal to 0.75. Under these conditions determine:
a) the Normalized Degree of Contradiction;
b) the Resultant Interval of Evidence of the analysis done by the PAN;
c) the limit values of the output Resultant Degree of Evidence when there is this level of contradiction in the input information;
d) the limit values of the Normalized Degree of Contradiction for this level of contradiction between the input information.
Resolution
a) The value of the Normalized Degree of Contradiction is calculated through equation (4.4):
µctr = (0.75 + 1) / 2
µctr = 0.875
b) The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 − |2 × 0.875 − 1|
φE = 0.25
c) Through equation (4.8) we calculate the maximum Degree of Evidence tending to a True logical state:
µEmaxt = (1 + 0.25) / 2
μEmaxt = 0.625
Through equation (4.9) we calculate the maximum Degree of Evidence tending to a False logical state:
µEmaxF = (1 − 0.25) / 2
μEmaxF = 0.375
d) The limit values of the Normalized Degree of Contradiction in this situation are obtained through equations (4.10) and (4.11):
µctr = (1 + (1 − 0.25)) / 2
μctr = 0.875, if the contradiction comes from the Inconsistent logical state
µctr = (1 − (1 − 0.25)) / 2
μctr = 0.125, if the contradiction comes from the Indeterminate logical state
----------------------------
4.1.5.3 Representation of the output Degree of Evidence
In this way, as was done with the Degree of Certainty, the output of a Paraconsistent Analysis Node (PAN) may be adequately represented by the Resultant Degree of Evidence and its Resultant Interval of Evidence. Thus, when it receives the values of the Evidences, an Uncertainty Treatment Paraconsistent Analysis Node will produce the output signal represented in the following way:

µEs = (µE, φE)
where:
μEs = Output Degree of Evidence
μE = Resultant Degree of Evidence
φE = Interval of Evidence
The final representation may be done with greater precision, permitting more efficiency of the algorithms in computational projects. This output representation is composed of the Resultant Real Degree of Evidence and the Resultant Signaled Interval of Evidence, as follows:

µEs = (µER, φE(±))
where:
μEs = Output Degree of Evidence
μER = Resultant Real Degree of Evidence, calculated through equation (4.3) reproduced below:
µER = (DCR + 1) / 2
DCR = Real Degree of Certainty, obtained through equations (3.20) and (3.21).
φE(±) = Signaled Interval of Evidence, obtained through equation (4.7) reproduced below:
φE = 1 − |2μctr − 1|
μctr = Normalized Degree of Contradiction, obtained through equation (4.5).
With the signals:
φE(-) if μctr < 0.5
φE(+) if μctr > 0.5
4.2 The Algorithms of the Paraconsistent Analysis Nodes (PANs)
The calculation of the Resultant Degree of Evidence may now be added to the Paraconsistent Analysis Algorithm. A Paraconsistent Analysis Node or System (PAN) contains PAL2v algorithms with all the equations studied, according to figure 4.8.
Figure 4.8 Paraconsistent Analysis Node or System - PAN with Input Evidences, and Output Degree of Certainty transformed into Resultant Degree of Evidence.
Paraconsistent Analysis Systems algorithms are shown as follows.
4.2.1 PAL2v Paraconsistent Analysis Algorithm with Resultant Real Degree of Evidence output
[Diagram: PAL2v Analysis node with inputs μ and λ and outputs μER and φ(±).]
1. Enter the input values:
μ */ Favorable Degree of Evidence, 0 ≤ μ ≤ 1
λ */ Unfavorable Degree of Evidence, 0 ≤ λ ≤ 1
2. Calculate the Degree of Contradiction:
Dct = (µ + λ) − 1
3. Calculate the Interval of Certainty:
φ = 1 − |Dct|
4. Calculate the Degree of Certainty:
DC = µ − λ
5. Calculate the distance d:
d = √((1 − |DC|)² + Dct²)
6. Determine the output signal:
If φ ≤ 0.25 or d ≥ 1, then do S1 = 0.5 and S2 = φ: Indefinition, and go to item 11; else go to the next item.
7. Determine the Real Degree of Certainty:
DCR = (1 − d) if DC > 0
DCR = (d − 1) if DC < 0
8. Determine the signal of the Interval of Certainty:
If Dct < 0, signal negative: φ(±) = φ(-)
If Dct > 0, signal positive: φ(±) = φ(+)
If Dct = 0, signal zero: φ(±) = φ(0)
9. Calculate the Resultant Real Degree of Evidence:
µER = (DCR + 1) / 2
10. Present the output results:
Do S1 = μER and S2 = φ(±)
11. End
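Exercise 4.15 asks for a program implementing this algorithm; a minimal Python sketch (the function name and return format are illustrative) is:

    import math

    def pan_real_evidence(mu, lam):
        # PAN with Resultant Real Degree of Evidence output (Section 4.2.1).
        dct = (mu + lam) - 1.0
        phi = 1.0 - abs(dct)
        dc = mu - lam
        d = math.hypot(1.0 - abs(dc), dct)
        if phi <= 0.25 or d >= 1.0:
            return 0.5, phi                        # Indefinition
        dcr = (1.0 - d) if dc > 0 else (d - 1.0)   # Real Degree of Certainty
        sign = '+' if dct > 0 else ('-' if dct < 0 else '0')
        mu_er = (dcr + 1.0) / 2.0                  # step 9
        return mu_er, (phi, sign)

pan_real_evidence(0.79, 0.36) returns approximately (0.7053, (0.85, '+')), consistent with Example 4.3.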
4.2.2 PAL2v Paraconsistent Analysis Algorithm with calculus of the Normalized Degree of Contradiction and Interval of Evidence
[Diagram: PAL2v Analysis node with inputs μ and λ and outputs μER, φ(±) and μctr.]
1. Enter the input values:
μ */ Favorable Degree of Evidence, 0 ≤ μ ≤ 1
λ */ Unfavorable Degree of Evidence, 0 ≤ λ ≤ 1
2. Calculate the Normalized Degree of Contradiction:
µctr = (µ + λ) / 2
3. Calculate the Resultant Interval of Evidence:
φE = 1 − |2μctr − 1|
4. Calculate the Degree of Certainty:
DC = µ − λ
5. Calculate the Degree of Contradiction:
Dct = (µ + λ) − 1
6. Calculate the distance d:
d = √((1 − |DC|)² + Dct²)
7. Determine the Real Degree of Certainty:
DCR = (1 − d) if DC > 0
DCR = (d − 1) if DC < 0
8. Determine the output signal:
If φE ≤ 0.25 or d ≥ 1, then do S1 = 0.5 and S2 = φE: Indefinition, and go to item 12; else go to the next item.
9. Calculate the Resultant Real Degree of Evidence:
µER = (DCR + 1) / 2
10. Determine the signal of the Resultant Interval of Evidence:
If μctr < 0.5, signal negative: φE(±) = φE(-)
If μctr > 0.5, signal positive: φE(±) = φE(+)
If μctr = 0.5, signal zero: φE(±) = φE(0)
11. Present the output results:
Do S1 = μER, S2 = φE(±) and S3 = μctr
12. End
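A Python sketch of this second algorithm (cf. Exercise 4.16; the dictionary return format is an illustrative choice):

    import math

    def pan_full(mu, lam):
        # PAN with Normalized Degree of Contradiction and Interval of Evidence.
        muctr = (mu + lam) / 2.0                   # Normalized Degree of Contradiction
        phi_e = 1.0 - abs(2.0 * muctr - 1.0)       # Resultant Interval of Evidence
        dc = mu - lam
        dct = (mu + lam) - 1.0
        d = math.hypot(1.0 - abs(dc), dct)
        dcr = (1.0 - d) if dc > 0 else (d - 1.0)   # Real Degree of Certainty
        if phi_e <= 0.25 or d >= 1.0:
            return {'S1': 0.5, 'S2': phi_e, 'S3': muctr}   # Indefinition
        mu_er = (dcr + 1.0) / 2.0
        sign = '-' if muctr < 0.5 else ('+' if muctr > 0.5 else '0')
        return {'S1': mu_er, 'S2': (phi_e, sign), 'S3': muctr}

pan_full(0.92, 0.46) reproduces Example 4.4: S1 ≈ 0.6698, S2 = (0.62, '+') and S3 = 0.69.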
4.3 Final Remarks
In this chapter we have studied Paraconsistent Analysis Nodes (PANs) thoroughly. We have considered ways to obtain several configurations capable of forming Uncertainty Treatment Networks by interconnecting Paraconsistent Analysis Nodes (PANs). The results obtained from the equations, through the examples, show that a PAN may be utilized as a generator of Degrees of Evidence for other propositions which are being analyzed by other PANs, thus forming a network of interconnected PANs.
To make the Paraconsistent Analysis possible we have presented in this chapter the Paraconsistent Analysis algorithms with the value normalization method, transforming the Resultant Degrees of Certainty of the analysis into Degrees of Evidence with values between 0 and 1. With these normalization processes we obtain at each PAN output a Resultant Real Degree of Evidence value accompanied by a Resultant Interval of Evidence value, which indicates, through its signal, the level of contradiction in the analysis. Thus, besides the information about the certainty in relation to the proposition, the Paraconsistent Analysis Network will also be able to estimate its capacity of analysis and to control the results by means of feedback.
This methodology, which was constructed from the interpretation of the lattice associated with PAL2v, presents an Uncertainty Treatment which is not limited by conflict weights. All the available information is utilized, even those items that present
contradiction. With these characteristics the Paraconsistent Decision Systems are robust in Uncertainty Treatment. Therefore, they display more ability to offer a result closer to reality.
Exercises
4.1 How is a Paraconsistent Analysis Node or System (PAN) defined?
4.2 Describe how an Uncertainty Treatment Paraconsistent Network is composed.
4.3 How is the interconnection between two PANs done in a Paraconsistent Analysis Network?
4.4 What is the meaning of Resultant Degree of Evidence?
4.5 What are the conditions that influence the configuration of a Paraconsistent Analysis Network (PANet)?
4.6 Enumerate the rules that must be obeyed for the Uncertainty Treatment in a PAN.
4.7 How is the transformation of the Real Degree of Certainty into the Resultant Degree of Evidence done?
4.8 Suppose a Paraconsistent Analysis Node (PAN) is receiving two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.78
Unfavorable Degree of Evidence λ = 0.29
Determine the Resultant Degree of Evidence and the Resultant Real Degree of Evidence of the analysis done by the PAN.
4.9 Suppose a Paraconsistent Analysis Node (PAN) is receiving two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.38
Unfavorable Degree of Evidence λ = 0.82
Determine the Resultant Degree of Evidence and the Resultant Real Degree of Evidence of the analysis done by the PAN.
4.10 Give the meaning of "Normalized Degree of Contradiction".
4.11 Suppose a Paraconsistent Analysis Node (PAN) is receiving two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.81
Unfavorable Degree of Evidence λ = 0.37
Determine the Resultant Degree of Evidence, the Resultant Real Degree of Evidence and the Normalized Degree of Contradiction of the analysis done by the PAN.
4.12 Give the meaning of "Resultant Interval of Evidence".
4.13 Suppose a Paraconsistent Analysis Node (PAN) is receiving two input Degrees of Evidence:
Favorable Degree of Evidence μ = 0.91
Unfavorable Degree of Evidence λ = 0.44
a) Determine the Resultant Degree of Evidence, the Resultant Real Degree of Evidence, the Normalized Degree of Contradiction and the Resultant Interval of Evidence of the analysis done by the PAN;
b) Determine the maximum values of the output Degree of Evidence when there is this level of contradiction in the input information;
c) Present the Signaled Interval of Evidence;
d) From the value of the Interval of Evidence found, determine the value of the Normalized Degree of Contradiction.
4.14 Suppose a Paraconsistent Analysis Node (PAN) is receiving two input Degrees of Evidence of contradictory values, producing a Degree of Contradiction equal to 0.68. Under these conditions determine:
a) the Normalized Degree of Contradiction;
b) the Resultant Interval of Evidence of the analysis done by the PAN;
c) the limit values of the output Resultant Degree of Evidence when there is this level of contradiction in the input information;
d) the limit values of the Normalized Degree of Contradiction for this level of contradiction between the input information.
4.15 Utilize a common programming language and develop a computational program with the PAL2v Paraconsistent Analysis Algorithm with Resultant Real Degree of Evidence output.
4.16 Utilize a common programming language and construct a computational program with the PAL2v Paraconsistent Analysis Algorithm with calculus of the Normalized Degree of Contradiction and Interval of Evidence.
CHAPTER 5
Modeling of Paraconsistent Logical Signals Introduction Paraconsistent Analysis Nodes (PANs) are the algorithms constructed with the fundamentals of PAL2v. These PANs receive information signals from uncertain knowledge and are connected by resultant signals of the analysis. For the input of representative signals of evidence, which will be treated by the PANs, a process of modeling and normalization of the values is done, according to the knowledge of each expert about the feature to be analyzed. The techniques and some forms of signal modeling that will represent the Evidences about the feature analyzed by the network are presented in this chapter.
5.1 Contradiction and Paraconsistent Logic
In the real world, Contradiction is incoherence, and in classical analysis systems it is a generator of uncertainties; it always results in refutation. In medicine, for example, it is common for patients to receive two different diagnoses when they consult two professionals. In politics and in law, every time opinions come from two or more experts, different contradictory points of view come up; this leads to several interpretations of the laws.
Contradiction only comes up when there are two or more information sources; therefore, if there is a unique information source in a real system, there is no Contradiction. Thus, Contradiction concerns those who will make decisions. In the fields of Artificial Intelligence, where one wishes to design intelligent systems, Contradiction makes it difficult to obtain simulators with logical reasoning close to human behavior.
Paraconsistent Logic is utilized in uncertainty treatment to overcome the difficulties imposed by Contradiction. According to the previous chapters, Paraconsistent Logics belong to the group of logics called Non-Classical. Their main feature is to allow the treatment of Contradiction without trivialization, challenging the basic principles of Classical Logic. The fundamentals of Paraconsistent Logic are able to give a better treatment to situations not covered by binary logic.
The Paraconsistent Analysis Nodes (PANs), which represent the algorithms developed with the fundamental concepts of PAL2v, manipulate the Degrees of Certainty and of Contradiction more easily, even in the analysis of information collected from Uncertain Knowledge databases. These systems are structured on Paraconsistent Logic and their way of reasoning must not be done with simplifications, nor by ignoring facts or
situations of inconsistency. The analyses are done over real situations, taking the contradictions into account and thus succeeding in a complete description of the real world.
The algorithms studied in the previous chapters show that, by using the basic concepts of Paraconsistent Logic, one can build logical systems for analysis, control and decision making through computational programs and hardware, which allow manipulation and reasoning with signals representing Uncertain Knowledge information; these may be Incomplete, Ambiguous, Complex, Vague, and Contradictory.
The following examples show how situations of inconsistency produced by contradictory information are considered in an analysis based on PAL2v.
----------------------------
Example 5.1
Consider a fictitious case where, in a preliminary examination, a medical professional measures the blood pressure of a patient to fill in the medical report. The measures are carried out with one single device, M1. It is known that these devices lose their precision with time or with the number of times they are used, and must be adjusted against a standard device. Considering this condition we ask: bearing in mind that the deadline to have the device serviced is close, when using this information as input of a PAL2v Uncertainty Treatment System, is the existence of contradiction in the value inserted in the patient's report considered?
Resolution: The value obtained from the measurement using M1, whether the deadline is close or not, will always be considered correct. Therefore there is no Contradiction.
----------------------------
In this example the information of the measurement referring to the patient's blood pressure, when considered by Paraconsistent Logic, despite not presenting reliability, does not bring Contradiction because it comes from a unique information source.
----------------------------
Example 5.2
Consider a fictitious case where, in a preliminary examination as in the previous example, one wishes to increase the reliability of the information that will be taken to the doctor about the patient's arterial pressure. Another sphygmomanometer, M2, was used. By using medical procedures to carry out the measurements, new data is added to the patient's report: one blood pressure measurement using device M1 and another using device M2. For this situation we ask: using this information as input of a PAL2v Uncertainty Treatment System, how are the values inserted in the patient's report considered?
Resolution: The Paraconsistent Analysis System considers this information about the blood pressure measurement, which is now composed of two values, likely to present Contradiction.
----------------------------
The measurement of the patient's blood pressure, which lacked reliability in the previous example for supplying a unique measure, brings more information now. This additional information strengthens the reliability; consequently, it brings the possibility of contradiction.
5.1.1 PAL2v Annotation Modeling
According to what was seen before, the information signals that feed a Paraconsistent Analysis Network are the annotations. In Paraconsistent Annotated Logic with annotation of two values (PAL2v), an annotation is composed of two Degrees of Evidence:
Favorable Degree of Evidence, symbolized by the letter μ.
Unfavorable Degree of Evidence, symbolized by the letter λ.
Thus, an annotation referring to a Proposition P in PAL2v is the ordered pair (μ, λ).
A Paraconsistent Logical Signal is symbolized by a Proposition accompanied by its subscript annotation: P(μ, λ)
where:
P is the Proposition to be analyzed;
(μ, λ) is the annotation;
μ is the Paraconsistent Logical Value, or the Favorable Degree of Evidence for Proposition P;
λ is the Paraconsistent Logical Value, or the Unfavorable Degree of Evidence for Proposition P.
----------------------------
Example 5.3
As in the fictitious case displayed in the previous example, suppose that in a patient's medical report one finds two blood pressure measurements, M1 and M2. For this situation we ask: how are these values considered in the applications of PAL2v in Paraconsistent Analysis Networks?
Resolution: Through PAL2v, these two values obtained from two different devices are considered as two information sources, as follows:
Measurement of sphygmomanometer M1 = Evidence source 1
Measurement of sphygmomanometer M2 = Evidence source 2
The measurements obtained will be transformed into Paraconsistent Logical Values, or Degrees of Evidence, in the interval of real numbers [0, 1]. From the definition of the Proposition, the Degrees of Evidence, which will compose the annotation of the Paraconsistent Logical Signal, will be considered favorable, symbolized by the letter μ, or unfavorable, symbolized by the letter λ.
5.1.1.1 Paraconsistent Logical Value Modeling
The valorization of the evidences is expressed by their Paraconsistent Logical Value, or Degree; this is a number that belongs to the real numbers in the closed interval between 0 and 1. This number is mined from the original characteristics of the information sources, which establish their variation and behavior in a Universe of Discourse. If an information source is a human being, the Paraconsistent Logical Value is formed by their knowledge about the subject related to the Proposition. A human Expert may establish a value for a certain Proposition based on their professional experience and knowledge. In this way, the first step in constructing a Paraconsistent Analysis System able to treat uncertainties is modeling for mining Knowledge from the information sources.
For a human Expert, the Degree of Evidence is formed by their level of knowledge, their experience, or other data available to obtain a determined value and variation in the Universe of Discourse. If the information source is a measurement device, the range of values of the feature must be specified, thus establishing a convenient Universe of Discourse for the analysis of the Proposition. The modeling to mine the Degrees of Evidence from the sources may be done in several ways, searching for the one that best adapts to the analysis of the Proposition one wishes to perform.
5.1.1.2 Linear Variation Modeling
In knowledge mining from an information source, we may verify that the variation of the evidences with respect to a certain Proposition is linear, in a directly proportional fashion, in the Universe of Discourse. In this case, the valorization of the input Degrees of Evidence is done as follows: we consider a Universe of Discourse that goes from the inferior limit value of the feature measurement, symbolized by a1, up to the superior limit value of the feature measurement, symbolized by a2, whose Evidence varies in a linear form, directly proportional to the value of the feature. Figure 5.1 shows the graph with the variation of the Degree of Evidence for this situation.
Figure 5.1 Valorization of the Evidence with linear variation and directly proportional to the measured feature.
The Degree of Evidence µ, whose value varies from 0 to 1, in the Universe of Discourse, will be calculated by:
$$
\mu(x) = \begin{cases}
\dfrac{x - a_1}{a_2 - a_1}, & \text{if } x \in [a_1, a_2] \\
1, & \text{if } x > a_2 \\
0, & \text{if } x < a_1
\end{cases} \tag{5.1}
$$
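In code, equation (5.1) is a clipped linear ramp; a minimal sketch, with a function name of our choosing:

```python
def mu_linear_increasing(x: float, a1: float, a2: float) -> float:
    """Equation (5.1): Degree of Evidence rising linearly from 0 at a1 to 1 at a2."""
    if x < a1:
        return 0.0
    if x > a2:
        return 1.0
    return (x - a1) / (a2 - a1)
```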
Based on the Universe of Discourse, if the variation of the evidences with respect to a certain Proposition is linear in an inversely proportional fashion, the valorization of the input Degrees of Evidence is done as follows: consider a Universe of Discourse that goes from the inferior limit value of the feature measurement, symbolized by a1, up to the superior limit value of the feature measurement, symbolized by a2, whose variation is linear and inversely proportional to the value of the feature. The graph showing the variation of the Degree of Evidence is presented in Figure 5.2.
Figure 5.2 Valorization of the Degree of Evidence with linear variation, inversely proportional to the measured feature.
The Degree of Evidence µ, whose value varies from 0 to 1, in the Universe of Discourse, will be calculated by:
$$
\mu(x) = \begin{cases}
\dfrac{x - a_2}{a_1 - a_2}, & \text{if } x \in [a_1, a_2] \\
1, & \text{if } x < a_1 \\
0, & \text{if } x > a_2
\end{cases} \tag{5.2}
$$
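A matching sketch for equation (5.2); on [a1, a2] it is exactly the complement of the increasing form:

```python
def mu_linear_decreasing(x: float, a1: float, a2: float) -> float:
    """Equation (5.2): Degree of Evidence falling linearly from 1 at a1 to 0 at a2.
    On [a1, a2] this equals 1 - mu_linear_increasing(x, a1, a2)."""
    if x < a1:
        return 1.0
    if x > a2:
        return 0.0
    return (x - a2) / (a1 - a2)
```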
5.1.1.3 Non-Linear Variation Modeling
If the knowledge concerning the subject related to the Proposition we wish to analyze shows that the variation of the evidence in the Universe of Discourse is non-linear, then the most adequate function must be chosen to treat the feature in question. As an example, one may choose a variation of the kind given by function S, as shown in Figure 5.3.
Figure 5.3 Valorization of the Degree of Evidence with variation according to function S.
For this case, the modeling to obtain the variation in the Universe of Discourse will be done as follows: we consider a Universe of Discourse that goes from the inferior limit value of the feature measurement, symbolized by a1, up to the superior limit value of the feature measurement, symbolized by a2, with a non-linear variation of the feature measurements in the form of function S. In this case, the Degree of Evidence µ, whose value varies from 0 up to 1 in the Universe of Discourse, will be calculated by:

$$
\mu(x) = \begin{cases}
0, & \text{if } x < a_1 \\
2\left(\dfrac{x - a_1}{a_2 - a_1}\right)^{2}, & \text{if } a_1 \le x \le \dfrac{a_1 + a_2}{2} \\
1 - 2\left(\dfrac{x - a_2}{a_2 - a_1}\right)^{2}, & \text{if } \dfrac{a_1 + a_2}{2} < x \le a_2 \\
1, & \text{if } x > a_2
\end{cases} \tag{5.3}
$$

In the same way function S was utilized, we can model function Z, which is the complement of function S. Thus, the Degree of Evidence values µ(x) obtained in
the valorization that utilizes the variation of function Z (Figure 5.4) are the complement of the values obtained where the variation of function S is utilized (Figure 5.3).
Figure 5.4 Valorization of a Degree of Evidence with variation according to function Z.
The Degree of Evidence µ, whose value varies from 0 to 1, in the Universe of Discourse, will be calculated by:
$$
\mu(x) = \begin{cases}
1, & \text{if } x < a_1 \\
1 - 2\left(\dfrac{x - a_1}{a_2 - a_1}\right)^{2}, & \text{if } a_1 \le x < \dfrac{a_1 + a_2}{2} \\
2\left(\dfrac{x - a_2}{a_2 - a_1}\right)^{2}, & \text{if } \dfrac{a_1 + a_2}{2} \le x \le a_2 \\
0, & \text{if } x > a_2
\end{cases} \tag{5.4}
$$
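Equations (5.3) and (5.4) can be sketched the same way, with the Z function obtained through the complement relation just stated (function names assumed):

```python
def mu_s_function(x: float, a1: float, a2: float) -> float:
    """Equation (5.3): S-shaped Degree of Evidence over the Universe of Discourse."""
    mid = (a1 + a2) / 2.0
    if x < a1:
        return 0.0
    if x > a2:
        return 1.0
    if x <= mid:
        return 2.0 * ((x - a1) / (a2 - a1)) ** 2
    return 1.0 - 2.0 * ((x - a2) / (a2 - a1)) ** 2

def mu_z_function(x: float, a1: float, a2: float) -> float:
    """Equation (5.4): function Z, the complement of function S."""
    return 1.0 - mu_s_function(x, a1, a2)
```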
We have presented here some of the functions for the modeling of the input Degrees of Evidence of PANs. Nonetheless, the Paraconsistent logical signals may be modeled with other functions, such as those commonly used in other types of Non-Classical Logics, for example Fuzzy Logic.
5.1.2 Application of the Models for the Mining of the Degree of Evidence
The Paraconsistent Logical Value may be obtained from several sources and in several fashions. It may be mined from measurement devices, information stored in memory banks, filtering of information considered relevant, or from data obtained through experimental and statistical research results. The generation of a Paraconsistent logical signal may also be done through a Linguistic Variable, as we will study next. In any case,
a modeling is always done to mine knowledge, and from then on the Degrees of Evidence that compose an annotation of PAL2v will feed the Analysis System.
5.1.2.1 Degrees of Evidence Mining
Linguistic Variables are words, clauses or statements in natural or artificial language. The Paraconsistent Logical Value may be constructed from a Linguistic Variable, such that typical values of these are heuristically assigned from the Universe of Discourse.
--------------------------
Example 5.4 It is known that arterial pressure is measured in millimeters of mercury (mmHg) and is represented by two measures obtained with a sphygmomanometer. Systolic arterial pressure (SAP) is indicated by the higher value, which corresponds to the pressure in the artery at the moment blood is pumped through the heart. Diastolic arterial pressure (DAP) is indicated by the lower value, which corresponds to the pressure in the same artery at the moment the heart is resting after a contraction. According to medical literature, a patient is considered hypertensive if the Systolic blood pressure is above 140 mmHg and/or the Diastolic blood pressure is above 90 mmHg. A patient with good arterial pressure is one whose measurement is 120 mmHg SAP and 80 mmHg DAP. Develop the Paraconsistent Logical Value graph for the following statement: "A patient is considered affected by serious Hypertension when the blood pressure measurement is High."
Resolution: Through research and common sense we may consider that the Linguistic Variable "High" means a Systolic arterial pressure measurement (SAP) around 150 mmHg and a Diastolic arterial pressure measurement (DAP) around 100 mmHg. Based on the Universe of Discourse, which in this example is from 90 to 150 mmHg for SAP and from 80 to 100 mmHg for DAP, we may choose the variation that best adapts to the feature in question. As an example, utilizing function S, the graphs will be according to the following figures:
[Graph: Degrees of Evidence µ for Proposition P "Patient has Serious Hypertension", rising as function S from 0 at 120 mmHg to 1 at 150 mmHg (Systolic Pressure Measures)]
[Graph: Degrees of Evidence µ for Proposition P "Patient has Serious Hypertension", rising as function S from 0 at 90 mmHg to 1 at 100 mmHg (Diastolic Pressure Measures)]
--------------------------
Example 5.5 We wish to design a Paraconsistent Analysis System having input Degrees of Evidence with respect to the Proposition: "The Patient is Hypertensive." Project the linear valorization graph of the Degrees of Evidence for this Proposition.
Resolution: A patient has evidence of hypertension when the Systolic pressure measurement (SAP) is above 140 mmHg, and is considered normal when the SAP is 120 mmHg. The graph is constructed as follows:
For Proposition P "The Patient is Hypertensive":

$$
\mu(x) = \begin{cases}
\dfrac{x - 120}{140 - 120}, & \text{if } x \in [120, 140] \\
1, & \text{if } x > 140 \\
0, & \text{if } x < 120
\end{cases}
$$

[Graph: µ rising linearly from 0 at 120 mmHg to 1 at 140 mmHg (Systolic Pressure Measures)]
A patient has strong evidence of hypertension when the Diastolic pressure measurement (DAP) is above 90 mmHg, and is considered normal when the DAP is 80 mmHg. The graph is constructed as follows:
For Proposition P "The Patient is Hypertensive":

$$
\mu(x) = \begin{cases}
\dfrac{x - 80}{90 - 80}, & \text{if } x \in [80, 90] \\
1, & \text{if } x > 90 \\
0, & \text{if } x < 80
\end{cases}
$$

[Graph: µ rising linearly from 0 at 80 mmHg to 1 at 90 mmHg (Diastolic Pressure Measures)]
--------------------------
Example 5.6 Consider a fictitious case where, in a preliminary examination room, a Paraconsistent Analysis System is utilized to verify whether a patient is hypertensive. The system will receive input Degree of Evidence values from the graphs constructed in the previous example. The Proposition analyzed is "The patient is hypertensive." The arterial pressure measurements found with the sphygmomanometer are: 135 mmHg and 87 mmHg. For these values, give the Paraconsistent Logical Values that will enter the Paraconsistent Analysis System as Degrees of Evidence.
Resolution: From Equation (5.1) we calculate the Paraconsistent Logical Value of the Degree of Evidence. For the Systolic arterial pressure measurement we have x = 135 mmHg:

$$\mu(x)_{SAP} = \frac{135 - 120}{140 - 120} = 0.75$$

Answer: In relation to the Systolic arterial pressure measurement (SAP), the patient has a Degree of Evidence of 0.75 of having arterial hypertension. For the Diastolic arterial pressure measurement we have x = 87 mmHg:

$$\mu(x)_{DAP} = \frac{87 - 80}{90 - 80} = 0.7$$

Answer: In relation to the Diastolic arterial pressure measurement (DAP), the patient has a Degree of Evidence of 0.7 of having arterial hypertension.
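Assuming the mu_linear_increasing sketch given earlier is in scope, both results can be reproduced directly:

```python
# Degrees of Evidence for Example 5.6, using the linear models of Example 5.5
mu_sap = mu_linear_increasing(135, 120, 140)  # -> 0.75
mu_dap = mu_linear_increasing(87, 80, 90)     # -> 0.7
```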
5.1.2.2 The Favorable Degree of Evidence
Once the criteria for the valorization of the Paraconsistent Logical Signal are established, each Information Source will produce a measurement using the units of its feature. The information signal originated from the source will undergo a normalization to be transformed into a Paraconsistent Logical Value in the closed interval of the real numbers [0, 1]. The normalized signal will have its Paraconsistent Logical Value and, within the PAL2v concepts, will be considered a Favorable Degree of Evidence for the Proposition that will be analyzed.
--------------------------
Example 5.7 According to medical sources, obesity is an important risk factor for arterial hypertension (AH). The World Health Organization suggests that obesity should be classified by the body mass index (BMI). This index is obtained by dividing the individual's body weight (kg) by the square of their height (in meters). Suppose that one wishes to use the risk factor Obesity as input Evidence for the analysis of Hypertension through a PAN, which is used as a support tool for the diagnosis. Perform the modeling for the Proposition to obtain the Degree of Evidence for this risk factor, considering a linear variation in the Universe of Discourse and the following values:

MAN            BMI (kg/m²)
Underweight    …
…              …
Severe         > 40

Resolution: For the input of the Degrees of Evidence for this risk factor, the Proposition "The Patient has severe obesity" is analyzed. The values of the Degrees of Evidence in relation to this Proposition are obtained from World Health Organization official data, which classifies obesity in relation to BMI through the above values. Considering linear variation, the graph of the Degree of Evidence in relation to the risk factor for obesity will be according to the following figure:
For Proposition P "The patient has severe Obesity":

$$
\mu(x) = \begin{cases}
\dfrac{x - 25}{40 - 25}, & \text{if } x \in [25, 40] \\
1, & \text{if } x > 40 \\
0, & \text{if } x < 25
\end{cases}
$$

[Graph: µ rising linearly from 0 at BMI 25 to 1 at BMI 40 (kg/m²), with marks at 30, 32.5 and 35]
--------------------------
Example 5.8 Consider the measurement of an individual's Body Mass Index of 37 kg/m². Determine the Favorable Degree of Evidence generated by this individual's BMI measurement for the analysis of the Proposition "The patient has severe obesity." Use the data from the previous exercise, for a PAL2v Paraconsistent Analysis System.
Resolution: From Equation (5.1) the Paraconsistent value of the Favorable Degree of Evidence is calculated by:

$$\mu(x) = \frac{37 - 25}{40 - 25} = 0.8$$
-----------------------------
5.1.2.3 The Unfavorable Degree of Evidence
In applications of PAL2v, all the Information Sources may generate only Favorable Degrees of Evidence, and their transformation into Unfavorable Degrees of Evidence will be done in the PAN itself, through its algorithm. However, in some situations the Unfavorable Degree of Evidence may be obtained from the modeling carried out in knowledge mining.
5.1.2.3.1 The Unfavorable Degree of Evidence Originated from Modeling
Any Information Source that deals with the same feature may use the valorization chosen in the project to generate the Resultant Degree of Evidence. In a PAL2v analysis there is the need to transform one of the information sources into a generator of the Unfavorable Degree of Evidence. To generate the Unfavorable Degree of Evidence, the values for this source may be complemented in the Evidence modeling itself.
------------------------------
Example 5.9 Suppose the Paraconsistent Analysis System of the previous example received another Body Mass Index measurement concerning the same patient. This new methodology takes into account other parts of the body, and a different Body Mass Index value was obtained. Make the modeling considering this second measurement as an Unfavorable Degree of Evidence value in the Paraconsistent Analysis System.
Resolution: The Universe of Discourse was already established when the first measurement was analyzed, and for this second Information Source we will use the same one. The graph of the variation of the Degree of Evidence will now be represented by its complement, according to the following figure:
For Proposition P "The patient has severe Obesity":

$$
\mu(x) = \begin{cases}
\dfrac{x - 40}{25 - 40}, & \text{if } x \in [25, 40] \\
0, & \text{if } x > 40 \\
1, & \text{if } x < 25
\end{cases}
$$

[Graph: µ falling linearly from 1 at BMI 25 to 0 at BMI 40 (kg/m²), with marks at 30, 32.5 and 35]
Using the data from the previous example, determine the Unfavorable Degree of Evidence generated by this patient's BMI measurement for the analysis of the Proposition "The patient has severe obesity" in a PAL2v Paraconsistent Analysis System.
Resolution: From Equation (5.2) the Paraconsistent value of the Unfavorable Degree of Evidence is calculated by:

$$\mu(x) = \frac{37 - 40}{25 - 40} = 0.2$$

----------------------------
The previous example resulted in the expected Paraconsistent value of the Unfavorable Degree of Evidence, as seen before: λ = 1 − μ. Therefore, for the Unfavorable Degree of Evidence, the vertical axis of the previous graph will have the values of the Unfavorable Degree of Evidence λ.
5.1.2.3.2 Obtaining the Evidence Signal
According to what we saw, in the Paraconsistent Annotated Logic with annotation of two values (PAL2v) the annotation is considered the ordered pair composed of the values of the Favorable Degree of Evidence and the Unfavorable Degree of Evidence. Therefore, to form the annotation, one value will be the Paraconsistent Logical Value of the Favorable Degree of Evidence and the other will be the Paraconsistent Logical Value of the Unfavorable Degree of Evidence. The sources of these Degrees of Evidence belong to the Uncertain Knowledge; when mined, they may present inconsistencies. Thus, the Paraconsistent Analysis System must initially perform the modeling of the values and afterwards, utilizing the PAL2v methodology, perform the treatment.
----------------------------
Example 5.10 Consider 36.7 kg/m² as the first Body Mass Index measurement of a patient. Afterwards, by utilizing different methods, the value of 33.9 kg/m² was found for the same patient. Suppose these measurements will be considered as two information sources to be used in a Paraconsistent Analysis System that analyzes the Proposition "The patient has severe obesity." Using the graphs from the previous example, determine: a) The value of the Favorable Degree of Evidence generated by this patient's BMI measurement. b) The value of the Unfavorable Degree of Evidence generated by this patient's BMI measurement. c) The annotation of the Proposition for the analysis that will be done by the PAL2v Paraconsistent System. d) The Resultant Real Degree of Evidence and its corresponding Interval of Evidence.
Resolution: a) From Equation (5.1) the Paraconsistent value of the Favorable Degree of Evidence is calculated by:

$$\mu(x) = \frac{36.7 - 25}{40 - 25} = 0.78$$
b) From Equation (5.2) the Paraconsistent value of the Unfavorable Degree of Evidence is calculated by:

$$\mu(x) = \frac{33.9 - 40}{25 - 40} = 0.40666$$

Therefore: λ = 0.40666.
c) The annotation of Proposition P "The patient has severe obesity" will be: (μ, λ) = (0.78, 0.40666).
d) From Equation (2.2) we calculate the Degree of Certainty: DC = 0.78 − 0.40666 = 0.37334. From Equation (2.3) we calculate the Degree of Contradiction: Dct = (0.78 + 0.40666) − 1 = 0.18666. As DC > 0, we calculate the Real Degree of Certainty through Equation (3.21):

$$D_{CR} = 1 - \sqrt{(1 - |0.37334|)^2 + 0.18666^2} = 1 - 0.65387 = 0.34613$$

From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through Equation (4.3):

$$\mu_{ER} = \frac{0.34613 + 1}{2} = 0.67307$$

The value of the Normalized Degree of Contradiction is calculated through Equation (4.5):

$$\mu_{ctr} = \frac{0.78 + 0.40666}{2} = 0.59333$$

The value of the Resultant Interval of Evidence is calculated through Equation (4.7):

$$\varphi_E = 1 - |2 \times 0.59333 - 1| = 0.81334$$

------------------------
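The whole chain of this resolution can be condensed into a small sketch; the helper name pal2v_analysis is ours, and the DC = 0 case, which the book does not spell out here, is folded into the second branch:

```python
import math

def pal2v_analysis(mu: float, lam: float):
    """Sketch of the PAL2v analysis chain of Example 5.10.
    Equation numbers follow the book: (2.2), (2.3), (3.21), (4.3), (4.5), (4.7)."""
    dc = mu - lam                       # Degree of Certainty, eq. (2.2)
    dct = (mu + lam) - 1.0              # Degree of Contradiction, eq. (2.3)
    d = math.sqrt((1.0 - abs(dc)) ** 2 + dct ** 2)
    dcr = (1.0 - d) if dc > 0 else (d - 1.0)    # Real Degree of Certainty, eq. (3.21)
    mu_er = (dcr + 1.0) / 2.0           # Resultant Real Degree of Evidence, eq. (4.3)
    mu_ctr = (mu + lam) / 2.0           # Normalized Degree of Contradiction, eq. (4.5)
    phi_e = 1.0 - abs(2.0 * mu_ctr - 1.0)       # Resultant Interval of Evidence, eq. (4.7)
    return mu_er, phi_e

mu_er, phi_e = pal2v_analysis(0.78, 0.40666)    # -> (~0.67307, ~0.81334)
```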
5.2 Treatment of Contradiction in the Modeling of the Evidence Signals
Since the information that generates the input Degree of Evidence signals for the PAL2v analysis is mined from Uncertain Knowledge, Contradiction may be generated in the modeling process of the Evidence signals. If two or more Experts model the same information source, the values chosen to establish the Universe of Discourse, and even the variation function, may differ, generating Contradiction in the signals that will be mined. An Analysis System that utilizes the PAL2v methodology is able to detect and treat the signals adequately, with the objective of eliminating the effect of the Contradiction from them.
--------------------------
Example 5.11 Suppose two Experts received the assignment to model Knowledge mining, build the graphs and obtain the logical values concerning the Degrees of Evidence for the Proposition "The patient has severe obesity". Expert E1 worked independently and considered the linguistic variable "The Body Mass Index is high" from the World Health Organization table. Thus, he established values with linear variation, directly proportional to the BMI, for a Universe of Discourse from 25 to 40 kg/m². Expert E2 also worked independently and preferred to use a different methodology, with the purpose of obtaining better precision. For the linguistic variable "The Body Mass Index is high" he established BMI values for a Universe of Discourse from 20 to 40 kg/m², with a non-linear variation in the form of function S. For an individual's Body Mass Index measurement of 35.3 kg/m²: a) Construct the graphs of the two Experts and indicate the value found with the corresponding Degrees of Evidence. b) Calculate the values of the Favorable Degrees of Evidence of the two sources. c) Determine the Resultant Real Degree of Evidence value, with the corresponding Interval of Evidence, after the analysis of the sources.
Resolution: a) For Expert E1 the graph of the Evidence signal modeling will be:
For Proposition P "The patient has severe Obesity" (Expert E1):

$$
\mu(x) = \begin{cases}
\dfrac{x - 25}{40 - 25}, & \text{if } x \in [25, 40] \\
1, & \text{if } x > 40 \\
0, & \text{if } x < 25
\end{cases}
$$

[Graph: µ rising linearly from 0 at BMI 25 to 1 at BMI 40 (kg/m²), with μE1 indicated at the measured value]
For Expert E2 the graph of the Evidence signal modeling will be (Proposition P "The patient has severe Obesity"):

$$
\mu(x) = \begin{cases}
0, & \text{if } x < 20 \\
2\left(\dfrac{x - 20}{40 - 20}\right)^{2}, & \text{if } 20 \le x \le 30 \\
1 - 2\left(\dfrac{x - 40}{40 - 20}\right)^{2}, & \text{if } 30 < x \le 40 \\
1, & \text{if } x > 40
\end{cases}
$$

[Graph: µ rising as function S from 0 at BMI 20 to 1 at BMI 40 (kg/m²), with μE2 indicated at the measured value]
Degree of Evidence from Expert E1:

$$\mu(x)_{E1} = \frac{35.3 - 25}{40 - 25} = 0.686666$$

b) From Equation (5.3) we calculate the Paraconsistent Logical Value of the Degree of Evidence from Expert E2:

$$\mu(x)_{E2} = 1 - 2\left(\frac{35.3 - 40}{40 - 20}\right)^{2} = 0.88955$$

c) Considering the modeling done by Expert E2 as the Paraconsistent Logical Value of the Unfavorable Degree of Evidence, we have: λ = 1 − 0.88955 = 0.11045. The annotation of Proposition P "The patient has severe obesity" becomes: (μ, λ) = (0.686666, 0.11045).
d) From Equation (2.2) we calculate the Degree of Certainty: DC = 0.686666 − 0.11045 = 0.576216. From Equation (2.3) we calculate the Degree of Contradiction: Dct = (0.686666 + 0.11045) − 1 = −0.202884. As DC > 0, we calculate the Real Degree of Certainty through Equation (3.21):

$$D_{CR} = 1 - \sqrt{(1 - |0.576216|)^2 + 0.202884^2} = 1 - 0.4698455 = 0.5301545$$

From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through Equation (4.3):

$$\mu_{ER} = \frac{0.5301545 + 1}{2} = 0.7650772$$

The value of the Normalized Degree of Contradiction is calculated through Equation (4.5):

$$\mu_{ctr} = \frac{0.686666 + 0.11045}{2} = 0.398558$$

The value of the Resultant Interval of Evidence is calculated through Equation (4.7):

$$\varphi_E = 1 - |2 \times 0.398558 - 1| = 0.797116$$

--------------------------
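Reusing the hypothetical helpers sketched earlier in this chapter, the whole of Example 5.11 can be replayed:

```python
# Example 5.11: two Experts model the same BMI measurement differently
x = 35.3
mu_e1 = mu_linear_increasing(x, 25, 40)   # Expert E1, linear      -> ~0.686666
mu_e2 = mu_s_function(x, 20, 40)          # Expert E2, function S  -> ~0.88955
lam = 1.0 - mu_e2                         # E2 taken as the Unfavorable source
mu_er, phi_e = pal2v_analysis(mu_e1, lam) # -> (~0.7650772, ~0.797116)
```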
5.3 Final Remarks
In this chapter we presented the techniques for modeling signals that will be applied as inputs of Paraconsistent Analysis Nodes (PANs).
In the following chapters we will see that these Paraconsistent Analysis Nodes or Systems (PANs) are interconnected, forming Information Treatment and Analysis Networks fed from Uncertain Knowledge databases. According to the application methodology of the Paraconsistent Annotated Logic with annotation of two values (PAL2v), the Paraconsistent logical signals are valued in the closed interval of the real numbers between 0 and 1 and represent information for analysis. In this chapter we presented a few functions that represent the variations of the values of the Degrees of Evidence in the Universe of Discourse. In Paraconsistent Analysis Networks, the values that represent information, which will be used to feed the Analysis Systems (PANs), may be presented in several ways other than the ones studied. Therefore, as seen in the examples, in Paraconsistent Analysis Network projects the variations of the Degree of Evidence values and the limits of the Universe of Discourse depend on the feature that serves as information, as well as on its variation characteristics in the considered interval. Signal treatment networks that use PANs to analyze information are able to dilute conflicts through the methodology presented, and thus offer an adequate answer to real-world situations. Projects that utilize PANs may carry out information treatment through Paraconsistent Analysis Networks without using signal adjustments or weights that would leave the processing dependent on external factors. Thus, Analysis Networks that utilize the PAL2v application methodology for Uncertainty treatment will not generate a high level of complexity that might restrict a conclusion. This capacity to analyze contradictory signals, even the most conflicting ones, is one of the reasons for the great potential of PAL2v for applications in several fields of human Knowledge.
Exercises
5.1 Describe why the arterial pressure measurement with a single sphygmomanometer M1 does not contain Contradiction.
5.2 How does PAL2v consider arterial pressure measurement values carried out with two different devices on the same patient?
5.3 Suppose a patient's medical report receives three arterial pressure measurements, M1, M2 and M3. For this situation we ask: How are these values considered in the application of PAL2v in Paraconsistent Analysis Networks?
5.4 According to medical literature, a patient is considered moderately hypertensive if the Systolic pressure is above 130 mmHg and/or the Diastolic pressure is above 80 mmHg. A patient with excellent arterial pressure is one whose measurement is 120 mmHg SAP and 80 mmHg DAP. Develop a Paraconsistent Logical Value graph for when an Expert affirms that: "A patient is considered Moderately Hypertensive when their arterial pressure measurement is relatively high."
5.5 According to medical literature, a patient is considered Hypertensive if their Systolic pressure is above 140 mmHg and/or their Diastolic pressure is above 90 mmHg. A patient with excellent arterial pressure is one whose measurement is 120 mmHg SAP and 80 mmHg DAP. Project a Paraconsistent Analysis System with Degrees of Evidence as input with respect to the Proposition: "The patient is hypertensive." Project the valorization graph, with the variation in the Universe of Discourse according to function S, of the Degrees of Evidence for this Proposition.
5.6 Consider a fictitious case where, in a preliminary examination, a Paraconsistent Analysis System is utilized to verify whether a patient is hypertensive. The system will be fed by Degree of Evidence values from the graphs constructed in the previous exercise. The Proposition analyzed is "The patient is hypertensive." The arterial pressure measurements found with the sphygmomanometer are: 130 mmHg and 84 mmHg. For these measurement values, give the Paraconsistent Logical Value that will enter the Paraconsistent Analysis System as Degree of Evidence.
5.7 Consider a fictitious case where, in a preliminary examination, a Paraconsistent Analysis System is utilized to verify whether a patient is hypertensive. It will be fed by Degree of Evidence values from the graphs constructed in Exercise 5.5. The Proposition analyzed is "The patient is hypertensive." The arterial pressure measurements found with the sphygmomanometer are: 125 mmHg and 86 mmHg. For these measurement values, give the Paraconsistent Logical Value that will enter the Paraconsistent Analysis System as Degree of Evidence.
5.8 It is known from medical sources that obesity is an important risk factor for arterial hypertension (AH). The World Health Organization suggests that obesity be classified by the Body Mass Index (BMI). This index is obtained by dividing the individual's weight (kg) by the square of their height (in meters). Consider the risk factor Obesity as Evidence for the analysis of Hypertension, where a Paraconsistent Analysis System is used as a tool to help the diagnosis. Perform the modeling for the Proposition, considering each interval exposed in the table below, to obtain the Degree of Evidence for this factor. The variation of the Degrees of Evidence in the Universe of Discourse will be according to function S.
MALE           BMI (kg/m²)
Below weight   …
Normal         …
Mild           …
Moderate       …
Severe         > 40
5.9 Consider a patient's Body Mass Index measurement of 37 kg/m². Determine, for a PAL2v Paraconsistent Analysis System, the Favorable Degree of Evidence generated by this patient's BMI measurement for the analysis of the Proposition "The Patient has moderate obesity."
5.10 Consider a patient's Body Mass Index measurement of 37 kg/m². Determine, for a PAL2v Paraconsistent Analysis System, the Favorable Degree of Evidence generated by this patient's BMI measurement for the analysis of the Proposition "The Patient has mild obesity."
5.11 Consider a patient's Body Mass Index measurement of 37 kg/m². Determine, for a PAL2v Paraconsistent Analysis System, the Favorable Degree of Evidence generated by this patient's BMI measurement for the analysis of the Proposition "The patient has severe obesity."
5.12 Consider the Paraconsistent Analysis System of the previous example fed with another Body Mass Index measurement obtained from the same patient. By using a different methodology, on different parts of the body, we may find another value of the
Body Mass Index. Project a new Analysis System so that this second measurement is considered as an Unfavorable Degree of Evidence value in the Paraconsistent Analysis System.
5.13 Consider a patient's second Body Mass Index measurement of 38.5 kg/m². Determine the Unfavorable Degree of Evidence generated by this patient's BMI measurement for the analysis of the Proposition "The Patient has moderate obesity" in a PAL2v Paraconsistent Analysis System.
5.14 Consider a patient's first Body Mass Index measurement of 37.7 kg/m². Later on, using different methods, another measurement of 39.9 kg/m² was obtained. Bear in mind that these measurements will be considered as the two information sources from the previous examples and will be used in the Paraconsistent Analysis System that analyzes the Proposition "The patient has severe obesity." Utilizing the modeling from Example 5.10, determine: a) The value of the Favorable Degree of Evidence generated by this patient's BMI measurement. b) The value of the Unfavorable Degree of Evidence generated by this patient's BMI measurement. c) The annotation of the Proposition generated for the analysis in the PAL2v Paraconsistent System. d) The Real Degree of Evidence and its corresponding Interval of Evidence.
5.15 Consider that two Experts receive the assignment to model Knowledge mining and build the graphs to obtain the logical values that refer to the Degrees of Evidence for the Proposition "The patient has severe obesity". Working independently, Expert E1 considered, for the linguistic variable "The Body Mass Index is high", values with linear variation directly proportional to the BMI for a Universe of Discourse from 23 to 48 kg/m². Also working independently, Expert E2 preferred to use a different methodology with the purpose of achieving better precision. For the linguistic variable "The Body Mass Index is high" he established BMI values for a Universe of Discourse from 22 to 45 kg/m², with a non-linear variation in the form of function S. For a patient's Body Mass Index measurement of BMI = 37.3 kg/m² we ask: a) Construct the two Experts' graphs and indicate the value found with the corresponding Degrees of Evidence. b) Calculate the values that refer to the Favorable Degrees of Evidence of the two sources. c) Determine the value of the Resultant Real Degree of Evidence with the corresponding Interval of Evidence after the analysis of the sources.
5.16 Two Experts were assigned the modeling of Knowledge mining to build the graphs and obtain the logical values that refer to the Degrees of Evidence for the Proposition "The patient has severe obesity". Working independently, Expert E1 considered, for the linguistic variable "The Body Mass Index is high", values with BMI variation according to function S for a Universe of Discourse from 22 to 50 kg/m². Also working independently, Expert E2 used function S for the same linguistic variable "The Body Mass Index is high"; he just established different BMI values, for a Universe of Discourse from 20 to 55 kg/m². For a patient's Body Mass Index measurement of 38.3 kg/m² we ask: a) Construct the two Experts' graphs and indicate the value found with the corresponding Degrees of Evidence.
b) Calculate the values that refer to the Favorable Degrees of Evidence of the two sources. c) Determine the value of the Resultant Real Degree of Evidence with the corresponding Interval of Evidence after the analysis of the sources.
5.17 Two Experts were assigned the modeling of Knowledge mining to build the graphs and obtain the logical values that refer to the Degrees of Evidence for the Proposition "The Patient is obese". Working independently, Expert E1 considered, for the linguistic variable "The Body Mass Index is high", values with variation directly proportional to the BMI for a Universe of Discourse from 25 to 58 kg/m². Also working independently, Expert E2 used a variation directly proportional to the same linguistic variable "The Body Mass Index is high"; he just established different BMI values, for a Universe of Discourse from 20 to 57 kg/m². For a patient's Body Mass Index measurement of BMI = 37.9 kg/m² we ask: a) Construct the two Experts' graphs and indicate the value found with the corresponding Degrees of Evidence. b) Calculate the values that refer to the Favorable Degrees of Evidence of the two sources. c) Determine the value of the Real Degree of Evidence with the corresponding Interval of Evidence after the resulting analysis of the sources.
CHAPTER 6
Paraconsistent Analysis Network for Uncertainty Treatment

Introduction
In this chapter we present some decision network configurations that utilize the PAL2v methodology studied in the previous chapters. A Paraconsistent Analysis Network for Uncertainty Treatment consists of several Systems, or Paraconsistent Analysis Nodes, conveniently interconnected to analyze evidences that come from uncertain sources. Even with conflicting information, this Paraconsistent Analysis Network of the Paraconsistent Annotated Logic with annotation of two values (PANet) is able to supply results sufficient to set off an action. Utilizing these criteria, through the concepts of PAL2v approached previously, an intense study is made of a special configuration of Paraconsistent Analysis Network composed of interconnections among the Paraconsistent Analysis Nodes (PANs). This special configuration, called the Paraconsistent Analyzer Cube, is an algorithm specially designed to promote the modeling of a three-dimensional paraconsistent analysis by means of the Degree of Contradiction values existing at certain points of the network.
6.1 Paraconsistent Analysis Network (PANet)
Paraconsistent Analysis Networks for Uncertainty Treatment are composed of interconnected Paraconsistent Analysis Nodes (PANs), where the analysis of a single proposition is carried out. Supposing that there exists an Object Proposition, and that the analysis of several propositions is necessary to make a decision about it, each PAN will in turn analyze a single partial proposition. Therefore, to obtain enough values to make a decision about the Object Proposition, the Paraconsistent Analysis result produced in each PAN is combined with the results from the other PANs. These result combinations establish a determined Degree of Certainty for the Object Proposition, which is the final goal of the analysis carried out by the Network. Figure 6.1 shows two PANs carrying out the analysis of two partial propositions P1 and P2, interconnected in the Paraconsistent Analysis Network for the analysis of an Object Proposition Po. From the definition of the Proposition, the Degrees of Evidence, which will compose the annotation of the Paraconsistent Logical Signal, will be considered favorable, symbolized by the letter μ, or unfavorable, symbolized by the letter λ.
Figure 6.1 Symbolic Representation of a Paraconsistent Analysis Network composed of two Analysis Nodes (PANs).
6.1.1 Rules for the Paraconsistent Analysis Network
The Paraconsistent Analysis Network for Uncertainty Treatment aims at the analysis of a final proposition, which in turn needs the results of the analysis of several other propositions carried out in the PANs. Therefore, the PANs will do the analysis of several propositions P1, P2, …, Pn, which, when combined, contribute to the analysis of the Object Proposition Po. Three basic rules may be considered to make the combinations of the results in a Paraconsistent Analysis Network:
1. Propositions analyzed in the PANs may be logically combined through the Resultant Real Degrees of Certainty originated from the analysis, and thus make the different interconnections in the Paraconsistent Analysis Network.
2. The values of the Resultant Real Degrees of Certainty, as well as the Intervals of Real Certainty originated from the PANs and referring to the different propositions, may be treated logically, by conjunction (AND) and disjunction (OR), or algebraically, by addition and subtraction of their values, according to the characteristics and topology of the Paraconsistent Analysis Network project.
3. The values of the Resultant Real Degrees of Certainty may be transformed, by normalization, into values in the real number interval between 0 and 1, and thus be considered as Degrees of Evidence of other propositions being analyzed by other PANs. In this way, the interconnections among the PANs are done through the analysis of the evidences.
6.1.2 Basic Configuration of a Paraconsistent Analysis Network
According to the need for greater or lesser precision in the answers, and to the kind of available information sources considered relevant for decision making, a Paraconsistent Analysis Network may be configured by interconnecting its PANs in a number of ways.
We know that each Paraconsistent Analysis Node (PAN) is able to receive n values of Degrees of Evidence, and these values are considered as inputs that represent favorable or unfavorable evidences. Thus, in a basic configuration, the preliminary analysis in a PAN produces a value of the Real Degree of Certainty DCR and a signaled Interval of Certainty φ(±) referring to a single Proposition. A Paraconsistent Decision Network (PANet) must be modeled in such a way that the many values of the Degrees of Evidence, with respect to a certain Proposition analyzed by the PAN, produce a single value of Evidence, which will enter a posterior analysis as a value of Favorable Degree of Evidence or, through a complementation, be considered a value of Unfavorable Degree of Evidence. Figure 6.2 shows a network model that carries out Paraconsistent Analysis by transforming Real Degrees of Certainty into Degrees of Evidence, interconnecting three PANs. The output Object Proposition Po is analyzed through the evidences from the paraconsistent analysis of Propositions P1 and P2, carried out by the PANs.
Figure 6.2 Representation of a modeling where the outputs of the Paraconsistent Analysis Nodes (PANs) are Degrees of Evidence for an Object Proposition.
Proposition P1, analyzed in PAN1, produces a Favorable Degree of Evidence μER1, and Proposition P2, analyzed in PAN2, produces another Degree of Evidence μER2, which, after a complementation, will be transformed into the Unfavorable Degree of Evidence λ for the Object Proposition. In these interconnections, each PAN has an output value of the Real Degree of Certainty DCR, which must be transformed, by normalization, into the Degree of Evidence μER. The result will then be used as an input signal in other PANs, to carry out the analysis of other propositions in the decision Network. In each analysis carried out by the PANs, the Interval of Certainty is available besides the Resultant Degree of Evidence. These two values allow an identification of the condition of the uncertainty analysis of each proposition. With the value of the
Interval of Certainty there exists an indication of where the system may react, whether to weaken or strengthen the evidences, with the objective of diminishing the conflicts and increasing the Degree of Certainty.
6.1.3 Paraconsistent Analysis Network Algorithms and Topologies
With the normalization, both the input and output signal values remain in the real closed interval between 0 and 1. This allows the results of the analyses done in the Nodes to be utilized as Degrees of Evidence for other interconnected propositions. Thus, different topologies may be constructed for Paraconsistent Analysis Decision Networks (PANets). These topologies will be created according to the purpose of the analysis, taking into account the need, or not, for control of the evidences and the information sources available.
6.1.3.1 Paraconsistent Analysis Network in Simple Configuration
A Paraconsistent Analysis Network may be constructed in such a way that each Paraconsistent Analysis Node (PAN) carries out the treatment of the evidences for a single Proposition and produces a Degree of Evidence as the result for another proposition. Thus, each value of the Real Degree of Certainty DCR obtained in each PAN must be transformed, by means of normalization, into the Resultant Degree of Evidence μER. With their values between 0.0 and 1.0, the Resultant Degrees of Evidence will then be utilized as input signals in other PANs for the analysis of other propositions in the Paraconsistent Decision Network. Such a modeling, which we call a Paraconsistent Analysis Network in Simple Configuration, may be seen in Figure 6.3.
Figure 6.3 Simple Configuration Paraconsistent Analysis Network (PANet).
The normalization equation to obtain the Resultant Degree of Evidence μER is added to each algorithm of the PAN. Thus, a Proposition P1 analyzed in PAN1,
produces a Favorable Degree of Evidence μER1, and a Proposition P2, analyzed in PAN2, produces another Favorable Degree of Evidence μER2. The following step is the choice of a Resultant Degree of Evidence to be considered representative of the Unfavorable Evidence. In this case, the Resultant Degree of Evidence of Proposition P2 was chosen to be complemented, transforming it into the Unfavorable Degree of Evidence λ analyzed in the final PAN concerning the Object Proposition.
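As a rough sketch of this Simple Configuration, reusing the hypothetical pal2v_analysis helper from Chapter 5 with illustrative input values:

```python
# Illustrative evidence values for the partial propositions P1 and P2
mu1, lam1 = 0.9, 0.2
mu2, lam2 = 0.3, 0.6

mu_er1, _ = pal2v_analysis(mu1, lam1)      # PAN 1: Favorable Evidence for Po
mu_er2, _ = pal2v_analysis(mu2, lam2)      # PAN 2
lam_po = 1.0 - mu_er2                      # complementation of PAN 2's output
mu_er_po, phi_po = pal2v_analysis(mu_er1, lam_po)  # PAN 3: Object Proposition Po
```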
6.1.3.2 Simple Configuration Paraconsistent Analysis Network and the Disabling of PANs
Since the Analysis Network for Decision Making considers a Paraconsistent Analysis Node (PAN) as an information source that generates Degrees of Evidence, a PAN will always influence the result of the Paraconsistent Analysis. Even if it produces a Resultant Degree of Evidence of 0.5, which means Indefinition for PAL2v, this value, when analyzed in the PAN of the Object Proposition, will influence the result, increasing or decreasing the final Resultant Degree of Evidence. The interpretation of this permanent performance of the PAN is that, in Uncertainty treatment using PAL2v, even an undefined information source must be considered in the analysis process, since this source may have reached a Resultant Degree of Evidence equal to 0.5 due to high contradiction. It is known from the fundamentals of Paraconsistent Logic that contradictory information must not be ignored, and thus it must be controlled through variations of the input evidences. In the paraconsistent analysis, the Resultant Interval of Evidence φE in the PAN that presents Indefinition will indicate whether the output value is undefined due to high contradiction; in this case the Resultant Interval of Evidence φE has a low value, between 0.25 and 0. On some occasions, the information sources of the PAN may be disconnected; in that case their evidence values, which signal Indefinition, should not be added to the analysis, for they contaminate the final result. To avoid these sources bringing information that influences the analysis process, the indication of the Interval of Evidence φE is used to disable them. Thus, a network that considers the disabling of PANs due to Indefinition must receive an Interval of Evidence signal from the posterior PAN to indicate whether the Degree of Evidence coming from the PAN is low due to high contradiction, or simply due to lack of evidences, which signals that the information sources are disconnected. Figure 6.4 presents a simple configuration of a Paraconsistent Analysis Network with disabling of PANs due to evidence Indefinition. In this case, the Paraconsistent Analysis Node receives information about the value of the Interval of Evidence φE from the previous PAN, and this one, in turn, sends the value of its Resultant Interval of Evidence to the next PAN. In the algorithm, the disabling of the PAN equates the undefined signal to the value of a complementary source. To disable the Degree of Evidence that brings Indefinition, one first verifies the value of the previous Resultant Interval of Evidence φpre. If the value of the Resultant Interval of Evidence φpre from the PAN that originated it is lower than 0.25 and the Degree of Evidence that gets to the PAN is equal to 0.5, the disabling will be done. Under these conditions, the disabling is done as follows (a code sketch of these rules is given after the worked examples below): a) The Evidence source that is bringing Indefinition is identified;
b) If the Indefinition, with the previous Resultant Interval of Evidence φpre = 1, is in the Favorable Evidence μ:
Figure 6.4 Simple Paraconsistent Analysis Network with disabling of PAN due to Indefinition in the evidences
In condition b), a complementation is done on the value of the Unfavorable Degree of Evidence λ to compute the value of the Activated Favorable Degree of Evidence μact:

$$\mu_{act} = 1 - \lambda \tag{6.1}$$

where: μact = Activated Favorable Degree of Evidence; λ = Unfavorable Degree of Evidence with a value different from 0.5.
c) The value of the Activated Unfavorable Degree of Evidence λact is considered as being the same as the input:

$$\lambda_{act} = \lambda \tag{6.2}$$

where: λact = Activated Unfavorable Degree of Evidence; λ = Unfavorable Degree of Evidence.
d) From equation (4.2) we compute the value of the Resultant Degree of Evidence in the PAN:

$$\mu_{E1} = \frac{(\mu_{act} - \lambda_{act}) + 1}{2} \tag{6.3}$$

where: μE1 = Resultant Degree of Evidence; μact = Activated Favorable Degree of Evidence; λact = Activated Unfavorable Degree of Evidence.
The value of the Resultant Degree of Evidence obtained through this equation is as if the only value applied to the PAN came from the source that is supplying the Unfavorable Degree of Evidence λ.
e) If the Indefinition, with the previous Resultant Interval of Evidence φpre = 1, is in the Unfavorable Evidence λ, a complementation is done on the value of the Favorable Degree of Evidence μ to compute the value of the Activated Unfavorable Degree of Evidence λact:

$$\lambda_{act} = 1 - \mu \tag{6.4}$$

where: λact = Activated Unfavorable Degree of Evidence; μ = Favorable Degree of Evidence with a value different from 0.5.
f) The value of the Activated Favorable Degree of Evidence μact is considered as being the same as the input:

$$\mu_{act} = \mu \tag{6.5}$$

where: μact = Activated Favorable Degree of Evidence; μ = Favorable Degree of Evidence.
g) From equation (6.3), reproduced below, we compute the value of the Resultant Degree of Evidence in the PAN:

$$\mu_{E1} = \frac{(\mu_{act} - \lambda_{act}) + 1}{2}$$

where: μE1 = Resultant Degree of Evidence; μact = Activated Favorable Degree of Evidence; λact = Activated Unfavorable Degree of Evidence. The value of the Resultant Degree of Evidence obtained through this equation is as if the only value applied to the PAN came from the source that is supplying the Favorable Degree of Evidence μ.
--------------------------
Example 6.1 Consider a PAN that receives two input evidence signals:
Information source 1: μ1 = 0.5 with Interval of Evidence φE1 = 1
Information source 2: μ2 = 0.72 with Interval of Evidence φE2 = 0.93
Check whether there are sources with values of Indefinition; in case there are, perform the disabling of the undefined source and present the Resultant Degree of Evidence obtained from the PAN.
Resolution: Suppose information source 2 generates the Unfavorable Degree of Evidence; therefore: λ = 1 − μ2 = 1 − 0.72 = 0.28. The input Degrees of Evidence in the PAN are thus: μ1 = 0.5 and λ = 0.28. Since the Interval of Evidence from source 1 is φE1 = 1, then:
μ1 = 0.5 is Undefined and λ ≠ 0.5 is Activated. From equation (6.1) we find the Activated Favorable Degree of Evidence: μact = 1 − 0.28 = 0.72. From equation (6.2) we determine the Activated Unfavorable Degree of Evidence: λact = 0.28. From equation (6.3) we compute the Resultant Degree of Evidence:

$$\mu_{E1} = \frac{(0.72 - 0.28) + 1}{2} = 0.72$$

The result shows that for the next PANs only the value of information source 2 will be considered.
---------------------------
Example 6.2 Suppose a PAN receives two input evidence signals:
Information source 1: μ1 = 0.77 with Interval of Evidence φE1 = 0.83
Information source 2: μ2 = 0.5 with Interval of Evidence φE2 = 1
Check whether there are sources with values of Indefinition; if this is confirmed, perform the disabling of the undefined source and present the Resultant Degree of Evidence from the PAN.
Resolution: Information source 2 is considered as the generator of the Unfavorable Degree of Evidence; therefore: λ = 1 − μ2 = 1 − 0.5 = 0.5. The input Degrees of Evidence in the PAN are thus: μ1 = 0.77 and λ = 0.5. Since the Interval of Evidence of source 2 is φE2 = 1, then: μ1 ≠ 0.5 is Activated and λ = 0.5 is Undefined. From equation (6.4) we find the Activated Unfavorable Degree of Evidence: λact = 1 − 0.77 = 0.23. From equation (6.5) we determine the Activated Favorable Degree of Evidence: μact = 0.77. From equation (6.3) we compute the Resultant Degree of Evidence:

$$\mu_{E1} = \frac{(0.77 - 0.23) + 1}{2} = 0.77$$

The result shows that for the next PANs only the value of information source 1 will be considered.
-------------------------
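The disabling rules of equations (6.1) to (6.5) can be isolated in a small sketch that reproduces Examples 6.1 and 6.2; the helper name is hypothetical:

```python
def disable_if_undefined(mu: float, lam: float,
                         phi_mu_pre: float, phi_lam_pre: float):
    """Sketch of the disabling rules, equations (6.1)-(6.5).
    Returns the Resultant Degree of Evidence when one source is undefined,
    or None when no disabling applies."""
    if phi_mu_pre == 1.0 and mu == 0.5:       # Favorable source undefined
        mu_act, lam_act = 1.0 - lam, lam      # eqs. (6.1) and (6.2)
    elif phi_lam_pre == 1.0 and lam == 0.5:   # Unfavorable source undefined
        mu_act, lam_act = mu, 1.0 - mu        # eqs. (6.5) and (6.4)
    else:
        return None
    return ((mu_act - lam_act) + 1.0) / 2.0   # eq. (6.3)

print(disable_if_undefined(0.5, 0.28, 1.0, 0.93))   # Example 6.1 -> 0.72
print(disable_if_undefined(0.77, 0.5, 0.83, 1.0))   # Example 6.2 -> 0.77
```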
The Paraconsistent Analysis Algorithm with disabling due to Indefinition is presented as follows.
6.1.4 PAL2v Paraconsistent Analysis Algorithm with Disabling of the PANs due to Indefinition
[Cell symbol: a PAL2v Analysis block with inputs μ, λ, φμpre and φλpre, and outputs DCR and φ(±).]
1. Enter the input values
μ */ Favorable Degree of Evidence 0 ≤ μ ≤ 1
λ */ Unfavorable Degree of Evidence 0 ≤ λ ≤ 1
φEμpre */ previous Resultant Interval of Evidence from the PAN of Favorable Evidence 0 ≤ φEμpre ≤ 1
φEλpre */ previous Resultant Interval of Evidence from the PAN of Unfavorable Evidence 0 ≤ φEλpre ≤ 1
2. For φEμpre = 1: If μ = 0.5, then do: μact = 1 − λ and λact = λ
3. Compute the Resultant Degree of Evidence
μER = ((μact − λact) + 1) / 2   Go to item 13
4. For φEλpre = 1: If λ = 0.5, then do: μact = μ and λact = 1 − μ
5. Compute the Resultant Degree of Evidence
μER = ((μact − λact) + 1) / 2   Go to item 13
6. Compute the Degree of Contradiction
Dct = (μ + λ) − 1
7. Compute the Interval of Evidence
φE = 1 − |Dct|
8. Compute the Degree of Certainty
DC = μ − λ
9. Compute the distance d
d = √((1 − |DC|)² + Dct²)
10. Determine the output signal
If φE ≤ 0.25 or d ≥ 1, then do: S1 = 0.5 and S2 = φE: Indefinition, and go to item 15
Else go to the next item
11. Determine the Real Degree of Certainty
DCR = (1 − d) if DC > 0
DCR = (d − 1) if DC < 0
12. Compute the Resultant Real Degree of Evidence
μER = (DCR + 1) / 2
13. Determine the sign of the Resultant Interval of Evidence
If μ + λ > 1: positive sign, φE(±) = φE(+)
If μ + λ < 1: negative sign, φE(±) = φE(−)
If μ + λ = 1: zero sign, φE(±) = φE(0)
14. Present the output results
Do S1 = μER and S2 = φE(±)
15. End
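The algorithm above translates directly into code. The following is a minimal Python sketch of Algorithm 6.1.4 under our own naming (the function pal2v_with_disabling and its parameter names are not from the book); note that in the two disabling branches the algorithm jumps straight to the output without producing an Interval of Evidence, so the sketch returns None in that position.

```python
import math

def pal2v_with_disabling(mu, lam, phi_mu_pre, phi_lam_pre):
    """PAL2v analysis with disabling due to Indefinition (Algorithm 6.1.4).

    mu, lam      -- Favorable / Unfavorable Degrees of Evidence, in [0, 1]
    phi_mu_pre   -- previous Resultant Interval of Evidence of the favorable source
    phi_lam_pre  -- previous Resultant Interval of Evidence of the unfavorable source
    Returns (S1, S2): Resultant Degree of Evidence and Interval of Evidence.
    """
    # Items 2-3: favorable source undefined, use only the unfavorable source.
    if phi_mu_pre == 1 and mu == 0.5:
        mu_act, lam_act = 1 - lam, lam           # equations (6.1) and (6.2)
        return (mu_act - lam_act + 1) / 2, None  # the algorithm skips phi here
    # Items 4-5: unfavorable source undefined, use only the favorable source.
    if phi_lam_pre == 1 and lam == 0.5:
        mu_act, lam_act = mu, 1 - mu             # equations (6.5) and (6.4)
        return (mu_act - lam_act + 1) / 2, None
    # Items 6-9: ordinary paraconsistent analysis.
    dct = (mu + lam) - 1                         # Degree of Contradiction
    phi_e = 1 - abs(dct)                         # Interval of Evidence
    dc = mu - lam                                # Degree of Certainty
    d = math.hypot(1 - abs(dc), dct)             # distance d
    # Item 10: Indefinition.
    if phi_e <= 0.25 or d >= 1:
        return 0.5, phi_e
    # Items 11-13: Real Degree of Certainty, Resultant Real Evidence, sign.
    dcr = (1 - d) if dc > 0 else (d - 1)
    sign = '+' if mu + lam > 1 else '-' if mu + lam < 1 else '0'
    return (dcr + 1) / 2, (phi_e, sign)

# Example 6.1: source 1 is undefined, so only source 2 (lambda = 0.28) counts.
print(pal2v_with_disabling(0.5, 0.28, 1.0, 0.93))   # approximately (0.72, None)
```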
6.2 Three-dimensional Paraconsistent Analysis Network
As we have seen, a Paraconsistent Network for the Treatment of Uncertainties is composed of algorithms that originate from the interpretation of PAL2v. Each PAL2v algorithm forms what was defined as a Paraconsistent Analysis Node (PAN), which receives Degrees of Evidence and uses its equations to obtain the Degrees of Certainty and of Contradiction and the Interval of Certainty. A typical PAN with its main equations is presented as follows.
[Figure: a PAL2v block analyzing Proposition P. Inputs: μ1 = Degree of Favorable Evidence and λ1 = Degree of Unfavorable Evidence. Outputs: μER = Resultant Real Degree of Evidence and φE(±) = Interval of Certainty.]
Figure 6.5 A Paraconsistent Analysis Node - PAN
The value of the output Resultant Real Degree of Evidence of the PAN is computed by the equations reproduced below:
μER = Resultant Real Degree of Evidence, computed by equation (4.3):
μER = (DCR + 1) / 2
where:
DCR = Real Degree of Certainty, obtained through equations (3.20) and (3.21).
φE(±) = Signed Interval of Evidence, obtained through equation (4.7) reproduced below.
φE = 1 − |2μctr − 1|
where:
μctr = Normalized Degree of Contradiction, obtained through equation (4.5).
With the signs:
φE(−) if μctr < 0.5
φE(+) if μctr > 0.5
With these results a PAN may be connected to other PANs, forming a Decision Network able to estimate values and analyze information under different conditions. A special configuration with interconnected PANs will be studied next.
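As an illustration, the equations above can be grouped into a single routine. The sketch below uses our own naming and is a simplified reading that leaves the DC = 0 borderline to the second branch; it computes a PAN's two outputs from a pair of input Degrees of Evidence.

```python
import math

def pan(mu, lam):
    """One Paraconsistent Analysis Node: returns (mu_ER, (phi_E, sign))."""
    mu_ctr = (mu + lam) / 2                  # Normalized Degree of Contradiction (4.5)
    phi_e = 1 - abs(2 * mu_ctr - 1)          # Interval of Evidence (4.7)
    dc = mu - lam                            # Degree of Certainty
    dct = (mu + lam) - 1                     # Degree of Contradiction
    d = math.hypot(1 - abs(dc), dct)         # distance used by (3.20)/(3.21)
    dcr = (1 - d) if dc > 0 else (d - 1)     # Real Degree of Certainty
    mu_er = (dcr + 1) / 2                    # Resultant Real Degree of Evidence (4.3)
    sign = '-' if mu_ctr < 0.5 else '+' if mu_ctr > 0.5 else '0'
    return mu_er, (phi_e, sign)
```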
6.2.1 Paraconsistent Analyzer Cube
Let us consider a special case where it is known at the outset that high contradiction attributed to certain evidences of a proposition A weakens the evidences of another proposition B. In this case, a special configuration is used, in which the output represented by the Interval of Evidence φE of one Paraconsistent Analysis Node (PAN) controls the analysis carried out by another PAN. This configuration, in which a three-dimensional analysis is done, is called the Paraconsistent Analyzer Cube, and is summarized in figure 6.6.
[Figure: a Paraconsistent Analysis Node analyzing Proposition A supplies its Interval of Evidence φEA to a Paraconsistent Analyzer Cube, which analyzes Proposition B from the inputs μ2 and λ2 and produces the outputs μER and φEB.]
Figure 6.6 Paraconsistent Analysis Network with a Paraconsistent Analyzer Cube
6.2.2 Construction of a Paraconsistent Analyzer Cube
An Analyzer Cube for the treatment of contradiction performs a Three-dimensional Paraconsistent Analysis and may be constructed as follows. Suppose two Propositions, A and B, are being analyzed in a Paraconsistent Analysis Network where high contradiction in Proposition A may invalidate the analysis done on the evidences received for Proposition B. In this way, if there is no contradiction in the analysis of Proposition A, the result of the analysis done on
Proposition B is totally accepted; otherwise it will be refuted, whatever the result. This condition may be achieved by making the Normalized Degree of Contradiction produced by the paraconsistent analysis done in the Node that analyzes Proposition A model the output of the analysis in the Analysis Node of Proposition B. Thus, the paraconsistent analysis of a Paraconsistent Analyzer Cube may be considered three-dimensionally. In our study, in the three-dimensional space of the x, y, z axes, the values of the Normalized Degree of Contradiction of Proposition A (μctrA) are plotted on the z axis, the values of the Favorable Degree of Evidence of Proposition B (μB) are plotted on the x axis, and the values of the Unfavorable Degree of Evidence of Proposition B (λB) are plotted on the y axis, according to figure 6.7.
Figure 6.7 Representation of the Normalized Degrees of Contradiction of Proposition A and of the Degrees of Evidence of Proposition B with the characteristic line segments of φEA.
In this figure, viewed from a position at an angle of 45º between the Evidence axes in the xy plane, the Resultant Interval of Evidence φEA of Proposition A is represented by the axis of its values, displaced from the origin where the Normalized Degree of Contradiction of Proposition A (μctrA) equals 0.5. We verify that when the External Normalized Degree of Contradiction is 0.5, the Resultant Interval of Evidence φEextA will have a value equal to 1. In this condition, the lattice formed by the Degrees of Evidence of Proposition B will be completely free to present the values computed by the Paraconsistent Analysis. Figure 6.8 shows this condition.
Figure 6.8 Paraconsistent Analysis Lattice of Proposition B totally released by the value of the Normalized Degree of Contradiction of Proposition A equal to 0.5.
The interpolation point in the cube is represented by three values of the type:
P(μB, λB, μctrext)
The extreme states at the vertices are represented by:
t ⇒ True = P(1, 0, 0.5) ⇒ μER = 1.0 and φEextA = 1.0
F ⇒ False = P(0, 1, 0.5) ⇒ μER = 0.0 and φEextA = 1.0
QT ⇒ Quasi-Inconsistent = P(1, 1, 0.5) ⇒ μER = 0.5 and φEextA = 0.0
Q⊥ ⇒ Quasi-Indeterminate = P(0, 0, 0.5) ⇒ μER = 0.5 and φEextA = 0.0
When the external Normalized Degree of Contradiction is 0.25, the Resultant Interval of Evidence φEextA will have a value equal to 0.5. In this condition the lattice formed by the Degrees of Evidence of Proposition B will be limited by the value of the Resultant Interval of Evidence φEextA, which indicates a level of Indetermination in Proposition A. The Resultant Degree of Evidence obtained in Proposition B may have a variation of up to 0.5, that is, between a minimum of 0.25 to refute the Proposition and a maximum of 0.75 to affirm the Proposition. Figure 6.9 shows this condition.
Figure 6.9 Paraconsistent Analysis lattice of Proposition B limited by the value of the Normalized Degree of Contradiction of Proposition A equal to 0.25.
The interpolation points will be unable to reach the vertices of the lattice, and their maximum values will be:
t→⊥ ⇒ True tending to Indeterminate: P(0.75, 0.25, 0.25) ⇒ μER = 0.75 and φEextA = 0.5
F→⊥ ⇒ False tending to Indeterminate: P(0.25, 0.75, 0.25) ⇒ μER = 0.25 and φEextA = 0.5
Qt→⊥ ⇒ Quasi-True tending to Indeterminate: P(0.5, 0.0, 0.25) ⇒ μER = 0.75 and φEextA = 0.5
QF→⊥ ⇒ Quasi-False tending to Indeterminate: P(0.0, 0.5, 0.25) ⇒ μER = 0.25 and φEextA = 0.5
T→⊥ ⇒ Inconsistent tending to Indeterminate: P(0.75, 0.75, 0.25) ⇒ μER = 0.5 and φEextA = 0.5
Q⊥→F ⇒ Quasi-Indeterminate tending to False: P(0.5, 1.0, 0.25) ⇒ μER = 0.25 and φEextA = 0.5
Q⊥→t ⇒ Quasi-Indeterminate tending to True: P(1.0, 0.50, 0.25) ⇒ μER = 0.75 and φEextA = 0.5
Q⊥ ⇒ Quasi-Indeterminate: P(0.25, 0.25, 0.25) ⇒ μER = 0.5 and φEextA = 0.5
When the external Normalized Degree of Contradiction is 0.0, the Resultant Interval of Evidence φEextA will have a value equal to 0.0. In this condition, the lattice formed by the Degrees of Evidence of Proposition B will be totally limited by the value of the Resultant Interval of Evidence φEextA, which indicates the maximum level of Indetermination in Proposition A. The Resultant Degree of Evidence obtained in Proposition B must not have any variation. Figure 6.10 shows this condition.
Figure 6.10 Paraconsistent Analysis Lattice of Proposition B limited by the value of the Normalized Degree of Contradiction of Proposition A equal to 0.0.
The only interpolation point is attracted to a state of total Indetermination, represented by:
⊥ ⇒ Indeterminate = P(0.5, 0.5, 0.0) ⇒ μER = 0.5 and φEextA = 0.0
When the External Normalized Degree of Contradiction is 0.75, the Resultant Interval of Evidence φEextA will have a value equal to 0.5. In this condition the lattice formed by the Degrees of Evidence of Proposition B will be limited by the value of the Resultant Interval of Evidence φEextA, which indicates a level of Inconsistency in Proposition A. The Resultant Degree of Evidence obtained in Proposition B may have a variation of up to 0.5, that is, between a minimum of 0.25 to refute the Proposition and a maximum of 0.75 to affirm the Proposition. Figure 6.11 shows this condition.
Figure 6.11 Paraconsistent Analysis Lattice of Proposition B limited by the value of the Normalized Degree of Contradiction of Proposition A equal to 0.75.
The interpolation points will be unable to reach the vertices of the lattice and will have maximum values at:
t→T ⇒ True tending to Inconsistent: P(0.75, 0.25, 0.75) ⇒ μER = 0.75 and φEextA = 0.5
F→T ⇒ False tending to Inconsistent: P(0.25, 0.75, 0.75) ⇒ μER = 0.25 and φEextA = 0.5
Qt→T ⇒ Quasi-True tending to Inconsistent: P(0.5, 0.0, 0.75) ⇒ μER = 0.75 and φEextA = 0.5
QF→T ⇒ Quasi-False tending to Inconsistent: P(0.0, 0.5, 0.75) ⇒ μER = 0.25 and φEextA = 0.5
⊥→T ⇒ Indeterminate tending to Inconsistent: P(0.25, 0.25, 0.75) ⇒ μER = 0.5 and φEextA = 0.5
QT→F ⇒ Quasi-Inconsistent tending to False: P(0.5, 1.0, 0.75) ⇒ μER = 0.25 and φEextA = 0.5
QT→t ⇒ Quasi-Inconsistent tending to True: P(1.0, 0.75, 0.75) ⇒ μER = 0.75 and φEextA = 0.5
QT ⇒ Quasi-Inconsistent: P(0.75, 0.75, 0.75) ⇒ μER = 0.5 and φEextA = 0.5
When the External Normalized Degree of Contradiction is 1.0, the Resultant Interval of Evidence φEextA will have a value equal to 0.0. In this condition, the lattice formed by the Degrees of Evidence of Proposition B will be totally limited by the value of the Resultant Interval of Evidence φEextA, which indicates the maximum level of Inconsistency in Proposition A. The Resultant Degree of Evidence obtained in Proposition B must not have any variation. Figure 6.12 shows this condition.
Figure 6.12 Paraconsistent Analysis Lattice of Proposition B limited by the value of the Normalized Degree of Contradiction of Proposition A equal to 1.0.
These states may be represented in two ways: with the Normalized Degree of Contradiction, or with the External Interval of Evidence, on the z axis.

LOGICAL STATES                 P(μ, λ, μext)       P(μ, λ, φext)
t  ⇒ True                      P(1.0, 0.0, 0.5)    P(1.0, 0.0, 1.0)
F  ⇒ False                     P(0.0, 1.0, 0.5)    P(0.0, 1.0, 1.0)
QT ⇒ Quasi-Inconsistent        P(1.0, 1.0, 0.5)    P(1.0, 1.0, 1.0)
Q⊥ ⇒ Quasi-Indeterminate       P(0.0, 0.0, 0.5)    P(0.0, 0.0, 1.0)
T  ⇒ Inconsistent              P(0.5, 0.5, 1.0)    P(0.5, 0.5, 0.0)
⊥  ⇒ Indeterminate             P(0.5, 0.5, 0.0)    P(0.5, 0.5, 0.0)
The only interpolation point is attracted to a state of total Inconsistency, represented by:
T ⇒ Inconsistent = P(0.5, 0.5, 1.0) ⇒ μER = 0.5 and φEextA = 0.0
The Analyzer Cube with all its maximum states is seen in figure 6.13.
Figure 6.13 Paraconsistent Analysis Cube with maximum values represented in the vertices
6.3 Algorithms of the Paraconsistent Analyzer Cube
As seen before, considering two independent propositions, A and B, in the Paraconsistent Analyzer Cube the external value of the Resultant Interval of Evidence of Proposition A will model the output of the analysis carried out on the evidences of Proposition B. This modeling may also be done with the Normalized Degree of Contradiction of Proposition A. As the analysis of Proposition B now depends on the value obtained for A, the analysis must follow one of the sequences described as follows.
6.3.1 Modeling of the Paraconsistent Analyzer Cube with the Value of the External Interval of Evidence
The External Resultant Interval of Evidence φEextA is an incoming value represented in the form P(μ, λ, φEext). The symbol of the Analyzer Cube is shown in figure 6.14.
[Figure: symbol of the Paraconsistent Analyzer Cube for Proposition P2, a PAL2v block with inputs μ, λ and φEext, and outputs μER and φEint.]
Figure 6.14 Paraconsistent Analyzer Cube modeled with the External Interval of Evidence.
The sequence describing the action of the Paraconsistent Analyzer Cube is:
1. With the value of the External Resultant Interval of Evidence φEextA, which comes from Proposition A, we compute, using equations (4.8) and (4.9), the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False:
μEmaxt = (1 + φEextA) / 2   (6.6)
μEmaxF = (1 − φEextA) / 2   (6.7)
2. We determine the value of the Resultant Degree of Evidence from Proposition B by using equation (4.2), reproduced below:
μE = ((μB − λB) + 1) / 2
3. The following conditions are verified:
If μE ≥ μEmaxt, then the value of the Degree of Certainty DC is φEextA and the Degree of Contradiction Dct is calculated by equation (3.22): Dct = 1 − φEext
If μE ≤ μEmaxF, then the value of the Degree of Certainty DC is −φEextA and the Degree of Contradiction Dct is calculated by equation (3.22): Dct = 1 − φEext
Else: we compute the values of the Degree of Certainty DC and of the Degree of Contradiction Dct using the Degrees of Evidence from Proposition B.
4. We compute the value of the output Degree of Evidence by equation (4.3), reproduced below:
μER = (DCR + 1) / 2
5. The value of μER is presented as the output result.
-------------------------
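A hedged Python sketch of this sequence follows (function and variable names are ours, not the book's). It reproduces Example 6.3 below: with φEextA = 0.63 and P(0.88, 0.15), the result is clipped by Proposition A and μER ≈ 0.7384.

```python
import math

def analyzer_cube(mu_b, lam_b, phi_ext_a):
    """Sequence of Section 6.3.1: cube modeled by the External Interval of Evidence."""
    mu_max_t = (1 + phi_ext_a) / 2           # equation (6.6)
    mu_max_f = (1 - phi_ext_a) / 2           # equation (6.7)
    mu_e = (mu_b - lam_b + 1) / 2            # equation (4.2)
    if mu_e >= mu_max_t:                     # clipped toward True by Proposition A
        dc, dct = phi_ext_a, 1 - phi_ext_a   # equation (3.22)
    elif mu_e <= mu_max_f:                   # clipped toward False by Proposition A
        dc, dct = -phi_ext_a, 1 - phi_ext_a
    else:                                    # Proposition B analyzed normally
        dc, dct = mu_b - lam_b, (mu_b + lam_b) - 1
    d = math.hypot(1 - abs(dc), dct)
    dcr = (1 - d) if dc > 0 else (d - 1)     # equations (3.20)/(3.21)
    return (dcr + 1) / 2                     # equation (4.3)

# Example 6.3: phi_ext_a = 0.63 and P(0.88, 0.15) give mu_ER of about 0.7384.
print(round(analyzer_cube(0.88, 0.15, 0.63), 4))
```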
Example 6.3
Suppose that a Paraconsistent Analyzer Cube receives a signal of External Interval of Evidence originated from the analysis of a certain Proposition A, with a value φEextA of 0.63. The other two inputs are the Degrees of Evidence originated from information sources related to Proposition B, with the values specified below:
Information source 1: μ1 = 0.88
Information source 2: μ2 = 0.85
a) Compute the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence from Proposition B.
c) Present the value of the output Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
Resolution:
a) Considering that the Unfavorable Degree of Evidence of Proposition B comes from information source 2, we have in Proposition B:
Information source 1: μ1 = 0.88 = Favorable Degree of Evidence from B
Information source 2: μ2 = 0.85, so λ = 1 − 0.85, λ = 0.15 = Unfavorable Degree of Evidence from B
In PAL2v the annotation is: (μ, λ) = (0.88, 0.15)
The Paraconsistent Logical Signal is: P(0.88, 0.15)
Considering the External Interval of Evidence φEextA = 0.63, we compute by equations (6.6) and (6.7) the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False:
μEmaxt = (1 + 0.63) / 2
μEmaxt = 0.815
μEmaxF = (1 − 0.63) / 2
μEmaxF = 0.185
b) From equation (4.2) we compute the Degree of Evidence of the analysis:
μE = (0.88 − 0.15 + 1) / 2
μE = 0.865
c) As μE ≥ μEmaxt, the value of the Degree of Certainty DC is φEextA, and the value of the Degree of Contradiction Dct is calculated by equation (3.22):
DC = 0.63
Dct = 1 − 0.63
Dct = 0.37
As DC > 0, we compute the Real Degree of Certainty through equation (3.20):
DCR = 1 − √((1 − |0.63|)² + 0.37²)
DCR = 1 − 0.523259
DCR = 0.476741
From the value of the Real Degree of Certainty we compute the value of the Real Degree of Evidence through equation (4.3):
μER = (0.476741 + 1) / 2
μER = 0.738370
-------------------------
Example 6.4
Suppose there was a change in the Paraconsistent Analyzer Cube of the previous example. The value of the External Interval of Evidence originated from the analysis of Proposition A remained 0.63. The other two inputs, the Degrees of Evidence originated from information sources related to Proposition B, now have the values:
Information source 1: μ1 = 0.18
Information source 2: μ2 = 0.15
a) Compute the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence from Proposition B.
c) Present the value of the output Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
Resolution:
a) Considering that the Unfavorable Degree of Evidence of Proposition B comes from information source 2, we have in Proposition B:
Information source 1: μ1 = 0.18 = Favorable Degree of Evidence from B
Information source 2: μ2 = 0.15, so λ = 1 − 0.15, λ = 0.85 = Unfavorable Degree of Evidence from B
In PAL2v the annotation is: (μ, λ) = (0.18, 0.85)
The Paraconsistent Logical Signal is: P(0.18, 0.85)
Considering the External Interval of Evidence φEextA = 0.63, we compute by equations (6.6) and (6.7) the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False:
μEmaxt = (1 + 0.63) / 2
μEmaxt = 0.815
μEmaxF = (1 − 0.63) / 2
μEmaxF = 0.185
b) From equation (4.2) we compute the Degree of Evidence of the analysis:
μE = (0.18 − 0.85 + 1) / 2
μE = 0.165
c) As μE ≤ μEmaxF, the value of the Degree of Certainty DC is −φEextA, and the value of the Degree of Contradiction Dct is calculated by equation (3.22):
DC = −0.63
Dct = 1 − 0.63
Dct = 0.37
As DC < 0, we compute the Real Degree of Certainty through equation (3.21):
DCR = √((1 − |−0.63|)² + 0.37²) − 1
DCR = 0.523259 − 1
DCR = −0.476741
From the value of the Real Degree of Certainty we compute the value of the Real Degree of Evidence through equation (4.3):
μER = (−0.476741 + 1) / 2
μER = 0.261630
-----------------------------
The typical Paraconsistent Analyzer Cube algorithm is presented as follows.
6.3.2 Paraconsistent Analyzer Cube Algorithm modeled with Interval of Evidence
[Cell symbol: a PAL2v Analysis block with inputs μ, λ and φEext, and outputs μER and φE(±).]
1. Enter the input values
μ */ Favorable Degree of Evidence 0 ≤ μ ≤ 1
λ */ Unfavorable Degree of Evidence 0 ≤ λ ≤ 1
φEext */ Interval of Evidence 0 ≤ φEext ≤ 1
2. Verify the condition:
If φEext ≤ 0.25, then do S1 = 0.5 and S2 = φEext: Indefinition, and go to item 15
Else go to the next item
3. Compute the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False
μEmaxt = (1 + φEext) / 2
μEmaxF = (1 − φEext) / 2
4. Compute the Resultant Degree of Evidence
μE = ((μ − λ) + 1) / 2
5. Verify the conditions:
If μE ≥ μEmaxt, do DC = φEext, Dct = 1 − φEext, φEext = φE(+), and go to item 11
If μE ≤ μEmaxF, do DC = −φEext, Dct = 1 − φEext, φEext = φE(+), and go to item 11
Else go to the next item
6. Calculate the Normalized Degree of Contradiction
μctr = (μ + λ) / 2
7. Calculate the Resultant Interval of Evidence
φEint = 1 − |2μctr − 1|
8. Calculate the Degree of Certainty
DC = μ − λ
9. Calculate the distance d
d = √((1 − |DC|)² + Dct²)
10. Determine the output signal
If φEint ≤ 0.25 or d ≥ 1, then do S1 = 0.5 and S2 = φEint: Indefinition, and go to item 15
Else go to the next item
11. Determine the Real Degree of Certainty
DCR = (1 − d) if DC > 0
DCR = (d − 1) if DC < 0
12. Determine the sign of the Resultant Interval of Evidence
If μctr < 0.5: negative sign, φEint = φE(−)
If μctr > 0.5: positive sign, φEint = φE(+)
If μctr = 0.5: zero sign, φEint = φE(0)
13. Calculate the Resultant Real Degree of Evidence
μER = (DCR + 1) / 2
14. Present the output results
Do S1 = μER and S2 = φE(±)
15. End
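For completeness, the sketch below (again under our own naming) adds the algorithm's two Indefinition checks (items 2 and 10) around the same computation. Since item 9 uses Dct without restating it, we take Dct = (μ + λ) − 1 as in Algorithm 6.1.4; that is an assumption of ours. It reproduces Examples 6.3 and 6.4.

```python
import math

def cube_phi_algorithm(mu, lam, phi_ext):
    """Sketch of the algorithm of Section 6.3.2 (cube modeled by phi_Eext)."""
    if phi_ext <= 0.25:                      # item 2: external Indefinition
        return 0.5, phi_ext
    mu_max_t = (1 + phi_ext) / 2             # item 3, equation (6.6)
    mu_max_f = (1 - phi_ext) / 2             # item 3, equation (6.7)
    mu_e = (mu - lam + 1) / 2                # item 4, equation (4.2)
    if mu_e >= mu_max_t:                     # item 5: clipped toward True
        dc, dct, phi = phi_ext, 1 - phi_ext, phi_ext
    elif mu_e <= mu_max_f:                   # item 5: clipped toward False
        dc, dct, phi = -phi_ext, 1 - phi_ext, phi_ext
    else:                                    # items 6-9: ordinary analysis of B
        dc, dct = mu - lam, (mu + lam) - 1   # Dct assumed as in Algorithm 6.1.4
        phi = 1 - abs(dct)                   # internal Interval of Evidence
        d = math.hypot(1 - abs(dc), dct)
        if phi <= 0.25 or d >= 1:            # item 10: internal Indefinition
            return 0.5, phi
    d = math.hypot(1 - abs(dc), dct)         # items 9 and 11-13
    dcr = (1 - d) if dc > 0 else (d - 1)
    return (dcr + 1) / 2, phi

# Examples 6.3 and 6.4:
print(cube_phi_algorithm(0.88, 0.15, 0.63))  # approximately (0.7384, 0.63)
print(cube_phi_algorithm(0.18, 0.85, 0.63))  # approximately (0.2616, 0.63)
```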
6.3.3 Modeling of a Paraconsistent Analyzer Cube with the Value of the External Degree of Contradiction
The External Normalized Degree of Contradiction μctrext is an incoming value represented in the form P(μ, λ, μctrext). The symbol of the Analyzer Cube is shown in figure 6.15.
[Figure: symbol of the Paraconsistent Analyzer Cube for Proposition P2, a PAL2v block with inputs μ, λ and μctrext, and outputs μER and φEint.]
Figure 6.15 Paraconsistent Analyzer Cube modeled with the External Normalized Degree of Contradiction.
The sequence describing the action of the Paraconsistent Analyzer Cube is:
1. With the value of the External Normalized Degree of Contradiction μctrext, which comes from Proposition A, we compute, using equation (4.7), the value of the External Resultant Interval of Evidence:
φEextA = 1 − |2μctrext − 1|
2. With the value of the External Resultant Interval of Evidence φEextA, we compute, using equations (6.6) and (6.7), the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False:
μEmaxt = (1 + φEextA) / 2
μEmaxF = (1 − φEextA) / 2
3. Determine the value of the Resultant Degree of Evidence from Proposition B through equation (4.2):
μE = ((μB − λB) + 1) / 2
4. The following conditions are verified:
If μE ≥ μEmaxt, then the value of the Degree of Certainty DC is φEextA and the Degree of Contradiction Dct is calculated by equation (3.22): Dct = 1 − φEext
If μE ≤ μEmaxF, then the value of the Degree of Certainty DC is −φEextA and the Degree of Contradiction Dct is calculated by equation (3.22): Dct = 1 − φEext
Else: we compute the values of the Degree of Certainty DC and of the Degree of Contradiction Dct using the Degrees of Evidence from Proposition B.
5. We compute the value of the output Degree of Evidence by equation (4.3), reproduced below:
μER = (DCR + 1) / 2
6. The value of μER is presented as the output result.
--------------------------
Example 6.5
Suppose that a Paraconsistent Analyzer Cube receives a signal of External Degree of Contradiction originated from the analysis of a certain Proposition A, with a value of 0.23. The other two inputs are Degrees of Evidence originated from information sources related to Proposition B, with the following values:
Information source 1: μ1 = 0.92
Information source 2: μ2 = 0.63
a) Compute the value of the External Interval of Evidence φEextA and the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence from Proposition B.
c) Present the value of the output Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
Resolution:
a) Considering that the Unfavorable Degree of Evidence of Proposition B comes from information source 2, we have in Proposition B:
Information source 1: μ1 = 0.92 = Favorable Degree of Evidence from B
Information source 2: μ2 = 0.63, so λ = 1 − 0.63, λ = 0.37 = Unfavorable Degree of Evidence from B
In PAL2v the annotation is: (μ, λ) = (0.92, 0.37)
The Paraconsistent Logical Signal is: P(0.92, 0.37)
Considering the External Normalized Degree of Contradiction μctrext = 0.23, we compute the value of the External Resultant Interval of Evidence through equation (4.7):
φEextA = 1 − |2 × 0.23 − 1|
φEextA = 0.46
Considering the External Interval of Evidence φEextA = 0.46, we compute by equations (6.6) and (6.7) the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False:
μEmaxt = (1 + 0.46) / 2
μEmaxt = 0.73
μEmaxF = (1 − 0.46) / 2
μEmaxF = 0.27
b) From equation (4.2) we compute the Degree of Evidence of the analysis:
μE = (0.92 − 0.37 + 1) / 2
μE = 0.775
c) As μE ≥ μEmaxt, the value of the Degree of Certainty DC is φEextA, and the value of the Degree of Contradiction Dct is calculated by Dct = 1 − φEextA:
DC = 0.46
Dct = 1 − 0.46
Dct = 0.54
As DC > 0, we compute the Real Degree of Certainty through equation (3.20):
DCR = 1 − √((1 − |0.46|)² + 0.54²)
DCR = 1 − 0.763675
DCR = 0.236325
From the value of the Real Degree of Certainty we compute the value of the Real Degree of Evidence through equation (4.3):
μER = (0.236325 + 1) / 2
μER = 0.618162
----------------------------
Example 6.6
Suppose there was a change in the Paraconsistent Analyzer Cube of the previous example. The value of the External Normalized Degree of Contradiction originated from the analysis of Proposition A remained 0.23. The other two inputs, the Degrees of Evidence originated from information sources related to Proposition B, now have the new values:
Information source 1: μ1 = 0.12
Information source 2: μ2 = 0.11
a) Compute the value of the External Interval of Evidence and the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence from Proposition B.
c) Present the value of the output Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
Resolution:
a) Considering that the Unfavorable Degree of Evidence of Proposition B comes from information source 2, we have in Proposition B:
Information source 1: μ1 = 0.12 = Favorable Degree of Evidence from B
Information source 2: μ2 = 0.11, so λ = 1 − 0.11, λ = 0.89 = Unfavorable Degree of Evidence from B
In PAL2v the annotation is: (μ, λ) = (0.12, 0.89)
The Paraconsistent Logical Signal is: P(0.12, 0.89)
Considering the value of the External Normalized Degree of Contradiction μctrext = 0.23, we compute the value of the External Resultant Interval of Evidence through equation (4.7):
φEextA = 1 − |2 × 0.23 − 1|
φEextA = 0.46
Considering the External Interval of Evidence φEextA = 0.46, we compute by equations (6.6) and (6.7) the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False:
μEmaxt = (1 + 0.46) / 2
μEmaxt = 0.73
μEmaxF = (1 − 0.46) / 2
μEmaxF = 0.27
b) From equation (4.2) we compute the Degree of Evidence of the analysis:
μE = (0.12 − 0.89 + 1) / 2
μE = 0.115
c) As μE ≤ μEmaxF, the value of the Degree of Certainty DC is −φEextA, and the value of the Degree of Contradiction Dct is calculated by Dct = 1 − φEextA:
DC = −0.46
Dct = 1 − 0.46
Dct = 0.54
As DC < 0, we compute the Real Degree of Certainty through equation (3.21):
DCR = √((1 − |−0.46|)² + 0.54²) − 1
DCR = 0.763675 − 1
DCR = −0.236325
From the value of the Real Degree of Certainty we compute the value of the Real Degree of Evidence through equation (4.3):
μER = (−0.236325 + 1) / 2
μER = 0.381838
--------------------------
The algorithm of the Paraconsistent Analyzer Cube with this kind of modeling is presented as follows.
6.3.4 Paraconsistent Analyzer Cube Algorithm with the External Degree of Contradiction μctrext
[Cell symbol: a PAL2v Analysis block with inputs μ, λ and μctrext, and outputs μER and φE(±).]
1. Enter the input values
μ */ Favorable Degree of Evidence 0 ≤ μ ≤ 1
λ */ Unfavorable Degree of Evidence 0 ≤ λ ≤ 1
μctrext */ Normalized Degree of Contradiction 0 ≤ μctrext ≤ 1
2. Compute the value of the External Resultant Interval of Evidence φEext
φEext = 1 − |2μctrext − 1|
3. Verify the condition:
If φEext ≤ 0.25, then do S1 = 0.5 and S2 = φEext: Indefinition, and go to item 16
Else go to the next item
4. Compute the maximum Degree of Evidence tending to the logical state True and the maximum Degree of Evidence tending to the logical state False
μEmaxt = (1 + φEext) / 2
μEmaxF = (1 − φEext) / 2
5. Compute the Resultant Degree of Evidence
μE = ((μ − λ) + 1) / 2
6. Verify the conditions:
If μE ≥ μEmaxt, do DC = φEext, Dct = 1 − φEext, φEext = φE(+), and go to item 12
If μE ≤ μEmaxF, do DC = −φEext, Dct = 1 − φEext, φEext = φE(+), and go to item 12
Else go to the next item
7. Calculate the Normalized Degree of Contradiction
μctr = (μ + λ) / 2
8. Calculate the Resultant Interval of Evidence
φEint = 1 − |2μctr − 1|
9. Calculate the Degree of Certainty
DC = μ − λ
10. Calculate the distance d
d = √((1 − |DC|)² + Dct²)
11. Determine the output signal
If φEint ≤ 0.25 or d ≥ 1, then do S1 = 0.5 and S2 = φEint: Indefinition, and go to item 16
Else go to the next item
12. Determine the Real Degree of Certainty
DCR = (1 − d) if DC > 0
DCR = (d − 1) if DC < 0
13. Determine the sign of the Resultant Interval of Evidence
If μctr < 0.5: negative sign, φEint = φE(−)
If μctr > 0.5: positive sign, φEint = φE(+)
If μctr = 0.5: zero sign, φEint = φE(0)
14. Calculate the Resultant Real Degree of Evidence
μER = (DCR + 1) / 2
15. Present the output results
Do S1 = μER and S2 = φE(±)
16. End
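The only computational difference from the previous algorithm is item 2, the conversion from μctrext to the External Interval of Evidence; a one-line Python sketch (our naming) is enough, after which the analysis proceeds exactly as in Section 6.3.2.

```python
def phi_from_contradiction(mu_ctr_ext):
    """Item 2 of the algorithm: External Interval of Evidence, equation (4.7)."""
    return 1 - abs(2 * mu_ctr_ext - 1)

# Example 6.5: an external contradiction of 0.23 yields phi_Eext of about 0.46,
# and the remaining items mirror the algorithm of Section 6.3.2.
print(phi_from_contradiction(0.23))   # approximately 0.46
```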
6.4 Paraconsistent Analysis Network Topologies with Analyzer Cubes
The application of Analyzer Cubes in proposition analyses enables the development of different Paraconsistent Analysis Network topologies. A few of them are described as follows.
6.4.1 Paraconsistent Analysis Network with one PAN and n Paraconsistent Analyzer Cubes
Proposition A may produce an External Interval of Evidence signal to model proposition analyses through n Analyzer Cubes. Figure 6.16 shows this possibility, where the existence of high contradiction in Proposition A reduces the certainty of the analyses referring to the n propositions in the network.
[Figure: a Paraconsistent Analysis Node analyzing Proposition A (inputs μ1, λ1) supplies its Interval of Evidence φEA to Paraconsistent Analyzer Cubes analyzing Proposition B (inputs μ2, λ2), Proposition C (inputs μ3, λ3) and the other n propositions, each producing its outputs μER and φE.]
Figure 6.16 Paraconsistent Analysis Network with one Analysis Node producing Interval of Evidence to n Paraconsistent Analyzer Cubes.
6.4.2 Paraconsistent Analysis Network with Inconsistency Filter composed of Paraconsistent Analyzer Cubes
The Paraconsistent Analyzer Cube may be utilized to configure Inconsistency Filters in an Analysis and Decision Network. This is an important configuration for signal analysis in pattern comparison. The configuration presented in figure 6.17 shows this possibility, where two propositions A and B are analyzed by Paraconsistent Analyzer Cubes. In this configuration, each analysis carried out by the cubes generates its respective Interval of Evidence, and these intervals mutually model the two cubes.
[Figure: Paraconsistent Analyzer Cube 1 analyzes Proposition A (inputs μA, λA, modeled by φEB) and produces μE1 and φE1; Paraconsistent Analyzer Cube 2 analyzes Proposition B (inputs μB, λB, modeled by φEA) and produces μE2 and φE2. A final Paraconsistent Analysis Node for the Object Proposition receives μ1 and λ3 = 1 − μE2 and produces μER and φE0.]
Figure 6.17 Configuration with two Cubes being mutually modeled for the filtering of Inconsistencies.
We verify that if the level of contradiction in one of the two propositions A or B rises above a determined value, specified through their Intervals of Evidence, the next analysis, done by a Paraconsistent Analysis Node, is invalidated. This way, only evidences that present low values of contradiction will be analyzed in the Paraconsistent Analysis Node and will generate considerable values of Degree of Evidence. A Paraconsistent Analysis Network may be constructed with several modules using this kind of configuration, thus constituting a Signal Classifier System.
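To make the data flow concrete, here is a speculative Python sketch of one possible wiring of figure 6.17 (all names are ours, and the exact connections are only partially recoverable from the figure, so treat this as an illustration rather than the book's definitive circuit): two cubes mutually modeled by each other's raw Interval of Evidence, feeding a final PAN with μ = μE1 and λ = 1 − μE2.

```python
import math

def cube(mu, lam, phi_ext):
    """Compact Analyzer Cube (sequence of Section 6.3.1); returns (mu_er, phi_e)."""
    mu_e = (mu - lam + 1) / 2
    if mu_e >= (1 + phi_ext) / 2:            # clipped toward True
        dc, dct = phi_ext, 1 - phi_ext
    elif mu_e <= (1 - phi_ext) / 2:          # clipped toward False
        dc, dct = -phi_ext, 1 - phi_ext
    else:                                    # ordinary PAL2v analysis
        dc, dct = mu - lam, (mu + lam) - 1
    d = math.hypot(1 - abs(dc), dct)
    dcr = (1 - d) if dc > 0 else (d - 1)
    return (dcr + 1) / 2, 1 - abs((mu + lam) - 1)

def inconsistency_filter(mu_a, lam_a, mu_b, lam_b):
    """Two mutually modeling cubes feeding a PAN: one reading of figure 6.17."""
    phi_ea = 1 - abs((mu_a + lam_a) - 1)     # raw Interval of Evidence of A
    phi_eb = 1 - abs((mu_b + lam_b) - 1)     # raw Interval of Evidence of B
    # Invalidate the next analysis when either proposition is too contradictory
    # (the 0.25 threshold is the one used by the algorithms in this chapter).
    if phi_ea <= 0.25 or phi_eb <= 0.25:
        return 0.5                           # Indefinition
    mu_e1, _ = cube(mu_a, lam_a, phi_eb)     # Cube 1 analyzes Proposition A
    mu_e2, _ = cube(mu_b, lam_b, phi_ea)     # Cube 2 analyzes Proposition B
    mu, lam = mu_e1, 1 - mu_e2               # assumed final PAN inputs
    dc, dct = mu - lam, (mu + lam) - 1
    d = math.hypot(1 - abs(dc), dct)
    dcr = (1 - d) if dc > 0 else (d - 1)
    return (dcr + 1) / 2                     # Resultant Degree of Evidence
```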
6.5 Final Remarks
In this chapter we presented a few configurations with interconnected Paraconsistent Analysis Systems (PANs) forming Uncertainty Treatment Networks. We studied ways to improve the PAL2v algorithms, such as the process of disabling inputs with undefined values. Among the different topologies, we presented a special Paraconsistent Analysis Network (PANet) which deals with contradictions in a three-dimensional way. Using the concepts of PAL2v, an intense study of this special configuration, called the Paraconsistent Analyzer Cube, was carried out. The algorithm of the Analyzer Cube was specially designed to develop the modeling of a three-dimensional paraconsistent analysis. This Cube may be interconnected into Uncertainty Treatment Networks in several ways, mainly when one wishes to analyze two propositions.
This three-dimensional configuration is important in the analysis of data originated from Uncertain Knowledge, since Decision Systems search for information through the analyses of several propositions which, when considered altogether, provide the evidences the system needs to make decisions with greater precision and reliability. Thus, contradictions in the analysis of a certain proposition may affect the value of the Degree of Certainty supplied by the analysis of an Object Proposition. By receiving input signals representative of evidences, the three-dimensional analysis in the Analyzer Cube may be configured in a number of ways, depending on the project and the objective of the analysis to be carried out.
The modeling through Paraconsistent Analyzer Cubes provides safe information about which propositions have a higher or lower Degree of Contradiction. With this information the system is able to make more reliable decisions, besides having the values to act on the input signal control, weakening or strengthening evidences to reduce contradictions. Paraconsistent Analysis Networks (PANets) are able to treat information signals originated from Uncertain Knowledge databases without the conflicts invalidating the results. The special characteristics of Paraconsistent Analysis Networks make them a good option for application in several fields of Artificial Intelligence. The analyses done through the configurations presented in this chapter produce robust decision systems able to bring results with a high level of reliability.
In the following chapters we will study the similarities between the PAL2v application methodology and the operation of the human brain. These studies lead to the fundamentals of the Paraconsistent Artificial Neural Networks (PANNets), which will be presented in detail.
Exercises
6.1 Define an Uncertainty Treatment Paraconsistent Network.
6.2 How does a Paraconsistent Analysis Network work when it analyzes different propositions?
6.3 Enumerate the rules that govern the operation of a typical Paraconsistent Analysis Network.
6.4 Describe the operation of a basic configuration of a Paraconsistent Analysis Network (PANet).
6.5 How is a simple Paraconsistent Analysis Network constructed?
6.6 What is the purpose of performing the disabling of PANs in a Paraconsistent Analysis Network (PANet)?
6.7 Describe the basic operation of a Paraconsistent Analysis Network (PANet) with disabling of PANs.
6.8 Suppose a PAN receives two input signals of evidences:
Information source 1: μ1 = 0.5 with Interval of Evidence φ1 = 1
Information source 2: μ2 = 0.76 with Interval of Evidence φ2 = 0.84
Check if there are sources with undefined values and, in case there are, perform the disabling of the undefined source and present the Resultant Degree of Evidence obtained in the PAN.
6.9 Suppose a PAN receives two input signals of evidences:
Information source 1: μ1 = 0.79 with Interval of Evidence φ1 = 0.8
Information source 2: μ2 = 0.5 with Interval of Evidence φ2 = 1
Check if there are sources with undefined values and, in case there are, perform the disabling of the undefined source and present the Resultant Degree of Evidence obtained in the PAN.
6.10 Suppose a PAN receives two input signals of evidences:
Information source 1: μ1 = 0.87 with Interval of Evidence φ1 = 0.2
Information source 2: μ2 = 0.7 with Interval of Evidence φ2 = 0.79
Check if there are sources with undefined values and, in case there are, perform the disabling of the undefined source and present the Resultant Degree of Evidence obtained in the PAN.
6.11 How is a Three-dimensional Paraconsistent Analysis Network composed?
6.12 What is a Paraconsistent Analyzer Cube?
6.13 Describe the basic operation of a Paraconsistent Analyzer Cube.
6.14 Suppose a Paraconsistent Analyzer Cube is receiving an External Interval of Evidence signal originated from the analysis of a certain Proposition A, with a value of 0.81. The other inputs refer to the Degrees of Evidence originated from information sources related to Proposition B, with the following values:
Information source 1: μ1 = 0.89
Information source 2: μ2 = 0.67
a) Compute the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.15 Suppose a Paraconsistent Analyzer Cube is receiving an External Interval of Evidence signal originated from the analysis of a certain Proposition A, with a value of 0.61. The other inputs refer to the Degrees of Evidence originated from information sources related to Proposition B, with the following values:
Information source 1: μ1 = 0.87
Information source 2: μ2 = 0.61
a) Compute the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.16 Suppose a Paraconsistent Analyzer Cube is receiving an External Interval of Evidence signal originated from the analysis of a certain Proposition A, with a value of 0.83. The other inputs refer to the Degrees of Evidence originated from information sources related to Proposition B, with the following values:
Information source 1: μ1 = 0.67
Information source 2: μ2 = 0.78
a) Compute the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.17 Suppose that in the Paraconsistent Analyzer Cube of the previous example the External Interval of Evidence signal originated from the analysis of Proposition A changed to a value of 0.4, while the Degrees of Evidence originated from the information sources related to Proposition B remained with the same values.
a) Compute the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.18 Suppose a Paraconsistent Analyzer Cube is receiving a signal of External Degree of Contradiction originated from the analysis of a certain Proposition A, with a value of 0.19. The other inputs refer to the Degrees of Evidence originated from information sources related to Proposition B, with the following values:
Information source 1: μ1 = 0.68
Information source 2: μ2 = 0.34
a) Compute the value of the External Interval of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.19 Suppose a Paraconsistent Analyzer Cube is receiving a signal of External Degree of Contradiction originated from the analysis of a certain Proposition A, with a value of 0.21. The other inputs refer to the Degrees of Evidence originated from information sources related to Proposition B, with the following values:
Information source 1: μ1 = 0.8
Information source 2: μ2 = 0.64
a) Compute the value of the External Interval of Evidence and the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.20 Suppose that in the Paraconsistent Analyzer Cube of the previous example the signal of the External Normalized Degree of Contradiction originated from the analysis of Proposition A changed to a value of 0.45, while the Degrees of Evidence originated from the information sources related to Proposition B remained with the same values.
a) Compute the value of the External Interval of Evidence and the maximum values of the External Degrees of Evidence.
b) Determine the value of the Resultant Degree of Evidence of Proposition B.
c) Present the output value of the Degree of Evidence according to the considerations of a Paraconsistent Analyzer Cube.
6.21 Use a common programming language and develop a computational program with the PAL2v Paraconsistent Analysis Algorithm with disabling of PANs.
6.22 Use a common programming language and develop a computational program with the Paraconsistent Analyzer Cube Algorithm modeled by the Interval of Evidence.
6.23 Use a common programming language and develop a computational program with the Paraconsistent Analyzer Cube Algorithm modeled by the Degree of Contradiction.
Part 3
Paraconsistent Artificial Neural Networks (PANNet)
CHAPTER 7
Paraconsistent Artificial Neural Cell
Introduction
This chapter initially presents a short introduction to Neural Computing. It brings some considerations about the behavior of the human brain and the basic concepts of the modeling applied in AI nowadays through Artificial Neural Networks. These considerations reveal strong similarities between the behavior of the brain and the methodology that rules Decision-Making Systems based on Paraconsistent Annotated Logic (PAL), as seen previously. From these initial considerations, we present the possibilities of using Paraconsistent Logic in the modeling of parts of the brain's functioning and its application in decision making. Following this reasoning, in this chapter we present the modeling of an algorithm which represents a Paraconsistent Artificial Neural Cell (PANC), constructed from the algorithms of the Paraconsistent Analysis Nodes (PANs) studied in the previous chapters.
7.1 Neural Computation and Paraconsistent Logic
Research related to Artificial Intelligence has incessantly tried to model the behavior of the brain; this is because obtaining a mathematical model that provides a complete understanding of its functions would make it possible to reproduce all the processing of brain signals on some machine. Computers have brought greater speed and memory capacity, presenting faster signal processing. It has become possible to simulate mathematical models of neurons and complex brain activities, thus creating a new research field called Computational Neuroscience or Neural Computation. In Neural Computation, the mathematical models of brain functions are transformed into computational programs simulated by computers. Therefore, generally speaking, we may define Neural Computation as the study of the computational properties of the human brain.
The subjects dealt with in Neural Computation range from the modeling of the signal processing of a single neuron to the study of complex neural models. It is common knowledge that the fundamental cell of the human brain is the neuron. Recent research shows that the number of neurons in the human brain is around 10¹¹, and each one of these neurons communicates continuously and in parallel with thousands of others. They process information signals and constitute an extremely complex structure. This extraordinary number of interconnected neurons implies a high level of complexity; this is what makes it difficult to find logical-mathematical theories that represent adequate models of the functionality of the brain.
Since the purpose is to understand the functioning of the brain, modeling it at several structural levels, Neural Computation uses different approaches and computational techniques. The main issues handled in Neural Computation research comprehend the search for computational procedures for signal propagation among neurons and their interconnection in complex neural networks. This research involves the modeling of large parts of the central nervous system of humans and animals. The use of mathematics to describe the complete functioning of the brain is known as the mechanical procedure. Its main feature is to follow a top-down approach, or "from top to bottom", where the global functioning is considered initially and the analysis then proceeds toward the parts. This kind of mechanical procedure resulted in Charles Babbage's analytical machine and in the Turing machine, which led to analogies and to considering a form of computational reasoning similar to that of the brain. However, differently from what was expected, the simple comparison between brain functioning and the computer brings many divergences.
[Figure: information from the environment, and the objective or situation to be controlled, pass through Recognition, then Judgment using knowledge and common sense ("if ... judgment ... then ... decision"), leading to Action.]
Figure 7.1 Detection and processing of signals in human beings.
It was seen that in a computer designed on the basis of the Turing machine, signals are treated sequentially; therefore, the present states always depend on the previous states, and the human brain does not work in the same way. A more detailed analysis shows that the human brain performs a more complex signal processing and that its decision-making manners are much richer. From these studies new mechanical models emerged, created from a different approach, to model parts of the human brain's functions. Despite the complexity of the brain, where the factors that determine its functions are still obscure, research has revealed some familiar characteristics which can be modeled. One way to do this modeling is through Artificial Neural Networks (ANNs), whose main characteristic is the attempt to create independent elements that behave similarly to human neurons. A typical Neural Network is composed of independent, autonomous, interconnected elements called artificial neurons.
7.1.1 A Basic Paraconsistent Artificial Cell (bPAC)
In the approach shown in the previous chapters, we verified that a Paraconsistent Analysis Node (PAN) treats incomplete and contradictory information. Its functioning presents a few characteristics similar to human behavior. By doing a partial analysis, we may consider that the results obtained in the treatment of contradictory signals with Paraconsistent Analysis Nodes (PANs) resemble, in broad terms, the functioning of the human brain.
[Figure: information from the environment, and the objective or situation to be controlled, are transduced into n Favorable Evidences μn and n Unfavorable Evidences λn; a PAL2v Analysis (processing and judgment, producing DC, φ and Dct) yields the Resultant Degree of Evidence μER, leading to Action.]
Figure 7.2 Simplified Scheme of PAL2v Actions
These considerations may lead us to conclude that making a decision from information acquired by the senses is a mental biological process which can be modeled by Paraconsistent Annotated Logic with annotation of two values (PAL2v). We may then construct algorithms that represent Paraconsistent Artificial Neural Cells, which will compose Decision Networks. The most precise interpretation of the PAL2v algorithms allows us to consider that:
1- The result of the analysis for decision making may be evaluated through the values of the Degree of Certainty DC and of the Degree of Contradiction Dct. These two output values may be transformed into the Degree of Evidence μER and the Normalized Degree of Contradiction μctr, respectively, with values between 0 and 1.
2- The value of the Degree of Contradiction Dct, which can be transformed by normalization into a value of Interval of Evidence φ, is an indicator whose main function is to inform the level of inconsistency that exists among the information received. A better representation may be done through the calculus of the Interval of Evidence φE.
Therefore, by the end of the processing of the information represented by the values of the input Degrees of Evidence μ and λ, a low value of Certainty or a high value of Contradiction will result in an Indefinition I. When comparing the values of the Degrees of Certainty and of Contradiction, limit values may be inserted in the analysis; this makes it possible to determine whether the result of the paraconsistent analysis is an Indefinition or not.
In figure 7.3 we present the flowchart and the paraconsistent analysis structure representative of a bPAC, which presents the characteristics of the PAL2v analysis.
[Figure: flowchart of the Basic Paraconsistent Artificial Cell (bPAC). Information values: Favorable Degree of Evidence μ (0 ≤ μ ≤ 1) and Unfavorable Degree of Evidence λ (0 ≤ λ ≤ 1). Limit values Cn (0 ≤ Cn ≤ 1): Vccs and Vcci for the Degree of Certainty, Vctcs and Vctci for the Degree of Contradiction. Calculi: DC = μ − λ and Dct = (μ + λ) − 1. If Dct falls outside the interval from Vctci to Vctcs, or DC falls between Vcci and Vccs, the output is Undefined (S1 = I, S2a = Dct, S2b = 0.5); if DC ≤ Vcci the output is False (S1 = F, S2a = Dct, S2b = DC); otherwise the output is True (S1 = t, S2a = Dct, S2b = DC).]
Figure 7.3 Flowchart and representation of the Basic Paraconsistent Artificial Cell (bPAC).
Based on these verifications, we can develop a functional cell which analyzes and treats signals according to the methodology applied in the fundamental concepts for the applications of PAL2v. We consider a Simplified Algorithm as a small cell that ponders the information values received through the sensors. Let us call it a Basic Paraconsistent Artificial Cell (bPAC). From the above, we may give the following description of a Basic Paraconsistent Artificial Cell (bPAC): "A Basic Paraconsistent Artificial Cell is an element able to, once a pair of Favorable and Unfavorable Degrees of Evidence (μ, λ) is presented at its input, supply an output result composed of one of the three logical states: Dct = Degree of Contradiction, DC = Degree of Certainty or I = Indefinition". A Basic Paraconsistent Artificial Cell (bPAC), as presented in the flowchart of the previous figure, is easily implemented in any programming language. This cell is constructed by using a special form for the interpretation of Paraconsistent Annotated Logic (PAL).
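As a concrete illustration, the flowchart of figure 7.3 can be sketched in a few lines of Python. This is a minimal sketch read off the flowchart, not the authors' own code; the function name bpac, the string labels and the assumption that the limit values share the scales of DC and Dct are our choices:

    def bpac(mu, lam, vccs, vcci, vctcs, vctci):
        # Minimal sketch of the bPAC of figure 7.3 (illustrative only).
        dc = mu - lam                      # Degree of Certainty: DC = mu - lambda
        dct = (mu + lam) - 1.0             # Degree of Contradiction: Dct = (mu + lambda) - 1
        if dct >= vctcs or dct <= vctci:   # contradiction outside tolerance
            return 'I', dct, 0.5           # S1 = I, S2a = Dct, S2b = 0.5
        if vcci < dc < vccs:               # certainty too weak to decide
            return 'I', dct, 0.5
        if dc >= vccs:
            return 't', dct, dc            # Output is True
        return 'F', dct, dc                # Output is False (DC <= Vcci)

For instance, with wide contradiction limits, bpac(0.9, 0.1, 0.6, -0.6, 0.6, -0.6) returns ('t', 0.0, 0.8), since DC = 0.8 reaches the truth limit while Dct = 0.0 stays within tolerance.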
7.2 The Standard Paraconsistent Artificial Neural Cell (sPANC)
A Paraconsistent Artificial Neural Cell is formed from a Basic Paraconsistent Artificial Cell (bPAC). In the same way the Basic Paraconsistent Artificial Cell was defined, a definition of a Paraconsistent Artificial Neural Cell (PANC) may be given as follows: "A Paraconsistent Artificial Neural Cell is an element able to, once a pair of Favorable and Unfavorable Degrees of Evidence (μ, λ) is presented at its input, supply an output result composed of a value of Resultant Degree of Evidence μE of the analysis and a value of Resultant Interval of Evidence φE".
[Figure 7.4 Representation of a Paraconsistent Artificial Neural Cell (PANC): the cell receives the Degrees of Evidence μ and λ, performs the paraconsistent analysis under external Adjustment Factors, and delivers the Resultant Interval of Evidence φE and the Resultant Degree of Evidence μE at the output.]
In a PANC, the values of the Resultant Degree of Evidence μE and of the Interval of Evidence φE are calculated exactly as was done in the Paraconsistent Analysis Nodes (PANs). Despite using the same equations, the difference between a Paraconsistent Analysis Node (PAN) and a Paraconsistent Artificial Neural Cell (PANC) is that the latter utilizes external factors to control the levels of Contradiction and Certainty permitted in the control of the analysis and in decision actions. Thus, to form this first Paraconsistent Neural Cell, which we call Standard Paraconsistent Artificial Neural Cell (sPANC), these new concepts are included.
7.2.1 sPANC Fundamental Concepts
We will describe the fundamental concepts and the basic elements needed to implement a Standard Paraconsistent Artificial Neural Cell (sPANC). We will start with the definitions of the factors which are the external limit values; these values will allow the modeling of the analysis. All the concepts and fundamentals of Paraconsistent Annotated Logic will be joined and utilized to implement a sPANC, which will generate the many types of Cells that compose a typical Paraconsistent Artificial Neural Network (PANNet).
7.2.1.1 Contradiction Tolerance Factor CtrTF
We saw that in a PAL2v Paraconsistent Analysis, if between the two input signals there exists a high Degree of Inconsistency, represented by a high value of the Degree of Contradiction Dct, no conclusion in respect to the Proposition analyzed can be extracted. In the Standard Paraconsistent Artificial Neural Cell (sPANC), whenever the Degree of Contradiction exceeds the previously established contradiction limit values, it is concluded that the resulting output logical state is Inconsistent or Indeterminate; therefore the output value will be the Degree of Evidence of indefinition, which is equal to 0.5. According to what was seen about the Paraconsistent Analysis Nodes (PANs), the Interval of Evidence φE expresses the Degree of Inconsistency that exists at the inputs. In the algorithms of the Cells, the Interval of Evidence φE, or even the value of the Normalized Degree of Contradiction μctr itself, may be utilized for comparison. In this study the Normalized Degree of Contradiction will be compared to an external value, which we call the Contradiction Tolerance Factor CtrTF. Therefore, in the sPANC, the CtrTF is a value adjusted externally, and its purpose is to determine the limits of bearable contradiction between the two signals applied at the inputs. Following the fundamentals of a Paraconsistent Analysis, the Contradiction Tolerance Factor must be adjusted within the closed interval of the real numbers [0, 1], therefore:
0 ≤ CtrTF ≤ 1
From the value of CtrTF, established externally, two limit values of contradiction are obtained to be utilized in the Algorithm: the Contradiction Control Superior Value (CtrCSV)
and the Contradiction Control Inferior Value (CtrCIV). The limit values for comparison are calculated by:
CtrCSV = (1 + CtrTF)/2 and CtrCIV = (1 - CtrTF)/2    (7.1)
Where:
CtrCSV = Contradiction Control Superior Value
CtrCIV = Contradiction Control Inferior Value
These two values will enter the representative Algorithm of the PANC for comparison, to obtain the output conclusion. Some considerations concerning this factor may be made, as follows. The Contradiction Tolerance Factor CtrTF is applied in the Paraconsistent Analysis Cell as a comparison value which may be adjusted externally; therefore, in a Paraconsistent Artificial Neural Network it will be utilized as a control signal whose purpose is to specify the degree of tolerance between the contradictory signals in particular analyses of certain regions of the network. Through the equation, it is easily verified that, when the value of CtrTF is adjusted to 0, the limit values CtrCSV and CtrCIV will both be 0.5. This means no tolerance to contradiction; therefore, if the analysis presents any value of the Degree of Contradiction, the conclusion is that the output will be 0.5, leading the analysis to a total Indefinition. With the value of CtrTF adjusted to maximum, that is, CtrTF = 1.0, CtrCIV will have the minimum value of 0.0 and CtrCSV will have the maximum value of 1.0. This means that contradiction is irrelevant and the output will be evaluated only by the value of the Resultant Degree of Evidence μE. The decision about the free calculus of the output signal, represented by the value of the Resultant Degree of Evidence μE, is related to the analysis of the value of contradiction, taken from the conditions:
If: CtrCSV > μctr > CtrCIV, then the output is the calculated Resultant Degree of Evidence μE.
Else: μE = 0.5 (Indefinition)
Where, using the same notation adopted for the PANs, in the Standard Paraconsistent Artificial Neural Cell (sPANC) the Normalized Degree of Contradiction is calculated through equation (4.5):
μctr = (μ + λ)/2
Through equation (4.2) we then calculate the value of the Resultant Degree of Evidence:
μE = ((μ - λ) + 1)/2
When more precision is desired in the analysis, together with the value of the Interval of Evidence, the calculus of the Real Degree of Evidence is done with the value of the Real Interval of Evidence through the equations:
μER = Resultant Real Degree of Evidence, calculated through equation (4.3) reproduced below:
μER = (DCR + 1)/2
Where: DCR = Real Degree of Certainty, obtained through equations (3.20) and (3.21).
φE(±) = Signaled Interval of Evidence, obtained through equation (4.7) reproduced below:
φE = 1 - |2μctr - 1|
--------------------------
Example 7.1 In a Paraconsistent Artificial Neural Cell the Contradiction Tolerance Factor CtrTF adjusted externally is 0.7.
a) Determine the limit values of comparison with the Normalized Degree of Contradiction.
b) Bearing in mind that the Favorable Degree of Evidence μ = 0.83 and the Unfavorable Degree of Evidence λ = 0.38 are applied at the inputs, determine the value of the Real and Calculated Degree of Evidence with the output Interval of Evidence of the Cell.
c) In case the Favorable Degree of Evidence at the Cell input changes to μ = 0.9 and the Unfavorable Degree of Evidence to λ = 0.88, determine the value of the Real and Calculated Degree of Evidence with the output Interval of Evidence of the Cell.
Resolution:
a) Favorable Degree of Evidence μ = 0.83
Unfavorable Degree of Evidence λ = 0.38
In PAL2v the annotation is: (μ, λ) = (0.83, 0.38)
The Paraconsistent Logical Signal is: P(0.83, 0.38)
Through equation (7.1) we determine the superior and inferior limit values of Contradiction:
CtrCSV = (1 + 0.7)/2 = 0.85    Maximum Limit of Contradiction CtrCSV = 0.85
CtrCIV = (1 - 0.7)/2 = 0.15    Minimum Limit of Contradiction CtrCIV = 0.15
b) Through equation (4.5) we determine the Normalized Degree of Contradiction:
μctr = (0.83 + 0.38)/2
μctr = 0.605
As the condition CtrCSV > μctr > CtrCIV (0.85 > 0.605 > 0.15) is satisfied, the output will be the Calculated Degree of Evidence through (4.2):
μE = ((0.83 - 0.38) + 1)/2
μE = 0.725
The Real Degree of Evidence will be calculated from DCR:
Calculus of DC:
Favorable Degree of Evidence μ = 0.83
Unfavorable Degree of Evidence λ = 0.38
From equation (2.2) we calculate the Degree of Certainty: DC = 0.83 - 0.38 = 0.45
From equation (2.3) we calculate the Degree of Contradiction:
Dct = (0.83 + 0.38) - 1
Dct = 0.21
As DC > 0 we compute the Real Degree of Certainty through equation (3.20):
DCR = 1 - √((1 - |0.45|)² + 0.21²)
DCR = 1 - 0.5887274411
DCR = 0.4112725
From the value of the Real Degree of Certainty we compute the value of the Real Degree of Evidence through equation (4.3):
μER = (0.4112725 + 1)/2
μER = 0.70563627
The value of the Normalized Degree of Contradiction is calculated through equation (4.5):
μctr = (0.83 + 0.38)/2
μctr = 0.605
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2 × 0.605 - 1|
φE = 0.79
c) Favorable Degree of Evidence μ = 0.9
Unfavorable Degree of Evidence λ = 0.88
In PAL2v the annotation is: (μ, λ) = (0.9, 0.88)
The Paraconsistent Logical Signal is: P(0.9, 0.88)
Through equation (4.5) we determine the Normalized Degree of Contradiction:
μctr = (0.9 + 0.88)/2
μctr = 0.89
As the condition CtrCSV > μctr > CtrCIV is not satisfied (0.85 < 0.89), the output will be the Degree of Indefinition:
μE = 0.5
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2 × 0.89 - 1|
φE = 0.22
--------------------------
It is seen from the conditions presented that if the Normalized Degree of Contradiction μctr is outside the limit values specified by the Contradiction Tolerance Factor CtrTF, the output logical state is Undefined. With this, the output will have indefinition as result, due to contradiction, represented by the value of the Resultant Degree of Evidence μER equal to 0.5. Using equation (7.1), which determines the limits related to the Contradiction Tolerance Factor, we may affirm that:
a) The Contradiction Tolerance Factor CtrTF can be utilized as a controlling signal that will enable and disable the functioning of the Cell.
b) When the Contradiction Tolerance Factor CtrTF is adjusted to a high value (close to 1), the Paraconsistent Artificial Neural Cell will have high tolerance to the values of the Degree of Contradiction. Thus, it will allow representative output values of
low Resultant Degree of Evidence, that is, close to Indefinition 0.5. Hence, when the Contradiction Tolerance Factor CtrTF is adjusted to the value 1.0 we will have:
Contradiction Control Superior Value CtrCSV = 1.0
Contradiction Control Inferior Value CtrCIV = 0.0
This means that the Contradiction Tolerance Factor CtrTF has no effect over the output signal of the cell μER.
c) When the Contradiction Tolerance Factor CtrTF is adjusted to a low value (close to zero), the cell has low tolerance to the Degrees of Contradiction μctr and therefore will only admit high output values of Degrees of Evidence. The low values will be rejected, and the cell output will be taken to Indefinition. Hence, when the Contradiction Tolerance Factor CtrTF is adjusted to the value 0.0 we will have:
Contradiction Control Superior Value CtrCSV = 0.5
Contradiction Control Inferior Value CtrCIV = 0.5
This means that the cell output will always be a value of the Indefinition state; hence the PANC will be inactive, supplying an undefined value of 0.5.
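To make the quantities used above concrete, the following is a minimal Python sketch of the PAL2v values of equations (4.2), (4.3), (4.5), (4.7) and (7.1); the function names are our own, and the block is an illustration rather than the authors' implementation:

    import math

    def mu_ctr(mu, lam):
        # Normalized Degree of Contradiction, equation (4.5)
        return (mu + lam) / 2.0

    def mu_e(mu, lam):
        # Resultant Degree of Evidence, equation (4.2)
        return ((mu - lam) + 1.0) / 2.0

    def d_cr(mu, lam):
        # Real Degree of Certainty, equations (3.20) and (3.21)
        dc = mu - lam
        dct = (mu + lam) - 1.0
        d = math.hypot(1.0 - abs(dc), dct)   # distance used in the correction
        return 1.0 - d if dc >= 0 else d - 1.0

    def mu_er(mu, lam):
        # Resultant Real Degree of Evidence, equation (4.3)
        return (d_cr(mu, lam) + 1.0) / 2.0

    def phi_e(mu, lam):
        # Resultant Interval of Evidence, equation (4.7)
        return 1.0 - abs(2.0 * mu_ctr(mu, lam) - 1.0)

    def contradiction_gate(mu, lam, ctr_tf):
        # Contradiction tolerance check with limits from equation (7.1)
        ctr_csv = (1.0 + ctr_tf) / 2.0
        ctr_civ = (1.0 - ctr_tf) / 2.0
        if ctr_civ < mu_ctr(mu, lam) < ctr_csv:
            return mu_e(mu, lam)
        return 0.5                           # Indefinition

With μ = 0.83, λ = 0.38 and CtrTF = 0.7 (the values of Example 7.1), contradiction_gate returns μE = 0.725, mu_er gives approximately 0.7056 and phi_e gives 0.79, matching the example.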
7.2.1.2 Certainty Tolerance Factor CerTF
In Paraconsistent Artificial Neural Cells, in some cases, one may wish to establish the minimum limit value of the Resultant Degree of Evidence μE to be considered relevant in later analyses. To define this limit, a value called the Certainty Tolerance Factor CerTF is introduced externally in the Cell. In the Paraconsistent Analysis between the two information signals, when the result is a low value of the Resultant Degree of Evidence μE, that is, μE very close to 0.5, it means that the information in respect to the analyzed Proposition is weak, which may mean that it is not necessary to take its value into consideration in later analyses. In the PANs, we saw that this could be specified internally in the Algorithm; however, the Certainty Tolerance Factor CerTF is used in the PANC to promote a better control process in the network, and it will be adjusted externally. In the Paraconsistent Artificial Neural Network, the value of the Certainty Tolerance Factor CerTF, when compared with the sPANC analysis result, will define whether the information is sufficient or insufficient to be considered in later analyses. In the Algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC), the Certainty Tolerance Factor CerTF, an external value previously adjusted, is introduced to be compared with the Resultant Degree of Evidence μE obtained through the equations. To use the Certainty Tolerance Factor CerTF, the limits are initially calculated through:
CerCSV = (1 + CerTF)/2 and CerCIV = (1 - CerTF)/2    (7.2)
Where:
CerCSV = Certainty Control Superior Value
CerCIV = Certainty Control Inferior Value
The Certainty Tolerance Factor CerTF is applied in the Paraconsistent Analysis Cell as a comparison value, which may be adjusted externally; this must be done according to the rules established for PAL2v, therefore:
0 ≤ CerTF ≤ 1
The decision concerning the output signal is related to the certainty of the analysis carried out by the Cell. As an initial calculation, the output value of the Degree of Evidence is obtained through equation (4.2):
μE = ((μ - λ) + 1)/2
The Resultant Degree of Evidence μE will be determined from the following conditions:
If: μE ≤ CerCIV or μE ≥ CerCSV
Then: the output will be the calculated value of the Degree of Evidence obtained through equation (4.2), or, when more precision is needed in the analysis, through equation (4.3).
If: CerCIV < μE < CerCSV
Then: μE = 0.5 (Indefinition)
--------------------------
Example 7.2 In a Paraconsistent Artificial Neural Cell the Certainty Tolerance Factor CerTF adjusted externally is 0.4.
a) Determine the limit values of Certainty comparison.
b) Consider the input of Favorable Degree of Evidence μ = 0.9 and Unfavorable Degree of Evidence λ = 0.28. Determine the value of the Real and Calculated Degree of Evidence with the output Interval of Evidence of the Cell.
c) Suppose the input values have changed, and are now: Favorable Degree of Evidence μ = 0.5 and Unfavorable Degree of Evidence λ = 0.52. Determine the value of the Real and Calculated Degree of Evidence with the output Interval of Evidence of the Cell.
Resolution:
a) Favorable Degree of Evidence μ = 0.9
Unfavorable Degree of Evidence λ = 0.28
In PAL2v the annotation is: (μ, λ) = (0.9, 0.28)
The Paraconsistent Logical Signal is: P(0.9, 0.28)
Through equation (7.2) we determine the superior and inferior limit values of Certainty:
CerCSV = (1 + 0.4)/2 = 0.7    Maximum Limit Value of Certainty CerCSV = 0.7
CerCIV = (1 - 0.4)/2 = 0.3    Minimum Limit Value of Certainty CerCIV = 0.3
b) Through equation (4.2) we determine the Calculated Degree of Evidence:
μE = ((0.9 - 0.28) + 1)/2
μE = 0.81
As the condition μE ≥ CerCSV (0.81 ≥ 0.7) is satisfied, the Real Degree of Evidence will be calculated through equation (4.3):
Calculus of DC:
Favorable Degree of Evidence μ = 0.9
Unfavorable Degree of Evidence λ = 0.28
(μ, λ) = (0.9, 0.28)
From equation (2.2) we calculate the Degree of Certainty: DC = 0.9 - 0.28 = 0.62
From equation (2.3) we calculate the Degree of Contradiction: Dct = (0.9 + 0.28) - 1 = 0.18
As DC > 0 we compute the Real Degree of Certainty through equation (3.20):
DCR = 1 - √((1 - |0.62|)² + 0.18²)
DCR = 1 - 0.4204759
DCR = 0.579524
From the value of the Real Degree of Certainty we compute the value of the Real Degree of Evidence through equation (4.3):
μER = (0.579524 + 1)/2
μER = 0.789762
The Normalized Degree of Contradiction is calculated through equation (4.5):
μctr = (0.9 + 0.28)/2
μctr = 0.59
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2 × 0.59 - 1|
φE = 0.82
c) Favorable Degree of Evidence μ = 0.5
Unfavorable Degree of Evidence λ = 0.52
In PAL2v the annotation is: (μ, λ) = (0.5, 0.52)
The Paraconsistent Logical Signal is: P(0.5, 0.52)
Through equation (4.2) we determine the Resultant Degree of Evidence μE:
μE = ((0.5 - 0.52) + 1)/2
μE = 0.49
As the condition μE ≤ CerCIV or μE ≥ CerCSV is not satisfied, the output is taken to Indefinition: μE = 0.5.
The Normalized Degree of Contradiction is calculated through equation (4.5):
μctr = (0.5 + 0.52)/2
μctr = 0.51
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2 × 0.51 - 1|
φE = 0.98
--------------------------
From the presented conditions, it is seen that if the Resultant Degree of Evidence μE falls within the limit values specified by the Certainty Tolerance Factor CerTF, that is, close to Indefinition 0.5, the output logical state is Undefined. With this, the output will have Indefinition as result, represented by the value of the Resultant Degree of Evidence μE equal to 0.5. Verifying the equations that determine the limits related to the Certainty Tolerance Factor, we can affirm that:
a) The Certainty Tolerance Factor CerTF may be utilized as a controlling signal, which will enable and disable the functioning of the Cell.
b) When the Certainty Tolerance Factor CerTF is adjusted to a high value, that is, close to 1.0, the Paraconsistent Artificial Neural Cell will present low tolerance to the values of the Resultant Degree of Evidence and therefore does not admit low output values of Resultant Degrees of Evidence, that is, close to 0.5. The low values, that is, those close to 0.5, will be rejected, and the output will be taken to Indefinition. Thus, when the Certainty Tolerance Factor CerTF is adjusted to the value 1.0 we will have:
CerCSV = Certainty Control Superior Value = 1.0
CerCIV = Certainty Control Inferior Value = 0.0
This means that the Certainty Tolerance Factor CerTF has total effect over the output signal μE. Therefore, the Cell will only admit extreme output values of Degrees of Evidence, that is, μE = 0.0 or μE = 1.0.
c) When the Certainty Tolerance Factor CerTF is adjusted to a low value, that is, close to zero, the Cell presents high tolerance to the Resultant Degrees of Evidence μE calculated through the equations. Therefore, it may admit low output values of Resultant Degrees of Evidence, that is, close to 0.5. Thus, when the Certainty Tolerance Factor CerTF is adjusted to the value CerTF = 0.0 we will have:
CerCSV = Certainty Control Superior Value = 0.5
CerCIV = Certainty Control Inferior Value = 0.5
This means that any value of the output Degree of Evidence will exceed the limits; the output will always be the value of the Resultant Degree of Evidence μE calculated through the equations.
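The certainty check has the same shape as the contradiction check. A minimal sketch follows, reusing the mu_e helper sketched at the end of section 7.2.1.1 (names are illustrative):

    def certainty_gate(mu, lam, cer_tf):
        # Certainty tolerance check with limits from equation (7.2)
        cer_csv = (1.0 + cer_tf) / 2.0
        cer_civ = (1.0 - cer_tf) / 2.0
        me = mu_e(mu, lam)
        if me >= cer_csv or me <= cer_civ:
            return me          # evidence strong enough to pass on
        return 0.5             # Indefinition

With CerTF = 0.4, certainty_gate(0.9, 0.28, 0.4) returns 0.81 (since 0.81 ≥ 0.7), while certainty_gate(0.5, 0.52, 0.4) returns 0.5, reproducing the two branches of Example 7.2.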
7.2.1.3 Decision Tolerance Factor DecTF
In the Standard Paraconsistent Artificial Neural Cell (sPANC), a limit value applied externally, called the Decision Tolerance Factor (DecTF), is used to define an output logical state. The definition of the output logical state is done by comparing the values derived from DecTF with the value of the Resultant Degree of Evidence μE obtained through the PAL2v equations.
In the analysis carried out by the sPANC, if the Resultant Degree of Evidence μE is a value close to Indefinition, that is, close to 0.5, this means that the input values are insufficient, or else that they have a high level of contradiction; in either of the two hypotheses we conclude that the Cell has no condition to express a final conclusion about the analysis. In the sPANC the Decision Tolerance Factor DecTF generates two limit values, called the Falsehood Limit Value FLV and the Truth Limit Value TLV. These values will be compared with the results of the analysis through the value of the Resultant Degree of Evidence μE. As the state of Indefinition has a value of 0.5, the values that limit Truth and Falsehood are symmetric with reference to Indefinition. Therefore, within the characteristics of the Paraconsistent Analysis that utilizes PAL2v, the Decision Tolerance Factor DecTF is adjusted in the real closed interval between 0.0 and 1.0. Having these criteria established, in the sPANC the limit values of Falsehood and of Truth are obtained through the following equations:
TLV = (1 + DecTF)/2 and FLV = (1 - DecTF)/2    (7.3)
Where: TLV = Truth Limit Value. FLV = Falsehood Limit Value
and: 0 ≤ DecTF ≤ 1
The results of the analysis will be established under three conditions:
1- If the value of the Resultant Degree of Evidence μE, found through the equations, is equal to or lower than the Falsehood Limit Value FLV, the final decision is the logical state "False".
2- If the value of the Resultant Degree of Evidence μE is equal to or higher than the Truth Limit Value TLV, the final decision is the logical state "True".
3- If the value of the Resultant Degree of Evidence μE remains within the limit values, then the decision is "Indefinition".
We may also write:
If: μE ≥ TLV, then the output logical state is "True" and μER = 1
If: μE ≤ FLV, then the output logical state is "False" and μER = 0
Else: the logical state is "Undefined" and the Resultant Degree of Evidence is μER = 0.5
--------------------------
Example 7.3 In a Paraconsistent Artificial Neural Cell, the Decision Tolerance Factor DecTF adjusted externally is 0.4.
a) Determine the limit values of Decision comparison.
b) Consider the Favorable Degree of Evidence μ = 0.87 and the Unfavorable Degree of Evidence λ = 0.23 applied at the input. Determine the value of the Real Degree of Evidence with the output Interval of Evidence of the Cell.
c) Suppose there were changes in the input values, and the value of the Favorable Degree of Evidence became μ = 0.25 and the Unfavorable Degree of Evidence became λ = 0.83. Determine the value of the Real and Calculated Degree of Evidence with the output Interval of Evidence of the Cell.
Resolution:
a) Favorable Degree of Evidence μ = 0.87
Unfavorable Degree of Evidence λ = 0.23
In PAL2v the annotation is: (μ, λ) = (0.87, 0.23)
The Paraconsistent Logical Signal is: P(0.87, 0.23)
Decision Tolerance Factor DecTF = 0.4
Through equation (7.3) we determine the superior and inferior limit values of Decision:
TLV = (1 + 0.4)/2 = 0.7    Limit Value of Decision TLV = 0.7
FLV = (1 - 0.4)/2 = 0.3    Limit Value of Decision FLV = 0.3
b) The Real Degree of Evidence will be calculated through equation (4.3):
Calculus of DC:
Favorable Degree of Evidence μ = 0.87
Unfavorable Degree of Evidence λ = 0.23
(μ, λ) = (0.87, 0.23)
From equation (2.2) we calculate the Degree of Certainty: DC = 0.87 - 0.23 = 0.64
From equation (2.3) we calculate the Degree of Contradiction: Dct = (0.87 + 0.23) - 1 = 0.1
As DC > 0 we calculate the Real Degree of Certainty through equation (3.20):
DCR = 1 - √((1 - |0.64|)² + 0.1²)
DCR = 1 - 0.37363
DCR = 0.626369
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
μER = (0.626369 + 1)/2
μER = 0.81318458
As in the conditional μER ≥ TLV (0.81318458 ≥ 0.7), the logical state is True and μER = 1.
The value of the Normalized Degree of Contradiction is calculated through equation (4.5):
μctr = (0.87 + 0.23)/2
μctr = 0.55
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2 × 0.55 - 1|
φE = 0.9
c) With the new values:
Favorable Degree of Evidence μ = 0.25
Unfavorable Degree of Evidence λ = 0.83
In PAL2v the annotation is: (μ, λ) = (0.25, 0.83)
The Paraconsistent Logical Signal is: P(0.25, 0.83)
Decision Tolerance Factor DecTF = 0.4
From equation (2.2) we calculate the Degree of Certainty: DC = 0.25 - 0.83 = -0.58
From equation (2.3) we calculate the Degree of Contradiction: Dct = (0.25 + 0.83) - 1 = 0.08
As DC < 0 we calculate the Real Degree of Certainty through equation (3.21):
DCR = √((1 - |-0.58|)² + 0.08²) - 1
DCR = 0.4275512 - 1
DCR = -0.5724488
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
μER = (-0.5724488 + 1)/2
μER = 0.2137756
As in the conditional μER ≤ FLV (0.2137756 ≤ 0.3), the logical state is False and μER = 0.0.
The value of the Normalized Degree of Contradiction is calculated through equation (4.5):
μctr = (0.25 + 0.83)/2
μctr = 0.54
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2 × 0.54 - 1|
φE = 0.92
--------------------------
According to the equations, when the Decision Tolerance Factor is adjusted externally to null, DecTF = 0.0, both limits are 0.5. This means that for any value of the Resultant Degree of Evidence μE above 0.5 the cell decides for the conclusion of logical state "True", μER = 1.0, and for any value of the Resultant Degree of Evidence μE below 0.5 the decision is the logical state "False", μER = 0.0. When the adjustment of the Tolerance Factor is maximum, that is, DecTF = 1.0, the Falsehood Limit Value FLV is equal to zero and the Truth Limit Value TLV is equal to 1.0. This means that a final conclusion in respect to the Proposition analyzed will only be obtained if the Resultant Degree of Evidence μE, calculated through the equations, reaches its extreme values: affirming the Proposition as True, equal to 1.0, or refuting the Proposition at its minimum value, equal to zero. If the result of
μE does not reach these extremes, the Cell is unable to draw a conclusion; therefore it will assume an output of μER equal to 0.5, signaling an Indefinition in the analysis. We verify from the characteristics of the Decision Tolerance Factor that it will only be utilized in Cells that are expected to offer a final conclusion to the analysis. In this way, when this factor is active, the action of the other two factors, seen previously, is excluded.
7.2.1.4 Learning Factor lF
The sPANC may be configured as a Learning Paraconsistent Artificial Neural Cell (lPANC). In the learning process of a lPANC, which is done through a special algorithm, the Learning Factor lF is introduced. The Learning Factor, with its value adjusted externally, is used in a special pattern learning equation. Its variation increases and decreases the learning speed of the Cell. The Learning Algorithm of the Cell can transform the Learning Factor (lF) into an Unlearning Factor (ulF) and make the Cell learn and unlearn patterns. The Learning Factor (lF) is a real value in the closed interval [0, 1], attributed arbitrarily through external adjustments. Depending on its value, a faster or slower learning is provided to the lPANC. In the Learning Cell, besides the Learning Factor, no action from the other factors is necessary; therefore, in the configuration of this Cell, only lF will be active. The training of a Learning Cell under the action of the Learning Factor lF will be seen in a later chapter.
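Returning to the Decision Tolerance Factor of section 7.2.1.3, the decision comparison can also be sketched briefly. The function below takes an already computed Degree of Evidence (the Real Degree μER, as used in Example 7.3); the name and return convention are our own choices:

    def decision_gate(mu_er_value, dec_tf):
        # Decision comparison with limits from equation (7.3)
        tlv = (1.0 + dec_tf) / 2.0   # Truth Limit Value
        flv = (1.0 - dec_tf) / 2.0   # Falsehood Limit Value
        if mu_er_value >= tlv:
            return 'True', 1.0
        if mu_er_value <= flv:
            return 'False', 0.0
        return 'Undefined', 0.5

With DecTF = 0.4, decision_gate(mu_er(0.87, 0.23), 0.4) yields ('True', 1.0) and decision_gate(mu_er(0.25, 0.83), 0.4) yields ('False', 0.0), matching Example 7.3.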
7.2.1.5 Negation Operator (NOT)
Any logical system needs a negation function "NOT" to satisfy the equations involved in the process controls. The Paraconsistent Artificial Neural Cells (PANCs), by utilizing the fundamental implications of PAL2v, are able to present this function. A logical negation, in the result of the analysis carried out by the sPANC, is obtained by exchanging the value of the Favorable Degree of Evidence μ with the Unfavorable Degree of Evidence λ. Function "NOT" is described in the following manner: consider the values of μ and λ as Favorable and Unfavorable Degrees of Evidence applied to the analysis in a Standard Paraconsistent Artificial Neural Cell (sPANC). The Resultant Degree of Evidence is obtained through equation (4.2) reproduced below:
μE = ((μ - λ) + 1)/2
And the Negated Degree of Evidence is obtained through:
μEn = ((λ - μ) + 1)/2    (7.4)
--------------------------
Example 7.4 Consider a PANC input with the Favorable Degree of Evidence represented by μ and the Unfavorable Degree of Evidence represented by λ, whose values are presented as follows: Favorable Degree μ = 1 and Unfavorable Degree λ = 0.
a) Determine the Resultant Degree of Evidence at the output.
b) Determine the Resultant Degree of Evidence of the logical negation of the value obtained in item a.
Resolution:
a) With these values we have the notation according to the fundamentals of PAL2v:
Favorable Degree μ = 1
Unfavorable Degree λ = 0
In PAL2v the annotation is: (μ, λ) = (1.0, 0.0)
The Paraconsistent Logical Signal is: P(1.0, 0.0)
Through equation (4.2) we obtain the value of the Resultant Degree of Evidence μE:
μE = ((1 - 0) + 1)/2
μE = 1.0, which is equal to the logical state "True".
b) By exchanging, in the notation, the value of the Favorable Degree of Evidence with the value of the Unfavorable Degree of Evidence λ, we have, through equation (7.4):
μEn = ((0 - 1) + 1)/2
μEn = 0, which is equal to the logical state "False"; this is the negation.
In PAL2v the annotation is: (λ, μ) = (0.0, 1.0)
The Paraconsistent Logical Signal is: P(0.0, 1.0)
--------------------------
Example 7.5 Suppose the Favorable Degree of Evidence μ = 0.87 and the Unfavorable Degree of Evidence λ = 0.22 are inputs of a PANC.
a) Determine the Resultant Real Degree of Evidence at the output.
b) Determine the Resultant Real Degree of Evidence of the logical negation of the value obtained in item a.
Resolution:
a) With these values:
Favorable Degree μ = 0.87
Unfavorable Degree λ = 0.22
In PAL2v the annotation is: (μ, λ) = (0.87, 0.22)
The Paraconsistent Logical Signal is: P(0.87, 0.22)
The Real Degree of Evidence will be calculated through equation (4.3):
Calculus of DC:
From equation (2.2) we calculate the Degree of Certainty: DC = 0.87 - 0.22 = 0.65
From equation (2.3) we calculate the Degree of Contradiction: Dct = (0.87 + 0.22) - 1 = 0.09
As DC > 0 we calculate the Real Degree of Certainty through equation (3.20):
DCR = 1 - √((1 - |0.65|)² + 0.09²)
DCR = 1 - 0.3613862
DCR = 0.6386137
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
μER = (0.6386137 + 1)/2
μER = 0.8193068
b) By exchanging the value of the Favorable Degree of Evidence with the value of the Unfavorable Degree of Evidence:
Favorable Degree μ = 0.22
Unfavorable Degree λ = 0.87
In PAL2v the annotation is: (μ, λ) = (0.22, 0.87)
The Paraconsistent Logical Signal is: P(0.22, 0.87)
The Real Degree of Evidence will be calculated through equation (4.3):
From equation (2.2) we calculate the Degree of Certainty: DC = 0.22 - 0.87 = -0.65
From equation (2.3) we calculate the Degree of Contradiction: Dct = (0.22 + 0.87) - 1 = 0.09
As DC < 0 we calculate the Real Degree of Certainty through equation (3.21):
DCR = √((1 - |-0.65|)² + 0.09²) - 1
DCR = 0.3613862 - 1
DCR = -0.6386138
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
μERN = (-0.6386138 + 1)/2
μERN = 0.1806931
--------------------------
We verify that for values of contradictory Degrees of Evidence where μ = 0 and λ = 0, or μ = 1 and λ = 1, the result of the logical negation obtained through equation (7.4) is an Indefinition of μER = 0.5. These results are in accordance with the values expected from the PAL2v theory.
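A minimal sketch of the NOT operation of equation (7.4), with assert lines checking the boundary behavior just discussed (the function name is our own):

    def pal_not(mu, lam):
        # Negated Degree of Evidence, equation (7.4)
        return ((lam - mu) + 1.0) / 2.0

    # Fully contradictory annotations negate to Indefinition:
    assert pal_not(0.0, 0.0) == 0.5
    assert pal_not(1.0, 1.0) == 0.5
    assert pal_not(1.0, 0.0) == 0.0   # NOT of "True" is "False"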
7.2.1.6 Complementation Operator (Complement Operator)
The Paraconsistent Artificial Neural Cells function by treating signals with values in the real closed interval between 0 and 1.
Depending on the analysis carried out, any signal may be complemented to the unit. In this way the Complement Operator, when applied, transforms the signal into its unit complement through the equation:
μ1C = 1 - μ1
Where: μ1 = input signal value and μ1C = complemented value of the input signal.
Therefore, the Complement Operator is used to transform Favorable Degrees of Evidence into Unfavorable Degrees of Evidence. Figure 7.5 presents the Complement Operator of the Annotated Paraconsistent Logic.
[Figure 7.5 Paraconsistent Annotated Logic (PAL) Complement Operator: the operator C subtracts the input signal μ1 from the unit, delivering μ1C = 1 - μ1.]
In a Paraconsistent Artificial Neural Network (PANNet), each of the component cells will receive input information signals in the form of representative values of Resultant Degrees of Evidence. These signals come from the analyses carried out by other cells, which are interconnected to other points of the net. The analysis between Degrees of Evidence means that the equations will only operate with real positive values between 0 and 1, in accordance with the fundamentals of PAL2v. To enable the analyses in the PANNet considering only Degrees of Evidence, the Complement Operator is installed at one of the cell inputs. In the Standard Paraconsistent Artificial Neural Cell (sPANC), the Complement Operator is utilized to perform the transformation of a Favorable Degree of Evidence μ into an Unfavorable Degree of Evidence λ. This cell input, where the action of the Complement Operator occurs, will be called the "Unfavorable Degree of Evidence Input". Therefore, to make the Cells work internally only with Degrees of Evidence, the signal applied at the input of the Unfavorable Degree of Evidence will always be complemented:
μr → μrC = 1 - μr = λ
In a Paraconsistent Analysis where two sources a and b both send signals considered as favorable evidence, the Complement of one of them is taken and applied as the Unfavorable Evidence input in the equation, to obtain the Resultant Degree of Evidence. In this case, considering the fundamentals of PAL2v, the annotation for the calculus of the Resultant Degree of Evidence is as follows:
Source a → μa    Source b → μb    (μa, λb), where: λb = 1 - μb    (7.5)
--------------------------
Example 7.6 Two information sources are sending evidences relative to a determined Proposition P, in the form of Favorable Degrees of Evidence:
Source 1 μ1 = 0.75
Source 2 μ2 = 0.65
Consider Source 1 as the Unfavorable Degree of Evidence input:
a) Represent the annotation according to PAL2v.
b) Determine the output Resultant Real Degree of Evidence.
Resolution:
a) Applying the Complement Operator to Source 1: λ1 = 1 - μ1 = 1 - 0.75 = 0.25
With these values we have the PAL2v annotation: (μ, λ) = (0.65, 0.25)
b) The Real Degree of Evidence will be calculated through equation (4.3):
Calculus of DC:
From equation (2.2) we calculate the Degree of Certainty: DC = 0.65 - 0.25 = 0.4
From equation (2.3) we calculate the Degree of Contradiction: Dct = (0.65 + 0.25) - 1 = -0.1
As DC > 0 we calculate the Real Degree of Certainty through equation (3.20):
DCR = 1 - √((1 - |0.4|)² + 0.1²)
DCR = 1 - 0.6082763
DCR = 0.3917237
From the value of the Real Degree of Certainty we calculate the value of the Real Degree of Evidence through equation (4.3):
μER = (0.3917237 + 1)/2
μER = 0.6958619
--------------------------
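The same numbers can be checked with the helpers sketched in section 7.2.1.1; a short illustrative fragment (variable names are our own):

    mu1, mu2 = 0.75, 0.65
    lam = 1.0 - mu1            # Complement Operator on Source 1: lambda = 0.25
    result = mu_er(mu2, lam)   # PAL2v annotation (0.65, 0.25)
    print(result)              # approximately 0.6958619, as in Example 7.6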
7.3. Composition of the Standard Paraconsistent Artificial Neural Cell (sPANC) The Paraconsistent Artificial Neural Cells will follow the same criteria seen in the studies about the Paraconsistent Analysis Nodes (PANs). The main remarks that should be made concerning the composition of the PANC are: 1- When a cell is inactive due to adjustment factors, it presents an output signal of Undefined Resultant Degree of Evidence, equal to 0.5. In a Neural Network that utilizes several interconnected cells, the properly adjusted factors will optimize the functioning of the cells, controlling them so as to present maximum efficiency in the analysis process.
2- Each component Cell of the paraconsistent analysis network will receive input information signals in the form of values representative of the Resultant Degree of Evidence. These signals come from the analyses carried out by other cells which are interconnected. The analyses between Degrees of Evidence mean that the equations will operate only with positive real values between 0 and 1, in accordance with the fundamentals of PAL2v.
3- In the Standard Paraconsistent Artificial Neural Cell (sPANC), the Complement Operator, following the fundamentals of PAL2v, is utilized to perform the transformation of a Favorable Degree of Evidence μ into an Unfavorable Degree of Evidence λ. To enable the analysis in a PANNet considering only Degrees of Evidence, the Complement Operator is then utilized; it is installed at one of the cell inputs. Therefore, to make the Cells work only with Degrees of Evidence, the signal applied at the input of the Unfavorable Degree of Evidence will be complemented.
Having established the criteria and made the considerations based on the basic concepts presented, an Algorithm is constructed; this will comprehend the main concepts studied, allowing the composition of a Standard Paraconsistent Artificial Neural Cell (sPANC). The Algorithm of a Standard Paraconsistent Artificial Neural Cell will be equipped with all the Control Factors and equations for a Paraconsistent Analysis. Thus, the suppression of some lines of the Algorithm allows changes in its characteristics, transforming it into a different kind of Cell utilized by a typical Paraconsistent Artificial Neural Network. Figure 7.6 presents a sPANC with the inputs for analysis, the inputs for the internal control adjustments, and the processing result outputs.
[Figure 7.6 Standard Paraconsistent Artificial Neural Cell (sPANC): the cell receives the Favorable Degree of Evidence μ and the Unfavorable Degree of Evidence λ (the latter through the Complement Operator C), is adjusted by the Contradiction Tolerance Factor CtrTF, the Certainty Tolerance Factor CerTF, the Decision Tolerance Factor DecTF and the Learning Factor lF, and delivers the outputs S2 = φE, the Resultant Interval of Evidence, and μER, the Resultant Real Degree of Evidence.]
The complete Algorithm with all the necessary command lines to be used in Paraconsistent Analysis Networks, comprehending all the input, control, and output signals that appear in the sPANC, is shown as follows.
7.3.1. Algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC)
1- Enter the value of the input Degree of Evidence 1: μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2: μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Enter the value of the Contradiction Tolerance Factor: CtrTF = C1 */ 0 ≤ CtrTF ≤ 1 */
4- Compute the Contradiction Control Superior and Inferior Values:
CtrCSV = (1 + C1)/2 and CtrCIV = (1 - C1)/2
5- Enter the value of the Certainty Tolerance Factor: CerTF = C2 */ 0 ≤ CerTF ≤ 1 */
6- Compute the Certainty Control Superior and Inferior Values:
CerCSV = (1 + C2)/2 and CerCIV = (1 - C2)/2
7- Enter the value of the Decision Tolerance Factor: DecTF = C3 */ 0 ≤ DecTF ≤ 1 */
8- Compute the Superior and Inferior Values of Decision:
TLV = (1 + C3)/2 and FLV = (1 - C3)/2
9- Enter the value of the Learning Factor: lF = C4 */ 0 ≤ lF ≤ 1 */
10- Transform the Degree of Evidence 1 into an Unfavorable Degree of Evidence: λ1 = 1 - μ1 */ 0 ≤ λ1 ≤ 1 */
11- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence: λ2 = 1 - μ2 */ 0 ≤ λ2 ≤ 1 */
12- Compute the Normalized Degree of Contradiction:
μctr = (μ1 + λ2)/2
13- Compute the Resultant Interval of Evidence:
φE = 1 - |2μctr - 1|
14- Compute the Resultant Degree of Evidence:
μE = ((μ1 - λ2) + 1)/2
15- Compute the Complementation of the Resultant Degree of Evidence: μEC = 1 - μE
16- Compute the Distance d:
d = √((1 - |μ1 - λ2|)² + ((μ1 + λ2) - 1)²)
17- Compute the Resultant Real Degree of Evidence:
If μE > 0.5, do: μER = 1 - d/2
If μE < 0.5, do: μER = d/2
If μE = 0.5, do: μER = μE
18- Compute the Complementation of the Resultant Real Degree of Evidence: μERC = 1 - μER
Outputs
19- Present the resulting output signals from the conditionals:
If: CtrCSV > μctr > CtrCIV and (μER ≥ CerCSV or μER ≤ CerCIV)
Then: S1 = μE and S2 = φE
Else: S1 = 0.5 */ Indefinition */ and S2 = φE
20- Do CtrTF = 1 and present at the output:
If: φE ≤ 0.25 or d ≥ 1
Then: S1 = 0.5 */ Indefinition */ and S2 = φE
Else: S1 = μER and S2 = φE
21- Do CtrTF = 1 and present at the output:
If: μER ≥ CerCSV or μER ≤ CerCIV
Then: S1 = μ1 and S2 = 0.5
Else: S1 = 0.5 */ Indefinition */ and S2 = 0.5
22- Do CtrTF = 1 and CerTF = 0 and present at the output the value of the Learned Degree of Evidence:
S1 = μER(k+1) and S2 = 0.5
23- Do CtrTF = 1 and CerTF = 0 and present the resulting output signals from the conditionals:
For Maximization:
If μE > 0.5, do: S1 = μ1 */ Input Evidence of Greater Value */ and S2 = 0.5
If μE < 0.5, do: S1 = λ2 */ Input Evidence of Greater Value */ and S2 = 0.5
For Minimization:
If μE > 0.5, do: S1 = λ2 */ Input Evidence of Lower Value */ and S2 = 0.5
If μE < 0.5, do: S1 = μ1 */ Input Evidence of Lower Value */ and S2 = 0.5
24- Do CtrTF = 1 and CerTF = 0 and present the resulting output signals from the conditionals:
For Maximization:
If μE > 0.5, do: S1 = μ1 */ Input Evidence of Greater Value */ and S2 = 0.5 */ Output S2 Undefined */
If μE < 0.5, do: S1 = 0.5 */ Output S1 Undefined */ and S2 = λ2 */ Input Evidence of Greater Value */
For Minimization:
If μE > 0.5, do: S1 = λ2 */ Input Evidence of Lower Value */ and S2 = 0.5 */ Output S2 Undefined */
If μE < 0.5, do: S1 = 0.5 */ Output S1 Undefined */ and S2 = μ1 */ Input Evidence of Lower Value */
25- Do CtrTF = 1 and CerTF = 0 and present the resulting output signals from the conditionals:
For Maximization:
If: μE > 0.5 ⇒ μ1 > μ2, it results at the output in S1 = μ1 and S2 = 0.5
Else: S1 = 0.5 and S2 = μ2
For Minimization:
If: μE < 0.5 ⇒ μ1 < μ2, it results at the output in S1 = μ1 and S2 = 0.5
Else: S1 = 0.5 and S2 = μ2
26- Do CerTF = 0 and present the resulting output signals from the conditionals:
If: CtrCSV > μctr > CtrCIV
Then: S1 = 1 and S2 = φE
Else: S1 = 0 and S2 = φE
27- Do CerTF = 0 and present the resulting output signals from the conditionals:
If: μE ≥ TLV, then S1 = 1 */ True */
If: μE ≤ FLV, then S1 = 0 */ False */
Else: S1 = 0.5 */ Indefinition */
28- End

Since the sPANC is represented by an Algorithm, enabling and disabling its lines will configure the different types of Paraconsistent Artificial Neural Cells for several purposes. Each sPANC will be specially configured to form different kinds of Cells, and each one of them will present a determined function, which will help the PANNet to perform analysis and to define behavior similar to the human brain. Thus, a Paraconsistent Artificial Neural Network (PANNet) is constructed with cells designed by means of computational programs with several purposes, all of which use the theoretical concepts of Paraconsistent Annotated Logic (PAL).
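As a hedged illustration of how these lines combine, the following is a condensed Python sketch of the sPANC analysis core (steps 10 to 19, with step 27 as an optional decision stage). The function name and default arguments are our own choices, and at step 19 we pass on μER rather than the μE of the printed algorithm so that the same value can feed the decision stage:

    import math

    def spanc(mu1, mu2, ctr_tf=1.0, cer_tf=0.0, dec_tf=None):
        # Condensed sketch of the sPANC core (illustrative only).
        lam2 = 1.0 - mu2                              # step 11: Complement Operator
        mctr = (mu1 + lam2) / 2.0                     # step 12: Normalized Degree of Contradiction
        phi = 1.0 - abs(2.0 * mctr - 1.0)             # step 13: Resultant Interval of Evidence
        me = ((mu1 - lam2) + 1.0) / 2.0               # step 14: Resultant Degree of Evidence
        d = math.hypot(1.0 - abs(mu1 - lam2), (mu1 + lam2) - 1.0)  # step 16: distance d
        if me > 0.5:                                  # step 17: Resultant Real Degree of Evidence
            mer = 1.0 - d / 2.0
        elif me < 0.5:
            mer = d / 2.0
        else:
            mer = me
        ctr_csv, ctr_civ = (1.0 + ctr_tf) / 2.0, (1.0 - ctr_tf) / 2.0  # equation (7.1)
        cer_csv, cer_civ = (1.0 + cer_tf) / 2.0, (1.0 - cer_tf) / 2.0  # equation (7.2)
        # step 19: contradiction and certainty gates
        if not (ctr_civ < mctr < ctr_csv and (mer >= cer_csv or mer <= cer_civ)):
            return 0.5, phi                           # Indefinition
        if dec_tf is not None:                        # step 27: decision stage
            tlv, flv = (1.0 + dec_tf) / 2.0, (1.0 - dec_tf) / 2.0      # equation (7.3)
            if mer >= tlv:
                return 1.0, phi                       # True
            if mer <= flv:
                return 0.0, phi                       # False
            return 0.5, phi                           # Indefinition
        return mer, phi                               # S1 = muER, S2 = phiE

Suppressing or enabling branches of this sketch mirrors the way lines of the sPANC algorithm are suppressed to configure the different cells presented in the next chapter.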
7.4 Final Remarks
In this chapter we presented the fundamentals and main concepts for the construction of algorithms that represent Paraconsistent Artificial Neural Cells (PANCs). We saw that the Basic Paraconsistent Artificial Cell (bPAC) originated the Standard Paraconsistent Artificial Neural Cell (sPANC), which is an Algorithm with easy computational representation. The sPANC Algorithm, which was presented in full, may easily have lines suppressed, transforming it into several different configurations. These will be utilized in Artificial Intelligence projects where the treatment of data originating from Uncertain Knowledge is necessary.
All the factors and control values were considered, based on the aforementioned theoretical structure and methodology of PAL2v. The concepts presented and utilized in the composition of the Standard Cell Algorithm will be used in the construction of several types of cells, which will compose a Paraconsistent Artificial Neural Network (PANNet). Therefore, the Standard Paraconsistent Artificial Neural Cell (sPANC) presented in this chapter may be considered a mother-cell (célula mater) which will originate a family of Paraconsistent Artificial Neural Cells (PANCs). These compose the Neural Computing projects and Intelligent Systems founded on Paraconsistent Annotated Logic (PAL). Through the Algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC), other algorithms are modeled in new ways to be adapted to various Paraconsistent Decision Systems. With the Standard Paraconsistent Artificial Neural Cell (sPANC), it is possible to find efficient modelings that can be applied in AI; thus, in the following chapter, new types of Paraconsistent Artificial Neural Cells will be configured to compose a family of fully implemented cells able to perform Paraconsistent Analysis in complex systems.
Exercises
7.1 Describe what neural computing is.
7.2 Make a comparison between the familiar/common functions of the human brain and the signal treatment through PAL2v.
7.3 Describe the functioning of a basic Paraconsistent Cell (bPC).
7.4 Give the definition of a Basic Paraconsistent Artificial Cell (bPAC).
7.5 Give the definition of a Paraconsistent Artificial Neural Cell (PANC).
7.6 Develop, in language C or another common programming language, the executable program that represents a Basic Paraconsistent Artificial Cell (bPAC).
7.7 Define the Contradiction Tolerance Factor CtrTF.
7.8 In a Paraconsistent Artificial Neural Cell the Contradiction Tolerance Factor CtrTF adjusted externally is 0.77.
a) Determine the limit values of comparison with the Normalized Degree of Contradiction.
b) By applying the input Favorable Degree of Evidence μ = 0.86 and Unfavorable Degree of Evidence λ = 0.33, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
c) Supposing there were changes in the inputs, and now the Favorable Degree of Evidence is μ = 0.91 and the Unfavorable Degree of Evidence is λ = 0.89, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
7.9 Suppose in a Paraconsistent Artificial Neural Cell the Contradiction Tolerance Factor CtrTF is adjusted externally to 0.68.
a) Determine the limit values of Normalized Degree of Contradiction comparison.
b) Bearing in mind that the input Favorable Degree of Evidence is μ = 0.87 and the Unfavorable Degree of Evidence is λ = 0.31, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
c) Supposing there were changes in the inputs, and now the Favorable Degree of Evidence is μ = 0.89 and the Unfavorable Degree of Evidence is λ = 0.79, determine
the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
7.10 Define the Certainty Tolerance Factor CerTF.
7.11 What value of the Contradiction Tolerance Factor CtrTF makes the Paraconsistent Artificial Neural Cell present an output value of Indefinition?
7.12 What value of the Contradiction Tolerance Factor CtrTF does not obstruct the output Degree of Evidence results due to high contradiction between the evidence values applied at the inputs of the Paraconsistent Artificial Neural Cell?
7.13 In a Paraconsistent Artificial Neural Cell the Certainty Tolerance Factor CerTF adjusted externally is 0.45.
a) Determine the limit values of the Certainty comparison.
b) Bearing in mind that the input Favorable Degree of Evidence is μ = 0.92 and the Unfavorable Degree of Evidence is λ = 0.29, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
c) Supposing there were changes in the inputs, and now the Favorable Degree of Evidence is μ = 0.53 and the Unfavorable Degree of Evidence is λ = 0.51, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
7.14 Suppose that in a Paraconsistent Artificial Neural Cell the Certainty Tolerance Factor CerTF was adjusted externally to 0.75.
a) Determine the limit values of Certainty comparison.
b) Bearing in mind that the input Favorable Degree of Evidence is μ = 0.72 and the Unfavorable Degree of Evidence is λ = 0.39, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
c) Bearing in mind there were changes in the inputs, and now the Favorable Degree of Evidence is μ = 0.63 and the Unfavorable Degree of Evidence is λ = 0.37, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
7.15 Describe the main purposes of the Contradiction Tolerance Factor CtrTF and the Certainty Tolerance Factor CerTF in a PANC.
7.16 What is the Decision Tolerance Factor DecTF? What is it used for in a PANC?
7.17 In a Paraconsistent Artificial Neural Cell, the Decision Tolerance Factor DecTF adjusted externally is 0.57.
a) Determine the limit values of Decision comparison.
b) Bearing in mind the input Favorable Degree of Evidence is μ = 0.85 and the Unfavorable Degree of Evidence is λ = 0.22, determine the value of the Real Degree of Evidence with the Interval of Evidence of the Cell output.
c) Bearing in mind there were changes, and now the Favorable Degree of Evidence is μ = 0.24 and the Unfavorable Degree of Evidence is λ = 0.86, determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
7.18 In a Paraconsistent Artificial Neural Cell the Decision Tolerance Factor DecTF adjusted externally is 0.63.
a) Determine the limit values of Decision comparison.
b) Bearing in mind that the Favorable Degree of Evidence is μ = 0.88 and the Unfavorable Degree of Evidence is λ = 0.21, determine the value of the Real Degree of Evidence with the Interval of Evidence of the Cell output.
c) After changes in the inputs, the Favorable Degree of Evidence is now μ = 0.26 and the Unfavorable Degree of Evidence is λ = 0.84; determine the value of the Real and Calculated Degree of Evidence with the Interval of Evidence of the Cell output.
7.19 How is the Logical Negation obtained in a PANC?
7.20 Consider the input of a PANC with the Favorable Degree of Evidence represented by μ and the Unfavorable Degree of Evidence represented by λ, whose values are presented as follows: μ = 0.0 and λ = 1.0.
a) Determine the output Resultant Degree of Evidence μE.
b) Determine the Resultant Degree of Evidence of the logical negation μEN of the value obtained in item a.
7.21 Consider the input of a PANC where the Favorable Degree of Evidence is μ = 0.89 and the Unfavorable Degree of Evidence is λ = 0.21.
a) Determine the output Resultant Real Degree of Evidence μER.
b) Determine the Resultant Real Degree of Evidence of the logical negation μERN of the value obtained in item a.
7.22 Two information sources send evidences relative to a certain Proposition P. The evidences come in the form of Favorable Degrees of Evidence:
Source 1 μ1 = 0.77
Source 2 μ2 = 0.67
Consider Source 1 as the input of the Unfavorable Degree of Evidence:
a) Represent the annotation according to PAL2v.
b) Determine the output Resultant Real Degree of Evidence μER.
c) Determine the Resultant Real Degree of Evidence of the logical negation μERN.
d) Represent the annotation according to PAL2v.
7.23 Two information sources send evidences relative to a certain Proposition P. The evidences come in the form of Favorable Degrees of Evidence:
Source 1 μ1 = 0.87
Source 2 μ2 = 0.57
Consider Source 1 as the input of the Unfavorable Degree of Evidence:
a) Represent the annotation according to PAL2v.
b) Determine the output Resultant Real Degree of Evidence.
c) Determine the Resultant Real Degree of Evidence of the logical negation μERN.
d) Represent the annotation according to PAL2v.
7.24 Describe how the complementation of the input and output signals of a PANC is obtained.
7.25 Based on the algorithms, project the executable program of the Basic Paraconsistent Artificial Neural Cell (bPANC) in language C, or in any other common language.
CHAPTER 8
Paraconsistent Artificial Neural Cell Family

Introduction
This chapter presents the Paraconsistent Artificial Neural Cells that compose a family of analysis cells derived from the configuration called Standard Paraconsistent Artificial Neural Cell (sPANC), whose algorithm was studied in the previous chapter. With the procedures presented and the considerations made to implement a sPANC, it is possible to implement algorithms that represent each one of the cells of a Paraconsistent Artificial Neural Network (PANNet). We will see in this chapter how the sPANC, represented by its algorithm, produces a family of Paraconsistent Artificial Neural Cells (PANCs) containing distinct cells. Each of these cells results from improvements and modifications of the sPANC algorithm. They present different functions, all based on the equations of Paraconsistent Annotated Logic. In this way, each cell is represented by an algorithm, which allows its programming in a conventional language, enabling easy implementation in applications of Decision Systems in AI.
8.1 Family of Paraconsistent Artificial Neural Cells
Paraconsistent Annotated Logic, through the methodology that resulted in the Paraconsistent Analysis Nodes (PANs) and, finally, in the Standard Paraconsistent Artificial Neural Cell (sPANC) presented in the previous chapter, provides a good AI tool for projecting systems able to deal with real situations and help in decision making. From the basic structure of the sPANC, we now introduce a family of Paraconsistent Artificial Neural Cells (PANCs) composed of distinct cells. Each of the cells presented originates from improvements and modifications of the sPANC algorithm. They have different functions, all based on the Paraconsistent Annotated Logic equations. Each PANC family component is configured through the selection of lines of the Algorithm of the Standard Paraconsistent Artificial Neural Cell. Disabling or activating command lines of the sPANC algorithm supports the construction of paraconsistent analysis Artificial Neural Cells with several functions and various purposes in a Decision Network.
8.2 The Analytical Paraconsistent Artificial Neural Cell (aPANC)
The Analytical Paraconsistent Artificial Neural Cell (aPANC) has the function of receiving the Degrees of Evidence, analyzing them and making the interconnection among the Cells of a Paraconsistent Artificial Neural Network. This cell associates the Resultant Degrees of Evidence according to the purposes of the analysis. Each aPANC analyzes two input values of Degrees of Evidence. The result of this analysis is a single Resultant Degree of Evidence μE obtained through the PAL2v equations. This single output value, in its turn, will be the new Degree of Evidence to be analyzed in other cells. The aPANC is the link that allows different regions of the Paraconsistent Neural Network to perform signal processing in a distributed fashion and by means of a number of parallel connections. Since this cell is created from a Standard Paraconsistent Artificial Neural Cell, it utilizes all the concepts and considerations studied before. The exclusive-use factors, the Contradiction Tolerance Factor and the Certainty Tolerance Factor, are included in its configuration. In the aPANC, when the Certainty Tolerance Factor is adjusted to its maximum value, CerTF = 1, the Resultant Degree of Evidence at the cell output is obtained through the PAL2v equations. When the Certainty Tolerance Factor is adjusted to a low value, that is, close to 0, the value of the Degree of Evidence undergoes greater restrictions to be considered as output. With the Certainty Tolerance Factor CerTF adjusted to zero, any result of the analysis carried out by the cell is considered indefinite. In the Paraconsistent Artificial Neural Network (PANNet), the different values of the Certainty Tolerance Factor will act on the various analytical connection cells, inhibiting or releasing regions according to the characteristics and purposes of the analyses performed. We may obtain a mathematical model to construct the algorithm of the Paraconsistent Artificial Neural Cell of analytical connection through the following equations.
Consider the input variables:
μRA, such that: 0 ≤ μRA ≤ 1
μRB, such that: 0 ≤ μRB ≤ 1
The limit values:
CtrTF - Contradiction Tolerance Factor, such that: 0 ≤ CtrTF ≤ 1
CerTF - Certainty Tolerance Factor, such that: 0 ≤ CerTF ≤ 1
Considering the complement of the second variable: μRBC = 1 - μRB = λ2
The first variable μRA is called μ1. Considering that the Contradiction Tolerance Factor is active, therefore CtrTF ≠ 1, the output Degree of Evidence will be calculated through equation (4.2), reproduced below:
µE = ((µ1 - λ2) + 1) / 2
The Normalized Degree of Contradiction will be calculated through:
µctr = (µ1 + λ2) / 2
The value of the Resultant Interval of Evidence is calculated through equation (4.7):
φE = 1 - |2μctr - 1|
For the Certainty Tolerance Factor CerTF, the limits are initially calculated through:
CerCSV = (1 + CerTF) / 2 and CerCIV = (1 - CerTF) / 2     (7.2)
Where:
CerCSV = Certainty Control Superior Value
CerCIV = Certainty Control Inferior Value
For CerTF ≠ 1:
If CtrCIV < μctr < CtrCSV and CerCIV ≤ μE ≤ CerCSV, there is no high Degree of Contradiction to be considered; therefore the output is computed through the equation of the Resultant Degree of Evidence, and the output is S1 = μE. If any of these conditions fails, output S1 must present an Indefinition, equal to 0.5. In either case, the Interval of Evidence output presents its calculated value. Figure 8.1 presents the simplified symbol of the Analytical Paraconsistent Artificial Neural Cell (aPANC) with all its input and output variables.
Figure 8.1 Representation of an Analytical Paraconsistent Artificial Neural Cell (aPANC).
The algorithm presented next describes the functioning of an Analytical Paraconsistent Artificial Neural Cell (aPANC). This aPANC algorithm is the same as that of the Standard Paraconsistent Artificial Neural Cell (sPANC); it retains only the lines utilized in this configuration.
8.2.1 Algorithm of the Analytical Paraconsistent Artificial Neural Cell (aPANC)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2
μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Enter the value of the Contradiction Tolerance Factor
CtrTF = C1 */ Contradiction Tolerance Factor, 0 ≤ CtrTF ≤ 1 */
4- Compute the Contradiction Control Superior and Inferior Values
CtrCSV = (1 + C1) / 2 and CtrCIV = (1 - C1) / 2
5- Enter the value of the Certainty Tolerance Factor
CerTF = C2 */ Certainty Tolerance Factor, 0 ≤ CerTF ≤ 1 */
6- Compute the Certainty Control Superior and Inferior Values
CerCSV = (1 + C2) / 2 and CerCIV = (1 - C2) / 2
7- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
8- Compute the Normalized Degree of Contradiction
µctr = (µ1 + λ2) / 2
9- Compute the Resultant Interval of Evidence
φE = 1 - |2μctr - 1|
10- Compute the Resultant Degree of Evidence
µE = ((µ1 - λ2) + 1) / 2
11- Present the resulting output signals from the conditionals:
If: CtrCIV < μctr < CtrCSV and CerCIV ≤ μE ≤ CerCSV
Then: S1 = μE and S2 = φE
Else: S1 = 0.5 */ Indefinition */ and S2 = φE
12- End
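As the exercises for the cells of this family request, the algorithm can be projected directly as an executable program. A minimal sketch in language C follows; the function name apanc and the output-pointer convention are illustrative assumptions, not the book's notation.

#include <stdio.h>
#include <math.h>

/* Analytical Paraconsistent Artificial Neural Cell (aPANC).
   mu1, mu2: input Degrees of Evidence in [0,1];
   ctrTF, cerTF: tolerance factors in [0,1];
   s1 receives the Resultant Degree of Evidence (or 0.5 = Indefinition);
   s2 receives the Resultant Interval of Evidence. */
void apanc(double mu1, double mu2, double ctrTF, double cerTF,
           double *s1, double *s2)
{
    double ctrCSV = (1.0 + ctrTF) / 2.0, ctrCIV = (1.0 - ctrTF) / 2.0;
    double cerCSV = (1.0 + cerTF) / 2.0, cerCIV = (1.0 - cerTF) / 2.0;
    double lambda2 = 1.0 - mu2;                   /* step 7 */
    double muctr = (mu1 + lambda2) / 2.0;         /* step 8 */
    double phiE  = 1.0 - fabs(2.0 * muctr - 1.0); /* step 9 */
    double muE   = ((mu1 - lambda2) + 1.0) / 2.0; /* step 10 */

    if (muctr > ctrCIV && muctr < ctrCSV && muE >= cerCIV && muE <= cerCSV)
        *s1 = muE;
    else
        *s1 = 0.5; /* Indefinition */
    *s2 = phiE;
}

int main(void)
{
    double s1, s2;
    apanc(0.8, 0.6, 0.5, 1.0, &s1, &s2);
    printf("S1 = %.3f  S2 = %.3f\n", s1, s2);
    return 0;
}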
8.3 The Real Analytical Paraconsistent Artificial Neural Cell (RaPANC)
The Real Analytical Paraconsistent Artificial Neural Cell (RaPANC) carries out the paraconsistent analysis presenting at the output the value of the Real Degree of Evidence μER. As mentioned before, μER is a value from which all the effect of the inconsistency existing at the inputs, represented by the value of the Degree of Contradiction, has been removed. Therefore, there is no need to adjust contradiction levels, and the Contradiction Tolerance Factor is excluded from the analysis process. Supposing the Contradiction Tolerance Factor is inactive, that is, CtrTF = 1.0, the output Degree of Evidence will be given by the Real Resultant Degree of Evidence μER. The Resultant Degree of Evidence is calculated through:
µE = ((µ1 - λ2) + 1) / 2
The Real Degree of Evidence is obtained through:
If μE > 0.5: μER = 1 - d/2
If μE < 0.5: μER = d/2
where the distance d is calculated through:
d = √((1 - |µ - λ|)² + ((µ + λ) - 1)²)
Considering the two outputs:
S1 = Resultant Degree of Evidence output μER
S2 = Interval of Evidence signal output φE
and the Certainty Limit Values:
CerCSV = (1 + CerTF) / 2 and CerCIV = (1 - CerTF) / 2
the values of outputs S1 and S2 are obtained through the following comparisons. If the value of the Interval of Evidence φE is less than or equal to 0.25, or d is greater than or equal to 1, the value of the Real Degree of Evidence will be an Indefinition, therefore 0.5. If this condition does not occur, the value of the Resultant Degree of Evidence presented at the output will be the Real Degree of Evidence μER, inserted in the interval proposed by the limits of the Certainty Tolerance Factor CerTF. In case the action of the Certainty Tolerance Factor is undesired, it is enough to make it equal to 0 and it will be inactive in the RaPANC. In either case, the Interval of Evidence output will present its calculated value. Figure 8.2 presents the figure and the simplified symbol of the Real Analytical Paraconsistent Artificial Neural Cell (RaPANC) with all the output and input variables.
Figure 8.2 Representation of the Real Analytical Paraconsistent Artificial Neural Cell (RaPANC).
Next we present the algorithm that describes the functioning of a Real Analytical Paraconsistent Artificial Neural Cell (RaPANC). The RaPANC algorithm is the same as that of the Standard Paraconsistent Artificial Neural Cell; it retains only the lines utilized for this particular configuration.
8.3.1 Algorithm of the Real Analytical Paraconsistent Artificial Neural Cell (RaPANC)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2
μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Enter the value of the Contradiction Tolerance Factor
CtrTF = C1 */ Contradiction Tolerance Factor, 0 ≤ CtrTF ≤ 1 */
4- Compute the Contradiction Control Superior and Inferior Values
CtrCSV = (1 + C1) / 2 and CtrCIV = (1 - C1) / 2
5- Enter the value of the Certainty Tolerance Factor
CerTF = C2 */ Certainty Tolerance Factor, 0 ≤ CerTF ≤ 1 */
6- Compute the Certainty Control Superior and Inferior Values
CerCSV = (1 + C2) / 2 and CerCIV = (1 - C2) / 2
7- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
8- Compute the Normalized Degree of Contradiction
µctr = (µ1 + λ2) / 2
9- Compute the Resultant Interval of Evidence
φE = 1 - |2μctr - 1|
10- Compute the Resultant Degree of Evidence
µE = ((µ1 - λ2) + 1) / 2
11- Compute the distance d
d = √((1 - |µ1 - λ2|)² + ((µ1 + λ2) - 1)²)
12- Compute the Real Resultant Degree of Evidence
If μE > 0.5, do: μER = 1 - d/2
If μE < 0.5, do: μER = d/2
If μE = 0.5, do: μER = μE
13- Do CtrTF = 1 and present at the output:
If: φE ≤ 0.25 or d ≥ 1
Then: S1 = 0.5 */ Indefinition */ and S2 = φE
Else: S1 = μER and S2 = φE
14- End
8.4 The Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC)
The Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC) has the function of establishing logical connectives between signals representative of Degrees of Evidence. The main logical connection cells are those that perform the operations of maximization (OR) and minimization (AND). For maximization, a simple analysis is first done through the Degree of Evidence equation, which indicates which of the two input signals has the higher value. With this information, the representative algorithm of the cell establishes the output signal. The equation utilized and the conditions that determine the output for a maximization process are as follows.
Consider the input variables:
μ1A, such that: 0 ≤ μ1A ≤ 1, and μ1B, such that: 0 ≤ μ1B ≤ 1.
The Resultant Degree of Evidence is calculated by doing μ1A = μ1 and λ2 = 1 - μ1B:
µE = ((µ1 - λ2) + 1) / 2
To determine the higher value input:
If μE ≥ 0.5 ⇒ μ1A ≥ μ1B ⇒ the output is μ1A
If μE < 0.5 ⇒ μ1A < μ1B ⇒ the output is μ1B
Figure 8.3 shows the representative figure and the simplified symbol of the PANCSiLC, which performs the maximization between the two Degree of Evidence signals μ1A and μ1B.
Figure 8.3 Symbol of the Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC) in the maximization process (OR).
For minimization, the procedure is similar to that of maximization, with the difference that the choice falls on the lower value input. Initially, a simple analysis is done through the equations, which indicates which of the two input signals has the lower value. The equation utilized and the conditions that determine the output for a minimization process are as follows.
If μE ≥ 0.5 ⇒ μ1A ≥ μ1B ⇒ the output is μ1B
If μE < 0.5 ⇒ μ1A < μ1B ⇒ the output is μ1A
Figure 8.4 presents the symbol of a PANCSiLC of minimization. This PANCSiLC carries out the analysis between two Degree of Evidence signals μ1A and μ1B, and establishes the lower value as output.
Figure 8.4 Symbol of the Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC) in the minimization process (AND).
Next we will present the algorithm which describes the functioning of a Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC) for the two connectives OR and AND.
8.4.1 Algorithm of the Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2
μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
4- Compute the Normalized Degree of Contradiction
µctr = (µ1 + λ2) / 2
5- Compute the Resultant Interval of Evidence
φE = 1 - |2μctr - 1|
6- Compute the Degree of Evidence
µE = ((µ1 - λ2) + 1) / 2
7- Do CtrTF = 1 and CerTF = 0 and present the resulting output signals from the conditionals:
For maximization:
If μE > 0.5, do: S1 = μ1 and S2 = 0.5 */ input evidence of greater value */
If μE < 0.5, do: S1 = λ2 and S2 = 0.5 */ input evidence of greater value */
For minimization:
If μE > 0.5, do: S1 = λ2 and S2 = 0.5 */ input evidence of lower value */
If μE < 0.5, do: S1 = μ1 and S2 = 0.5 */ input evidence of lower value */
8- End
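A minimal sketch of the simple logical connection in language C follows. It reproduces the algorithm's assignments literally, so the second signal takes part in the analysis through its complement λ2 = 1 - μ2; the border case μE = 0.5, which the algorithm leaves open, is folded into one branch as an assumption, and the function name pancsilc is illustrative.

#include <stdio.h>

/* Simple Logical Connection cell (PANCSiLC): maximization (OR) or
   minimization (AND) over mu1 and lambda2 = 1 - mu2, as in step 7. */
double pancsilc(double mu1, double mu2, int maximize)
{
    double lambda2 = 1.0 - mu2;
    double muE = ((mu1 - lambda2) + 1.0) / 2.0; /* comparison analysis */

    if (maximize)
        return (muE > 0.5) ? mu1 : lambda2; /* greater of the two */
    return (muE > 0.5) ? lambda2 : mu1;     /* lower of the two */
}

int main(void)
{
    printf("OR : %.2f\n", pancsilc(0.8, 0.3, 1));
    printf("AND: %.2f\n", pancsilc(0.8, 0.3, 0));
    return 0;
}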
8.5 The Paraconsistent Artificial Neural Cell of Selective Logical Connection (PANCSeLC)
The Paraconsistent Artificial Neural Cell of Selective Logical Connection (PANCSeLC) is a cell of special logical connection, which carries out the logical functions of maximization or minimization, selecting one of the signals to be connected to the output and neutralizing the other. An undefined output of 0.5 is imposed on the neutralized signal. The PANCSeLC has two inputs and two outputs, in such a way that each output corresponds to its respective input signal. When the two input signals are applied to a PANCSeLC of maximization (OR), the higher value signal has free passage and appears at its respective output. Under these conditions, the lower value signal is held back and a value of Indefinition 0.5 appears at its output. When the cell has the function of minimization (AND), the higher value signal is held back, its corresponding output presenting Indefinition 0.5, while the lower value signal has free passage through the cell. By convention, if the two signals have equal values, the signal applied on the right of the cell prevails over the signal applied on the left. The result of the Degree of Evidence equation defines, through the conditions, which input Degree of Evidence signal will have free passage through the cell and which signal will be held back.
Consider the input variables:
μ1A, such that: 0 ≤ μ1A ≤ 1, and μ1B, such that: 0 ≤ μ1B ≤ 1.
The Resultant Degree of Evidence is calculated by doing μ1A = μ1 and λ2 = 1 - μ1B:
µE = ((µ1 - λ2) + 1) / 2
In the maximization:
If μE ≥ 0.5 ⇒ μ1 ≥ μ2, it results at the output: S1 = μ1 and S2 = 0.5
Else: S1 = 0.5 and S2 = μ2
In the minimization:
If μE ≥ 0.5 ⇒ μ1 ≥ μ2, it results at the output: S1 = 0.5 and S2 = μ2
Else: S1 = μ1 and S2 = 0.5
Figure 8.5 presents the symbol of the Paraconsistent Artificial Neural Cell of Selective Logical Connection (PANCSeLC) in the functions of maximization and minimization (Max/Min).
Figure 8.5 Representation of the Paraconsistent Artificial Neural Cell of Selective Logical Connection of Maximization and Minimization (Max/Min).
The Algorithm originated from the Standard Paraconsistent Artificial Neural Cell is presented as follows:
8.5.1 Algorithm of the Paraconsistent Artificial Neural Cell of Selective Logical Connection (PANCSeLC)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2
μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
4- Compute the Degree of Evidence
µE = ((µ1 - λ2) + 1) / 2
5- Do CtrTF = 1 and CerTF = 0 and present the resulting output signals from the conditionals:
For maximization:
If μE ≥ 0.5 ⇒ μ1 ≥ μ2, it results at the output: S1 = μ1 and S2 = 0.5
Else: S1 = 0.5 and S2 = μ2
For minimization:
If μE ≥ 0.5 ⇒ μ1 ≥ μ2, it results at the output: S1 = 0.5 and S2 = μ2
Else: S1 = μ1 and S2 = 0.5
6- End
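A minimal sketch of the selective connection in language C follows; the function name pancselc and the use of output pointers are illustrative assumptions.

#include <stdio.h>

/* Selective Logical Connection cell (PANCSeLC): routes one input to its
   output and neutralizes the other with the Indefinition value 0.5. */
void pancselc(double mu1, double mu2, int maximize, double *s1, double *s2)
{
    double lambda2 = 1.0 - mu2;
    double muE = ((mu1 - lambda2) + 1.0) / 2.0;

    if ((muE >= 0.5) == (maximize != 0)) { /* mu1 has free passage */
        *s1 = mu1;  *s2 = 0.5;
    } else {                               /* mu2 has free passage */
        *s1 = 0.5;  *s2 = mu2;
    }
}

int main(void)
{
    double s1, s2;
    pancselc(0.9, 0.4, 1, &s1, &s2); /* maximization */
    printf("Max: S1 = %.2f  S2 = %.2f\n", s1, s2);
    pancselc(0.9, 0.4, 0, &s1, &s2); /* minimization */
    printf("Min: S1 = %.2f  S2 = %.2f\n", s1, s2);
    return 0;
}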
8.6 Crossing Paraconsistent Artificial Neural Cell (cPANC)
The Crossing Paraconsistent Artificial Neural Cell (cPANC) has the function of conducting signals to determined regions of the Network. In principle, the signal that goes through the cell does not suffer any change, but it is compared with the Certainty Tolerance Factor CerTF, which, depending on the adjustment, may change the output. A cPANC is basically a connection cell with interconnected inputs, where the signal may suffer the interference of the Certainty Tolerance Factor CerTF.
Consider a Paraconsistent Artificial Neural Cell in which the input variable complement is: λ1 = 1 - μ1.
The Degree of Evidence is calculated through:
µE = ((µ1 - λ1) + 1) / 2
Consider the limit value:
CerTF - Certainty Tolerance Factor, such that: 0 ≤ CerTF ≤ 1
The Certainty Control limit values are calculated by:
CerCSV = (1 + CerTF) / 2 and CerCIV = (1 - CerTF) / 2
The Resultant Degree of Evidence at the output is obtained from the conditions:
If CerCIV ≤ μE ≤ CerCSV ⇒ μE = μ1
Else ⇒ μE = 0.5
Figure 8.6 presents the Crossing Paraconsistent Artificial Neural Cell (cPANC) with its simplified symbol.
Figure 8.6 Simplified Representation of the Crossing Paraconsistent Artificial Neural Cell (cPANC) and its simplified symbol.
The algorithm of the Crossing Paraconsistent Artificial Neural Cell is presented as follows.
8.6.1 Algorithm of the Crossing Paraconsistent Artificial Neural Cell (cPANC)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the Certainty Tolerance Factor
CerTF = C2 */ Certainty Tolerance Factor, 0 ≤ CerTF ≤ 1 */
3- Compute the Certainty Control Superior and Inferior Values
CerCSV = (1 + C2) / 2 and CerCIV = (1 - C2) / 2
4- Transform the Degree of Evidence 1 into an Unfavorable Degree of Evidence
λ1 = 1 - μ1 */ Unfavorable Degree of Evidence, 0 ≤ λ1 ≤ 1 */
5- Compute the Degree of Evidence
µE = ((µ1 - λ1) + 1) / 2
6- Do CtrTF = 1 and present at the output:
If: CerCIV ≤ μE ≤ CerCSV
Then: S1 = μE and S2 = 0.5
Else: S1 = 0.5 */ Indefinition */ and S2 = 0.5
7- End
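As a minimal sketch in language C (the function name cpanc is illustrative), the crossing cell reduces to a bounded pass-through:

#include <stdio.h>

/* Crossing cell (cPANC): passes mu1 through unchanged when it lies
   within the certainty limits derived from cerTF, else outputs 0.5. */
double cpanc(double mu1, double cerTF)
{
    double cerCSV = (1.0 + cerTF) / 2.0;
    double cerCIV = (1.0 - cerTF) / 2.0;
    double muE = ((mu1 - (1.0 - mu1)) + 1.0) / 2.0; /* equals mu1 */

    return (muE >= cerCIV && muE <= cerCSV) ? muE : 0.5;
}

int main(void)
{
    printf("%.2f\n", cpanc(0.9, 1.0)); /* passes:  0.90 */
    printf("%.2f\n", cpanc(0.9, 0.5)); /* blocked: 0.50 */
    return 0;
}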
8.7 Paraconsistent Artificial Neural Cell of Complementation (PANCC)
The Paraconsistent Artificial Neural Cell of Complementation (PANCC) is a cell able to produce the complement, in relation to the unit, of the signal applied at its input.
Consider a Paraconsistent Artificial Neural Cell in which the input variable complement is: λ1 = 1 - μ1.
The Degree of Evidence is calculated by:
µE = ((µ1 - λ1) + 1) / 2
The Resultant Degree of Evidence Complement is:
μEC = 1 - μE
Consider the limit value:
CerTF - Certainty Tolerance Factor, such that: 0 ≤ CerTF ≤ 1
The Certainty Control limit values are calculated by:
CerCSV = (1 + CerTF) / 2 and CerCIV = (1 - CerTF) / 2
The Resultant Degree of Evidence at the output is obtained from the conditions:
If CerCIV ≤ μEC ≤ CerCSV ⇒ μE = 1 - μ1
Else ⇒ μE = 0.5
Figure 8.7 presents the Paraconsistent Artificial Neural Cell of Complementation with its simplified symbol.
Figure 8.7 Algorithm and symbol of the Paraconsistent Artificial Neural Cell of Complementation (PANCC).
The algorithm of the Paraconsistent Artificial Neural Cell of Complementation is presented as follows.
8.7.1 Algorithm of the Paraconsistent Artificial Neural Cell of Complementation (PANCC)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Transform the Degree of Evidence 1 into an Unfavorable Degree of Evidence
λ1 = 1 - μ1 */ Unfavorable Degree of Evidence, 0 ≤ λ1 ≤ 1 */
3- Enter the value of the Certainty Tolerance Factor
CerTF = C2 */ Certainty Tolerance Factor, 0 ≤ CerTF ≤ 1 */
4- Compute the Certainty Control Superior and Inferior Values
CerCSV = (1 + C2) / 2 and CerCIV = (1 - C2) / 2
5- Compute the Degree of Evidence
µE = ((µ1 - λ1) + 1) / 2
6- Compute the Complementation of the Degree of Evidence
μEC = 1 - μE
7- Do CtrTF = 1 and present at the output:
If: CerCIV ≤ μEC ≤ CerCSV
Then: S1 = μEC and S2 = 0.5
Else: S1 = 0.5 */ Indefinition */ and S2 = 0.5
8- End
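A minimal sketch of the complementation cell in language C follows; the function name pancc is illustrative.

#include <stdio.h>

/* Complementation cell (PANCC): outputs 1 - mu1 when the complemented
   value lies within the certainty limits, else the Indefinition 0.5. */
double pancc(double mu1, double cerTF)
{
    double cerCSV = (1.0 + cerTF) / 2.0;
    double cerCIV = (1.0 - cerTF) / 2.0;
    double muE  = ((mu1 - (1.0 - mu1)) + 1.0) / 2.0; /* equals mu1 */
    double muEC = 1.0 - muE;                         /* complement */

    return (muEC >= cerCIV && muEC <= cerCSV) ? muEC : 0.5;
}

int main(void)
{
    printf("%.2f\n", pancc(0.2, 1.0)); /* 0.80 */
    printf("%.2f\n", pancc(0.2, 0.4)); /* outside limits: 0.50 */
    return 0;
}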
8.8 Paraconsistent Artificial Neural Cell of Equality Detection (PANCED)
A Paraconsistent Artificial Neural Cell of Equality Detection (PANCED) consists of a Paraconsistent Artificial Neural Cell whose main function is to compare two values of Degrees of Evidence applied at the inputs and to generate a response relative to their equality in the closed interval between 0.0 and 1.0. Thus, a PANCED is a cell that supplies a Resultant Degree of Evidence expressing an equality factor between the two values applied at the inputs. In a Paraconsistent Artificial Neural Network, the result of this comparison may be utilized as a recognition signal for a certain pattern one wishes to find or recognize in certain parts of the network. The use of this cell is therefore important in the pattern classification function of a PANNet. To form the PANCED, the Normalized Degree of Contradiction is calculated and its value compared against the limits defined by the Contradiction Tolerance Factor CtrTF. This defines two output values, as follows:
a) If the comparison done with the Contradiction Tolerance Factor CtrTF results in True, it means that the signals are considered equal. The signal at the output will be 1.0, indicating that the pattern was recognized.
b) If the comparison done with the Contradiction Tolerance Factor CtrTF results in False, it means that the signals are considered unequal. The signal at the output will be 0.0, indicating that the pattern was not recognized.
Hence, the PANCED may be described by means of an algorithm through the following equations from the fundamentals of PAL2v.
Consider the Degrees of Evidence applied at the inputs:
μ1A, such that: 0 ≤ μ1A ≤ 1, and μ1B, such that: 0 ≤ μ1B ≤ 1
The Unfavorable Degree of Evidence is calculated by: λ = 1 - μ1B
The limit value:
CtrTF - Contradiction Tolerance Factor, such that: 0 ≤ CtrTF ≤ 1
The Normalized Degree of Contradiction will be calculated by:
µctr = (µ1A + λ) / 2
The maximum and minimum recognition limits are computed as the Superior and Inferior Contradiction Control values:
CtrCSV = (1 + CtrTF) / 2 and CtrCIV = (1 - CtrTF) / 2
The logical state of output S1 is obtained through the following comparisons:
If CtrCIV < μctr < CtrCSV, then: S1 = 1 */ Recognized Pattern */
Else: S1 = 0 */ False */
With these observations, a Paraconsistent Artificial Neural Cell of Equality Detection (PANCED) is described utilizing the input and output variables together with the adjustment signal.
Figure 8.8 Representation and symbol of a Paraconsistent Artificial Neural Cell of Equality Detection (PANCED).
The algorithm of the Paraconsistent Artificial Neural Cell of Equality Detection (PANCED) is shown as follows.
8.8.1 Algorithm of the Paraconsistent Artificial Neural Cell of Equality Detection (PANCED)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2
μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Enter the value of the Contradiction Tolerance Factor
CtrTF = C1 */ Contradiction Tolerance Factor, 0 ≤ CtrTF ≤ 1 */
4- Compute the Contradiction Control Superior and Inferior Values
CtrCSV = (1 + CtrTF) / 2 and CtrCIV = (1 - CtrTF) / 2
5- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
6- Compute the Normalized Degree of Contradiction
µctr = (µ1 + λ2) / 2
7- Do CerTF = 0 and present the resulting output signals from the conditionals:
If: CtrCIV < μctr < CtrCSV
Then: S1 = 1
Else: S1 = 0
8- End
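A minimal sketch of the equality detection in language C follows; the function name panced is illustrative.

#include <stdio.h>

/* Equality detection cell (PANCED): outputs 1.0 when the two input
   Degrees of Evidence are equal within the contradiction tolerance. */
double panced(double mu1, double mu2, double ctrTF)
{
    double ctrCSV = (1.0 + ctrTF) / 2.0;
    double ctrCIV = (1.0 - ctrTF) / 2.0;
    double lambda2 = 1.0 - mu2;
    double muctr = (mu1 + lambda2) / 2.0; /* 0.5 when mu1 == mu2 */

    return (muctr > ctrCIV && muctr < ctrCSV) ? 1.0 : 0.0;
}

int main(void)
{
    printf("%.0f\n", panced(0.70, 0.72, 0.1)); /* 1: nearly equal */
    printf("%.0f\n", panced(0.90, 0.20, 0.1)); /* 0: unequal */
    return 0;
}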
8.9 Paraconsistent Artificial Neural Cell of Decision (PANCD)
The Paraconsistent Artificial Neural Cell of Decision (PANCD) has the main function of working as a decision node in paraconsistent analysis Artificial Neural Networks. This cell receives two input signals, resulting from the analyses performed by the other cells that compose the Network, and its output establishes the conclusion of the analysis. Thus, a PANCD presents only one of three values as the result of the analysis:
a) Value 1, representing the conclusion "True"
b) Value 0, representing the conclusion "False"
c) Value 0.5, representing the conclusion "Indefinition"
The Decision Cell has one single external adjustment and, since it originates from a Standard Paraconsistent Artificial Neural Cell, it may be described by means of an algorithm. With the concepts presented, a mathematical model of a Paraconsistent Artificial Neural Cell of Decision is developed from the following equations.
Consider the input variables:
μ1, such that: 0 ≤ μ1 ≤ 1, and μ2, such that: 0 ≤ μ2 ≤ 1
DecTF - Decision Tolerance Factor, such that: 0 ≤ DecTF ≤ 1
The Unfavorable Degree of Evidence is obtained through: λ2 = 1 - μ2
The Resultant Degree of Evidence is calculated by:
µE = ((µ1 - λ2) + 1) / 2
The Falsehood and Truth Limit Values:
TLV = (1 + DecTF) / 2 and FLV = (1 - DecTF) / 2
Where:
TLV = Truth Limit Value
FLV = Falsehood Limit Value
The logical states of output S1 are obtained through the comparisons carried out as follows:
If μE ≥ TLV, then: S1 = 1 */ True */
If μE ≤ FLV, then: S1 = 0 */ False */
Else: S1 = 0.5 */ Indefinition */
With these observations, we describe a Paraconsistent Artificial Neural Cell of Decision utilizing the input and output variables along with the adjustment signals. The representation of a Paraconsistent Artificial Neural Cell of Decision (PANCD) with its simplified symbol is shown in figure 8.9.
Figure 8.9 Representation and simplified symbol of the Paraconsistent Artificial Neural Cell of Decision (PANCD).
The algorithm of the Paraconsistent Artificial Neural Cell of Decision (PANCD) is shown as follows.
8.9.1 Algorithm of the Paraconsistent Artificial Neural Cell of Decision (PANCD)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Enter the value of the input Degree of Evidence 2
μ2 */ Degree of Evidence 2, 0 ≤ μ2 ≤ 1 */
3- Enter the value of the Decision Tolerance Factor
DecTF = C3 */ Decision Tolerance Factor, 0 ≤ DecTF ≤ 1 */
4- Compute the Decision Superior and Inferior Limit Values
TLV = (1 + C3) / 2 and FLV = (1 - C3) / 2
5- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
6- Compute the Resultant Degree of Evidence
µE = ((µ1 - λ2) + 1) / 2
7- Do CerTF = 0 and present the resulting output signals from the conditionals:
If: μE ≥ TLV, then: S1 = 1 */ True */
If: μE ≤ FLV, then: S1 = 0 */ False */
Else: S1 = 0.5 */ Indefinition */
8- End
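A minimal sketch of the decision cell in language C follows; the function name pancd is illustrative.

#include <stdio.h>

/* Decision cell (PANCD): concludes True (1.0), False (0.0) or
   Indefinition (0.5) from two Degrees of Evidence and decTF. */
double pancd(double mu1, double mu2, double decTF)
{
    double tlv = (1.0 + decTF) / 2.0; /* Truth Limit Value */
    double flv = (1.0 - decTF) / 2.0; /* Falsehood Limit Value */
    double muE = ((mu1 - (1.0 - mu2)) + 1.0) / 2.0;

    if (muE >= tlv) return 1.0; /* True */
    if (muE <= flv) return 0.0; /* False */
    return 0.5;                 /* Indefinition */
}

int main(void)
{
    /* both favorable evidences high: muE = 0.85 >= TLV = 0.8 -> True */
    printf("%.1f\n", pancd(0.9, 0.8, 0.6));
    return 0;
}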
8.10 Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD)
The Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD) has the main function of signaling maximum or minimum levels. From the comparison between the input signal and the value of the Decision Tolerance Factor DecTF, a signal representing True or Undefined is sent to the output. Thus, the cPANCD presents only one of two values as the result of the analysis:
a) Value 1, representing the conclusion "True".
b) Value 0.5, representing the conclusion "Indefinition".
With a single external adjustment and since it originates from the Standard Paraconsistent Artificial Neural Cell, the cPANCD can also be described by an algorithm. With the concepts presented, a mathematical model of a Crossing Paraconsistent Artificial Neural Cell of Decision is developed from the PAL2v equations.
Consider the input variable:
μ1, such that: 0 ≤ μ1 ≤ 1
The complement of the input variable is: λ1 = 1 - μ1
The Degree of Evidence is calculated by:
µE = ((µ1 - λ1) + 1) / 2
DecTF - Decision Tolerance Factor, such that: 0 ≤ DecTF ≤ 1
The Falsehood and Truth Limit Values:
TLV = (1 + DecTF) / 2 and FLV = (1 - DecTF) / 2
Where:
TLV = Truth Limit Value
FLV = Falsehood Limit Value
The logical state of output S1 is obtained through the comparisons carried out as follows:
If FLV ≤ μE ≤ TLV, then: S1 = 0.5 */ Indefinition */
Else: S1 = 1.0 */ True */
With these observations, we describe a Crossing Paraconsistent Artificial Neural Cell of Decision utilizing the input and output variables along with the adjustment signals. Figure 8.10 presents the Crossing Paraconsistent Artificial Neural Cell of Decision with its simplified symbol.
Figure 8.10 Simplified Representation of the Crossing Paraconsistent Artificial Neural Cell of Decision with its simplified symbol.
The algorithm of the Crossing Paraconsistent Artificial Neural Cell of Decision is presented as follows.
8.10.1 Algorithm of the Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD)
1- Enter the value of the input Degree of Evidence 1
μ1 */ Degree of Evidence 1, 0 ≤ μ1 ≤ 1 */
2- Transform the Degree of Evidence 1 into an Unfavorable Degree of Evidence
λ1 = 1 - μ1 */ Unfavorable Degree of Evidence, 0 ≤ λ1 ≤ 1 */
3- Enter the value of the Decision Tolerance Factor
DecTF = C3 */ Decision Tolerance Factor, 0 ≤ DecTF ≤ 1 */
4- Compute the Decision Superior and Inferior Limit Values
TLV = (1 + C3) / 2 and FLV = (1 - C3) / 2
5- Compute the Degree of Evidence
µE = ((µ1 - λ1) + 1) / 2
6- Do CtrTF = 1 and present at the output the state of S1, obtained through the comparisons done as follows:
If: FLV ≤ μE ≤ TLV, then: S1 = 0.5 */ Indefinition */
Else: S1 = 1 */ True */
7- End
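A minimal sketch of the crossing decision cell in language C follows; the function name cpancd is illustrative.

#include <stdio.h>

/* Crossing decision cell (cPANCD): signals True (1.0) only when the
   input reaches a maximum or minimum level set by decTF, else 0.5. */
double cpancd(double mu1, double decTF)
{
    double tlv = (1.0 + decTF) / 2.0;
    double flv = (1.0 - decTF) / 2.0;
    double muE = ((mu1 - (1.0 - mu1)) + 1.0) / 2.0; /* equals mu1 */

    return (muE >= flv && muE <= tlv) ? 0.5 : 1.0;
}

int main(void)
{
    printf("%.1f\n", cpancd(0.95, 0.8)); /* above TLV = 0.9 -> 1.0 */
    printf("%.1f\n", cpancd(0.60, 0.8)); /* inside limits  -> 0.5 */
    return 0;
}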
8.11 Final Remarks
We presented in this chapter a family of Paraconsistent Artificial Neural Cells (PANCs), created from improvements of the Standard Paraconsistent Artificial Neural Cell (sPANC), represented by the algorithm studied in the previous chapter. The PANCs were designed to present characteristics able to model certain functions of an Artificial Neural Network for decision making. Thus, each cell delivers a distinct functional response and, when conveniently interconnected, the cells compose Paraconsistent Artificial Neural Units (PANUs) with more specialized functions in the network. We will see in the following chapters that the PANUs perform several functions, depending on the composition and interconnections of their internal cells; they will even present characteristics similar to the function of a biological neuron. All the cells of the family presented in this work were derived from the PAL2v equations and are therefore based on simple mathematical models, making their implementation easier. During the implementation of a Paraconsistent Artificial Neural Network (PANNet) project, other cells may be created from these elementary cells of the family, based on the same principles of Paraconsistent Logic. Many of the cells presented in this chapter are fundamental for the PANNet project. The first and second cells presented, the Analytical Paraconsistent Neural Cell and the Real Analytical Paraconsistent Neural Cell, are the most important in the family because they have the function of interconnecting the many neural units to perform the paraconsistent analysis and accomplish a parallel function in the data processing carried out by the network. When the information processed by the Analytical Connection Cells comes with a high Degree of Contradiction, an Interval of Evidence signal indicates the contradictory characteristic of the analysis. Therefore,
the signal of the Resultant Interval of Evidence, despite not directly influencing the analysis, allows the system to take measures to dilute these contradictions. The Connection Cell also receives the value of the Certainty Tolerance Factor, which works as a control of the Connection Cells, disabling or enabling them for the complete functioning of the network. Another component of the family is the Paraconsistent Artificial Neural Cell of Decision. This cell processes two signals, performing a paraconsistent analysis between them, and, depending on the result, determines a final conclusion represented by one of three resulting logical states: False = 0.0, True = 1.0 or Undefined = 0.5. This Cell is therefore indicated to make a final, unquestionable decision based on information that comes in the form of Resultant Degree of Evidence signals. The final decision in a Paraconsistent Neural Network may be local, that is, relative to a particular region of the Network, or total, comprehending all the analyses carried out. The Paraconsistent Artificial Neural Cell also has the capacity of being transformed into a Learning Cell (lPANC), whose major functional characteristic is the capacity of recognizing patterns, even if the signals come impregnated with noise. The details of the functioning of a Learning Paraconsistent Artificial Neural Cell (lPANC) will be exhaustively studied in the next chapter.
Exercises
8.1 What is the function of the Analytical Paraconsistent Artificial Neural Cell (aPANC)?
8.2 Describe the basic functioning of the Analytical Paraconsistent Artificial Neural Cell (aPANC).
8.3 Based on the algorithm, project the executable program of the aPANC in language C, or in any other common programming language.
8.4 Give the simplified representation of an aPANC.
8.5 What is the function of the Real Analytical Paraconsistent Artificial Neural Cell (RaPANC)?
8.6 Describe the basic functioning of a Real Analytical Paraconsistent Artificial Neural Cell (RaPANC).
8.7 Based on the algorithm, project the executable program of the RaPANC in language C or another common programming language.
8.8 Give the simplified representation of a RaPANC.
8.9 What is the function of the Paraconsistent Artificial Neural Cell of Simple Logical Connection (PANCSiLC)?
8.10 Based on the algorithms, project the executable programs of the Paraconsistent Artificial Neural Cells of Simple Logical Connection for maximization and for minimization, in language C or another common programming language.
8.11 Describe the function of the Paraconsistent Artificial Neural Cell of Selective Logical Connection for maximization and for minimization.
8.12 Based on the algorithms, project the executable programs of the Paraconsistent Artificial Neural Cells of Selective Logical Connection for a maximization and a minimization process, in language C or another common programming language.
8.13 Explain the functioning of the Crossing Paraconsistent Artificial Neural Cell (cPANC).
8.14 Based on the algorithm, project the executable program of the Crossing Paraconsistent Artificial Neural Cell (cPANC) in language C or another common programming language.
8.15 Explain the functioning of a Paraconsistent Artificial Neural Cell of Complementation (PANCC).
8.16 Based on the algorithm, project the executable program of the Paraconsistent Artificial Neural Cell of Complementation (PANCC) in language C or another common programming language.
8.17 Describe the functioning of a Paraconsistent Artificial Neural Cell of Equality Detection (PANCED).
8.18 Based on the algorithm, project the executable program of the Paraconsistent Artificial Neural Cell of Equality Detection (PANCED) in language C or another common programming language.
8.19 Describe the functioning of a Paraconsistent Artificial Neural Cell of Decision (PANCD).
8.20 Based on the algorithm, project the executable program of the Paraconsistent Artificial Neural Cell of Decision (PANCD) in language C or another common programming language.
8.21 Describe the functioning of a Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD).
8.22 Based on the algorithm, project the executable program of the Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD) in language C or another common programming language.
CHAPTER 9
Learning Paraconsistent Artificial Neural Cell
Introduction
A Learning Paraconsistent Artificial Neural Cell (lPANC) is created from the Standard Paraconsistent Artificial Neural Cell (sPANC) presented in the previous chapters. The lPANC may be trained to learn a pattern by using the paraconsistent analysis method applied through an algorithm. In practice, cells like these may be utilized in primary sensor systems, that is, sensors whose purpose is to receive the first pieces of information and transform these primary stimuli into electric signals to be treated by the network. In this case, the training is done so that the cells recognize only the values 0.0 or 1.0; however, we will see that a Paraconsistent Neural Cell, by indirect means, can recognize patterns represented by any real value between 0.0 and 1.0. In Paraconsistent Artificial Neural Networks, the Learning Cells are projected to be used as parts of memory units, or as pattern sensors in primary layers. In a model of the neural functions of the brain, for example, one can make an analogy between these primary cells and the photoreceptor cells of human sight called cones and rods. At the end of this chapter, several results of tests on a lPANC will be presented, in which patterns of different values are applied. These procedures show that, through a simple process, which does not involve complex mathematical equations, it is possible to develop configurations of cells through PAL2v that model the behavior of biological neurons.
9.1 Learning Paraconsistent Artificial Neural Cell (lPANC)
A Learning Cell is basically an Analytical Paraconsistent Artificial Neural Cell (aPANC), originated from the Standard Paraconsistent Artificial Neural Cell (sPANC), with the output connected to the input of the Unfavorable Degree of Evidence. To study the lPANC in detail, as well as its training process, the Analytical Paraconsistent Artificial Neural Cell is initially considered without tolerance factors and with no learning performed. According to the paraconsistent analysis seen in the previous chapters, in these initial conditions the cell has two inputs, μ1A and μ1B, both with the undefined value of 0.5. Consider the input variables such that μ1A = 0.5 and μ1B = 0.5, analyzed through the equation that determines the value of the output Resultant Degree of Evidence:
µE = ((µ1A - µ1BC) + 1) / 2, where µ1BC = 1 - µ1B
With the values considered, we have:
µE = ((0.5 - 0.5) + 1) / 2
Therefore: μE = 0.5
This results in Indefinition, with the value of the Resultant Degree of Evidence μE = 0.5. To use the Normalized Degree of Contradiction as a value for external control, it is calculated through:
µctr = (µ1A + µ1BC) / 2
In this case: µctr = (0.5 + 0.5) / 2, therefore μctr = 0.5
Under these conditions, a lPANC is ready to learn patterns. Figure 9.1 shows an Analytical Paraconsistent Artificial Neural Cell in the proposed conditions with the input and output values, and no tolerance factors.
Figure 9.1 A Basic Paraconsistent Artificial Neural Cell without a learning pattern.
From this virgin cell, with the value of the output Resultant Degree of Evidence equal to 0.5, a learning process is started, defined through an algorithm as follows.
9.1.1 Learning of a Paraconsistent Artificial Neural Cell
The Learning Cells may be trained to learn any real value in the closed interval [0, 1]. Initially, the learning process of the lPANC is shown with the extreme values 0.0 or 1.0, composing what we call primary sensorial cells. It is defined that, in a Paraconsistent Artificial Neural Network (PANNet), the primary sensorial cells consider a binary digit as pattern, where the value 1.0 is equivalent to the logical state "True" and the value 0.0 is equivalent to the logical state "False". The primary sensorial cells are therefore those that pick information straight from the sensors. In an image reception system, for example, if an input signal in the learning process has a Degree of Evidence of value 0.0, the cell will learn that a Falsehood value represents the pattern in that pixel. If the value 0.0 appears at the input repeated times, the Resultant Degree of Evidence from the analysis will increase gradually at the output until it reaches the value 1.0. Under these conditions, we say that the cell learned the Falsehood pattern. The same procedure is adopted if the value 1.0 is applied repeatedly at the input: when the Resultant Degree of Evidence at the cell output reaches the value 1.0, we say that the cell learned the Truth pattern. Therefore, a Paraconsistent Artificial Neural Cell can be prepared to learn two kinds of patterns: the Truth pattern or the Falsehood pattern. Since the reload of the lPANC is done only through the Resultant Degree of Evidence, the calculation of the Normalized Degree of Contradiction is not present in the algorithm; it may be added, however, if the project needs this value to carry out external control.
9.1.1.1 Learning Factor lF
For the learning process of the lPANC, the Learning Factor lF, adjusted externally, is introduced. Depending on the value of lF, a faster or slower learning is provided to the lPANC. In the learning process, an equation gives the successive values of the Resultant Degree of Evidence μE(k) until it reaches the value 1.0. Therefore, from an initial Degree of Evidence μE(k), the values μE(k+1) are obtained up to μE(k+1) = 1.0. For a learning process of the Truth pattern, the learning equation is obtained from the Resultant Degree of Evidence equation:
µE(k+1) = ({µ1 - µE(k)C · lF} + 1) / 2
where: µE(k)C = 1 - µE(k) and 0 ≤ lF ≤ 1
The cell is considered completely trained when μE(k+1) = 1.0.
For a learning process of the Falsehood pattern, the complementation of the Favorable Degree of Evidence is also done, and the equation becomes:
µE(k+1) = ({µ1C - µE(k)C · lF} + 1) / 2
where: µE(k)C = 1 - µE(k), µ1C = 1 - µ1 and 0 ≤ lF ≤ 1
The cell is considered completely trained when μE(k+1) = 1.0. The Learning Factor lF is a real value in the closed interval [0, 1], attributed by external adjustment. As can be seen from the equation of the Resultant Degree of Evidence μE(k+1), the value of lF controls how quickly the successive results approach 1.0 and, therefore, how fast the cell learns. The flowchart and the learning algorithm are presented as follows.
9.1.1.2 The flowchart of the learning algorithm
The learning of a lPANC is done through a training process that consists of successively applying a pattern at the input of the Favorable Degree of Evidence signal until the contradictions diminish and a Resultant Degree of Evidence equal to 1.0 is obtained at the output. Figure 9.2 presents the flowchart for the learning of the Truth pattern by a lPANC.
Figure 9.2 Flowchart for the learning of the Truth pattern by a lPANC.
The learning algorithm consists of conditioning a Paraconsistent Artificial Neural Cell so that, at the end of the training, it recognizes the pattern presented as Degree of Evidence at the input. According to the characteristics and basic concepts of the Paraconsistent Annotated Logic, in the learning process of the Truth pattern the value of the applied pattern is 1.0. The differences between the input and output values are the contradictions, which will only be annulled when the Resultant Degree of Evidence reaches the value 1.0. Therefore, when this Truth pattern is repeatedly applied at the input, the equation for the determination of the Resultant Degree of Evidence dilutes the differences until the output reaches its maximum value of 1.0. Figure 9.3 presents the symbol of the cell and the final values at the inputs and output.
Figure 9.3 Symbol of the Learning Paraconsistent Artificial Neural Cell (lPANC).
The algorithm that performs the learning of the Truth pattern, where steps 2 to 5 are extracted from the Algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC), is presented as follows.
9.1.2 Algorithm of the Learning Paraconsistent Artificial Neural Cell (lPANC) (For the Truth Pattern)
1- Initial condition
μ1 = 0.5 and μE = μ2 = 0.5
2- Enter the value of the Learning Factor
lF = C4 */ Learning Factor, 0 ≤ lF ≤ 1 */
3- Transform the Degree of Evidence 2 into an Unfavorable Degree of Evidence
λ2 = 1 - μ2 */ Unfavorable Degree of Evidence, 0 ≤ λ2 ≤ 1 */
4- Enter the value of the Degree of Evidence of input 1
μ1 = 1.0 */ Degree of Evidence 1 (Truth pattern) */
5- Compute the Resultant Degree of Evidence
µE = ((µ1 - λ2 · lF) + 1) / 2
6- Consider the conditional
If μE ≠ 1.0, do μ2 = μE and return to step 3
7- Stop
According to the characteristics of the Paraconsistent Annotated Logic, in the learning process of the Falsehood pattern the value of the applied pattern is 0.0. The value 0.0 is applied successively at the input, as was done for the learning of the Truth pattern; this enables the Degree of Evidence equation to dilute the differences, or contradictions, between the input and the output until the Resultant Degree of Evidence at the output reaches its maximum value of 1.0. Figure 9.4 presents the flowchart of the algorithm that develops the learning of the Falsehood pattern.
Figure 9.4 Flowchart for the learning of the Falsehood pattern by a lPANC.
When the Resultant Degree of Evidence reaches its maximum value of 1.0, it indicates that there is no more contradiction between the input and the output of the cell; under these conditions, the cell may be considered trained for the Falsehood pattern. As seen in the flowchart of the previous figure, to make the learning process of the Falsehood pattern possible, differently from the previous algorithm, the input of the Favorable Degree of Evidence of the cell undergoes the action of the complement operator. The symbol of the cell with the input signals for the analysis is shown in Figure 9.5.
Figure 9.5 Symbol of the Learning Paraconsistent Artificial Neural Cell (lPANC) for the learning of the Falsehood pattern.
The algorithm that performs the learning of the Falsehood pattern is presented as follows. A line that complements the input of the Favorable Degree of Evidence was added to the algorithm; this line is also part of the algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC).
9.1.3 Algorithm of the Learning Paraconsistent Artificial Neural Cell (lPANC) (For the Falsehood Pattern)
1- Initial Condition: μ1 = 0.5 and μE = μ2 = 0.5
2- Enter the value of the Learning Factor: lF = C4 */ Learning Factor 0 ≤ lF ≤ 1 */
3- Transform Degree of Evidence 2 into the Unfavorable Degree of Evidence: λ2 = 1 - μ2 */ Unfavorable Degree of Evidence 0 ≤ λ2 ≤ 1 */
4- Enter the value of the Degree of Evidence of input 1: μ1 = 0.0 */ Degree of Evidence 1 */
5- Transform Degree of Evidence 1 into the Unfavorable Degree of Evidence: λ1 = 1 - μ1 */ Unfavorable Degree of Evidence 0 ≤ λ1 ≤ 1 */
6- Compute the Resultant Degree of Evidence:
μE = [(λ1 - λ2) + 1] / 2
7- Consider the conditional:
If μE ≠ 1, do μ2 = μE and return to step 3
8- Stop
A code sketch covering both learning loops is given below. We will see next that the characteristic of the paraconsistent analysis allows the cell, after the learning, to recognize the pattern that was taught, even if the signals arrive impregnated with noise. This learning characteristic and the tolerance to noise make the Paraconsistent Artificial Neural Cell an excellent tool for pattern recognition in decision systems in Artificial Intelligence projects.
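As a concrete illustration, the two learning loops above can be written out directly in code. The sketch below is in Python (the exercises at the end of the chapter suggest C or any common language); it uses the Learning Factor form of the recurrence, μE(k+1) = {[μ1 - (1 - μE(k))·lF] + 1} / 2, with the input complemented for the Falsehood pattern. The function name, tolerance and step limit are our own choices, not part of the original algorithms.

```python
def lpanc_learn(pattern, lf=1.0, tol=1e-6, max_steps=1000):
    """Train a Learning Paraconsistent Artificial Neural Cell (lPANC).

    pattern: 1.0 for the Truth pattern, 0.0 for the Falsehood pattern.
    lf:      Learning Factor, 0 <= lf <= 1.
    Returns the history of Resultant Degrees of Evidence.
    """
    mu_e = 0.5                        # virgin cell: Indefinition
    # Falsehood learning complements the Favorable Degree of Evidence input
    mu1 = pattern if pattern >= 0.5 else 1.0 - pattern
    history = [mu_e]
    for _ in range(max_steps):
        mu_ec = 1.0 - mu_e            # complemented feedback (unfavorable evidence)
        mu_e = ((mu1 - mu_ec * lf) + 1.0) / 2.0
        history.append(mu_e)
        if abs(mu_e - 1.0) < tol:     # contradiction annulled: pattern learned
            break
    return history

# Truth pattern, natural learning (lF = 1.0): 0.5, 0.75, 0.875, 0.9375, ...
print(lpanc_learn(1.0)[:4])
```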
9.1.4 Recognition of the Pattern to be Learned
A lPANC must be able to recognize any initial pattern and be ready to learn it. For example, a primary sensorial cell must first recognize and later learn the pattern applied at its input. As seen before, in the primary cells, by definition, only two types of pattern may be presented at the inputs: the Truth pattern of value 1.0 or the Falsehood pattern of value 0.0. In a process of initial learning, the cell does not know beforehand which pattern will be learned, whether the Falsehood pattern 0.0 or the Truth pattern 1.0. The pattern recognition may easily be done through an initial analysis of the calculated value of the Resultant Degree of Evidence. If, in the first calculation of the Resultant Degree of Evidence, the result is lower than the Indefinition value 0.5, the value tends to drop to the minimum value of 0.0; therefore, we conclude that the cell is not prepared to learn, but to unlearn, a process which will be studied later. In this case, the cell must undergo a process of inversion with the application of the Operator NOT, together with the application of the Complement Operator at the input of the Favorable Degree of Evidence, to start the learning of a Falsehood pattern, as exposed in the algorithms.
Consider a Learning Paraconsistent Artificial Neural Cell (lPANC) prepared to receive patterns, which may be Truth or Falsehood. First, an initial Resultant Degree of Evidence is calculated as follows. Consider that a pattern Up = μ1a (Unknown Pattern) is applied at input A of the lPANC. Once the cell is prepared for the learning, μE(k) = 0.5. Through the equations, we have:
μEi = [(Up - μE(k)) + 1] / 2
μEi = [(Up - 0.5) + 1] / 2
We verify that:
If Up = 1.0, then μEi = 0.75; since μEi > 0.5, the pattern is the Truth pattern.
If Up = 0.0, then μEi = 0.25; since μEi < 0.5, the pattern is the Falsehood pattern.
In this way, the following conditionals must be included in the learning algorithm:
If μEi > 0.5, it is the Truth pattern. Calculate through the equation:
μE(k+1) = {[μ1 - (μE(k)C)·lF] + 1} / 2
where μE(k)C = 1 - μE(k) and 0 ≤ lF ≤ 1.
If μEi < 0.5, it is the Falsehood pattern. Calculate through the equation:
μE(k+1) = {[μ1C - (μE(k)C)·lF] + 1} / 2
where μE(k)C = 1 - μE(k), μ1C = 1 - μ1 and 0 ≤ lF ≤ 1.
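A minimal sketch of this recognition step (the function name is ours) branches on the initial Resultant Degree of Evidence:

```python
def recognize(unknown_pattern, mu_ek=0.5):
    """Initial paraconsistent analysis: decide which pattern is to be learned."""
    mu_ei = ((unknown_pattern - mu_ek) + 1.0) / 2.0
    if mu_ei > 0.5:
        return "truth"      # learn with the input as applied
    if mu_ei < 0.5:
        return "falsehood"  # learn with the Favorable Degree of Evidence complemented
    return "undefined"      # Indefinition: restart with the virgin cell
```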
9.1.5 Unlearning of a Paraconsistent Artificial Neural Cell
Even after a lPANC is trained to recognize a certain pattern, if a totally different value is repeatedly applied at the input, a very high Degree of Contradiction will make the cell gradually “forget” the learned pattern. In these conditions, we say that it is in a process of unlearning. By applying values and using the algorithms, we note that the repetition of the new value applied at the input first decreases the Resultant Degree of Evidence until the analysis reaches the level of Indefinition. If this new value continues to be applied at the input, the Resultant Degree of Evidence at the output will decrease until it reaches the minimum value of zero. From the theoretical concepts of the Paraconsistent Annotated Logic, this means that the paraconsistent analysis is attributing a null Degree of Evidence to the Proposition which was initially learned. Consequently, the cell is attributing a maximum value to the logical negation of this initial Proposition; therefore, the new pattern, contrary to the learned pattern, must be confirmed. In the training algorithm of the lPANC, the situation of total unfavorable evidence to the initial Proposition is signaled when the Resultant Degree of Evidence reaches 0.0. The Normalized Degree of Contradiction μctr may also be monitored, because when its value reaches 1.0 the logical negation of the proposition is confirmed. In either case, the procedure after the confirmation is the application of the Operator NOT in the cell because, according to the basic concepts of the PAL2v, its action inverts the output Resultant Degree of Evidence μE. When the change of the Resultant Degree of Evidence of the cell occurs, from value 0.0 to 1.0, the lPANC will consider the value that appeared repeatedly as a new learned pattern, disregarding the pattern learned previously.
9.1.5.1 The Unlearning Factor ulF
According to what was seen before, the analysis of the Normalized Degree of Contradiction shows us whether the cell is in a learning or in an unlearning process. Thus, there is the possibility of introducing an Unlearning Factor ulF in the algorithm. For example, if in the learning process of the Falsehood pattern the output Resultant Degree of Evidence reaches the maximum value, that is, μE(k+1) = 1.0, the cell is completely trained and any variation that might happen in the output value is an unlearning. As lF was utilized in the learning process, ulF is utilized in the unlearning process. The equation for the Resultant Degree of Evidence is:
μE(k+1) = {[μ1C - (μE(k)C)·ulF] + 1} / 2
where μ1C = 1 - μ1, μE(k)C = 1 - μE(k) and 0 ≤ ulF ≤ 1.
For the unlearning process of the Truth pattern, the same procedure is followed, that is, when μE(k+1) = 1.0 the equation utilizes the Unlearning Factor:
μE(k+1) = {[μ1 - (μE(k)C)·ulF] + 1} / 2
where μE(k)C = 1 - μE(k) and 0 ≤ ulF ≤ 1.
As seen in the algorithm, the Unlearning Factor ulF may be any real value in the closed interval [0,1]. With this, a Learning Paraconsistent Artificial Neural Cell may be adjusted externally to speed up or slow down its unlearning process.
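To illustrate, a short sketch (variable names ours) of the unlearning of a learned Truth pattern: the contrary value 0.0 is applied repeatedly in the Truth-pattern equation with the Unlearning Factor.

```python
mu_e = 1.0                   # cell fully trained for the Truth pattern
ulf = 1.0                    # natural unlearning
for _ in range(5):           # the contrary pattern 0.0 is applied repeatedly
    mu_e = ((0.0 - (1.0 - mu_e) * ulf) + 1.0) / 2.0
    print(round(mu_e, 5))    # 0.5, 0.25, 0.125, 0.0625, 0.03125
```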
9.1.5.2 Confirmation of the learning of the negated proposition
When the Degree of Evidence results in μE(k+1) = 0.0 in the unlearning process, of either the Falsehood pattern or the Truth pattern, it means that the cell totally unlearned the pattern learned initially; therefore, the Operator NOT is applied, that is, μE(k+1) = 1.0. As seen in the learning process, to confirm the new pattern the complement of μ1A is applied.
9.2 Studies on the Complete Algorithm of the lPANC with Learning and Unlearning
A complete algorithm, which performs the learning and the unlearning, may be constructed utilizing all the adjustment conditions of the lPANC. This algorithm widens the analysis, bringing greater efficiency and controllability. The flowchart of figure 9.6 clearly demonstrates the whole process, comprising the recognition of the pattern to be learned and the learning and unlearning conditions of the lPANC.
Figure 9.6 Flowchart of the learning and unlearning process of the lPANC. (Start with a virgin cell, μE = 0.5; enter the initial pattern value μi = pI, 0 ≤ pI ≤ 1; compute the initial Degree of Evidence μEi = [(pI - 0.5) + 1] / 2; if μEi > 0.5, enter the Learning Factor lF = C1, 0 ≤ C1 ≤ 1, otherwise complement the input pattern, μr = 1 - pI; complement the Unfavorable Degree of Evidence input, μ2C = 1 - μ2, and connect the output to it, μ2C = μE; compute the Resultant Degree of Evidence μE(k+1) = {[μ1 - (μE(k)C)·lF] + 1} / 2; while 0 < μE < 1, enter a new pattern value μi = Pn, 0 ≤ Pn ≤ 1, and repeat; when μE = 0, negate the output (μE = 1), complement the new value of the pattern (μi = 1 - Pn) and enter the Unlearning Factor ulF = C1, 0 ≤ C1 ≤ 1.)
Through the adjustments of the two values, the Learning Factor lF and the Unlearning Factor ulF, a Paraconsistent Artificial Neural Cell can acquire speed in the learning and slowness in the unlearning process, or vice-versa. These adjustment conditions provided by the PANC are very important because they offer greater dynamism in the modeling of the Paraconsistent Artificial Neural Cell. Next, the complete algorithm is shown, with the symbol of the cell and its characteristics.
9.2.1 Complete Algorithm of the Learning Paraconsistent Artificial Neural Cell (lPANC)
1- Start: μE = 0.5 */ Virgin Cell */
2- Enter the initial input pattern: μi = Pi */ 0 ≤ Pi ≤ 1 */
3- Compute the initial Degree of Evidence:
μEi = [(Pi - 0.5) + 1] / 2
4- Determine the pattern to be learned through the conditions:
If μEi = 0.5, then go to item 1 (Start)
If μEi > 0.5, then go to item 6
Else, go to the next item
5- Complement the Favorable Degree of Evidence input: μ1 = 1 - Pi
6- Enter the Learning Factor lF: C4 = lF */ 0 ≤ lF ≤ 1 */
7- Connect the cell output to the Unfavorable Degree of Evidence input: μ2 = μE
8- Apply the Complement Operator to the Unfavorable Degree of Evidence input: μ2C = 1 - μ2 */ preparing the learning cell */
9- Compute the Degree of Evidence:
μE = {[μ1 - (μ2C)·C4] + 1} / 2
10- Determine the next step by the conditions:
If μE = 1, then consider the cell trained with the Truth pattern and go to item 14
If 0 < μE < 1, then go to item 15
If μE = 0, then consider the cell trained with the Falsehood pattern and go to the next item
11- Do the logic negation at the output: μE = 1
12- Do the complement at the Favorable Degree of Evidence input: μ1 = 1 - Pi
13- Change lF to the Unlearning Factor ulF: C1 = ulF
14- Enter the new input pattern: C2 = Pn */ 0 ≤ Pn ≤ 1 */
15- Return to step 7
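The complete algorithm translates almost directly into code. The sketch below is a hedged Python rendering of items 1 to 15; the function name, the pattern-stream interface and the tolerance standing in for the exact comparison μE = 0 are our own choices.

```python
def lpanc_run(patterns, lf=1.0, ulf=1.0, tol=1e-3):
    """Run the complete lPANC learning/unlearning cycle over a pattern stream.

    patterns: sequence of values in [0, 1]; the first is the initial pattern.
    Returns the list of Resultant Degrees of Evidence, one per analysis step.
    """
    p0 = patterns[0]                       # item 2: initial input pattern
    mu_ei = ((p0 - 0.5) + 1.0) / 2.0       # item 3: initial Degree of Evidence
    if mu_ei == 0.5:
        return []                          # item 4: Indefinition, restart needed
    complement = mu_ei < 0.5               # item 5: Falsehood branch complements input
    factor = lf                            # item 6: Learning Factor
    mu_e, outputs = 0.5, []
    for p in patterns:
        mu1 = 1.0 - p if complement else p
        mu2c = 1.0 - mu_e                  # items 7-8: feedback, complemented
        mu_e = ((mu1 - mu2c * factor) + 1.0) / 2.0   # item 9
        outputs.append(mu_e)
        if mu_e < tol:                     # item 10: total unlearning confirmed
            mu_e = 1.0                     # item 11: Operator NOT
            complement = not complement    # item 12: complement the input
            factor = ulf                   # item 13: switch to the Unlearning Factor
    return outputs

# Learn the Truth pattern, then unlearn it under the Falsehood pattern,
# roughly reproducing the rise and fall of figure 9.8:
print([round(v, 3) for v in lpanc_run([1.0] * 12 + [0.0] * 14)])
```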
The complete learning algorithm of a lPANC provides a Paraconsistent Artificial Neural Cell with the capacity to learn and unlearn any of the patterns or values in the closed real interval [0,1]. All the steps may be derived from the Algorithm of the Standard Paraconsistent Artificial Neural Cell (sPANC) studied in the previous chapters. We note that, up to item 4, the cell carries out an initial analysis and detects which pattern is to be learned. According to the result of this initial analysis, the cell is prepared for the learning of either the Falsehood pattern or the Truth pattern. In items 6, 7 and 8, the cell is prepared for learning utilizing the Learning Factor lF. In items 9 and 10, the learning process occurs. In item 10, the conditions determine: a) whether the cell learned; b) whether it is in the learning/unlearning process; c) whether it completely unlearned the learned pattern. For each situation, the conditions lead the process to obtain a new input pattern, with or without the change of the Learning Factor lF to the Unlearning Factor ulF. If the total unlearning is confirmed by the null Resultant Degree of Evidence, then, in item 11, the unlearning confirmation is started. Thus the analysis is changed, transforming total unfavorable evidence into total favorable evidence to the logical negation of the initial pattern learned. This analysis process is kept working cyclically by the return actions of the algorithm. We will next see some learning and unlearning characteristics of the lPANC, through test results.
9.3 Results obtained in the training of a Learning Paraconsistent Artificial Neural Cell (lPANC)
We saw that a Learning Cell is trained to learn the Truth or the Falsehood pattern. For the learning of the Falsehood pattern the following equation is used:
μE(k+1) = {[μ1C - (μE(k)C)·lF] + 1} / 2
where μ1C = 1 - μ1, μE(k)C = 1 - μE(k) and 0 ≤ lF ≤ 1.
And for the learning of the Truth pattern the following equation is used:
μE(k+1) = {[μ1 - (μE(k)C)·lF] + 1} / 2
where μE(k)C = 1 - μE(k) and 0 ≤ lF ≤ 1.
When the cell starts an unlearning process, a reduction in the value of the Resultant Degree of Evidence signals this condition. When this reduction is detected, the Unlearning Factor ulF is utilized in the equation. In this condition, the equation for the unlearning of the Falsehood pattern is as follows:
μE(k+1) = {[μ1C - (μE(k)C)·ulF] + 1} / 2
where μ1C = 1 - μ1, μE(k)C = 1 - μE(k) and 0 ≤ ulF ≤ 1.
The equation for the unlearning of the Truth pattern is:
μE(k+1) = {[μ1 - (μE(k)C)·ulF] + 1} / 2
where μE(k)C = 1 - μE(k) and 0 ≤ ulF ≤ 1.
The following tests will show how, and under what conditions, the inclusion of distinct factors in the learning and in the unlearning will give conditions for a
Paraconsistent Artificial Neural Cell (PANC) to quickly learn, and slowly forget, the pattern in training.
9.4 Training of a lPANC with the maximum values of the Learning (lF) and Unlearning (ulF) Factors
When the Learning and Unlearning Factors are adjusted to their maximum values (lF=1.0 and ulF=1.0), we say that the PANC has a natural capacity for learning and unlearning. We will see that this natural capacity will increase as the adjustments of the lF and the ulF get close to 0.0. The complete learning and unlearning algorithm was applied in a Learning Paraconsistent Artificial Neural Cell using patterns with binary values at the input and unitary values of Learning and Unlearning Factors. The values obtained are in figure 9.7.
Figure 9.7 Table with the results of the application of the complete algorithm of the lPANC.
Initially, the value 1.0, representing the Truth pattern, was successively applied at the cell input until the Resultant Degree of Evidence at the output reached the value 1.0, meaning that the learning of the Truth pattern was completed. Next, the Falsehood pattern, represented by the value 0.0, was applied repeatedly until the cell output presented a Resultant Degree of Evidence of 0.0, indicating that the cell unlearned the
Truth pattern and, at the same time, it considers the logical negation of the Truth as the new pattern; therefore, the Falsehood pattern. The next step of the algorithm is the application of the Operators NOT and Complement through which the learning of the Falsehood pattern is then confirmed. The values of the Degrees of Evidence obtained from the paraconsistent analysis performed by the complete learning algorithm may be visualized in the graph of figure 9.8.
Figure 9.8 Graph representing the values obtained in the learning and unlearning process of the lPANC (lF = 1.0 and ulF = 1.0; over 24 steps, the Resultant Degree of Evidence μE rises from 0.5 through 0.75, 0.87 and 0.93 towards 1.0 during learning, and falls through 0.25, 0.125 and 0.039 to 0.0 during unlearning).
In this graph, the results of the learning and the unlearning of the Paraconsistent Artificial Neural Cell present a curve in which the values of the Resultant Degree of Evidence vary monotonically before entering saturation. We verify that the results have the characteristics of the activation function of the Artificial Neurons utilized in Classical Artificial Neural Networks. We will see that the activation functions are used to model the action potential, which is the information exchange process, in the form of voltage pulses, of the Biological Neurons.
9.4.1 Simplified Representation The Learning Paraconsistent Artificial Neural Cells are represented by simplified symbols in the interconnections of the Paraconsistent Neural Networks presented in figure 9.9. The graph shows the behavior of the output signal in the application of a repetitive pattern at the input. At the instant t0 the output is an Indefinition with a Resultant Degree of Evidence 0.5. From instant t0 to instant t1 a monotonic behavior happens at the output, so that then, the Resultant Degree of Evidence remains constant at instant t2. From instant t3 to instant t4, the application of the inverse pattern occurred at the input and consequently the output signal decreased to 0.0, so that the confirmation of the new learned pattern occurred at instant t4.
Figure 9.9 Simplified symbol and the characteristic output graph of the Learning Paraconsistent Artificial Neural Cell (lPANC): with factors lF and ulF, the output μE(k+1) starts at the Indefinition 0.5 at instant t0, rises monotonically until t1, remains constant at 1.0 until t3 and falls to 0.0 between t3 and t4 under the inverse pattern.
These results indicate that the application of the paraconsistent analysis equation naturally produces values similar to those obtained by the activation function of the Perceptron, so that a Paraconsistent Neural Network may present good results in a simpler and more practical way.
9.4.2 lPANC tests with variations in the values of the Learning lF and Unlearning ulF Factors
We will now present a study of the cell behavior when variations of the Learning Factor lF are made. These test results may be extended to variations in the adjustment of the Unlearning Factor ulF. The tests were carried out with significant values capable of representing the PANC behavior when variation of the Learning Factor lF occurs. It is verified that, in the application of an input pattern of unitary value, when the value of the Learning Factor is minimum, lF = 0.0 (or when ulF = 0.0), the cell has the maximum capacity of learning (or unlearning). In these adjustment conditions, the Resultant Degree of Evidence reaches its maximum value 1.0 in the first application of the pattern in a learning process, and its minimum value 0.0 in an unlearning process. With the purpose of verifying the influence of the Learning Factor on a PANC, the results for several pattern applications were calculated. The values are presented in the table and in the graph of figure 9.10. Values above 1.0 or below 0.0 are not admitted, according to what was defined in the study of the Paraconsistent Annotated Logic (PAL) by the fundamentals and methodology of the PAL2v, applied to the cell signal as well as to a Paraconsistent Artificial Neural Network; this must hold for any adjustments in the Paraconsistent Artificial Neural Cells. However, from the learning equation we verify that if the value of the Learning Factor were, for example, lF = 2, this would make the cell lose the learning capacity and the Resultant Degree of Evidence would always keep the Indefinition value 0.5.
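These limiting behaviors are easy to check numerically. In the small sketch below (variable names ours), the Truth-pattern recurrence μE(k+1) = {[1 - (1 - μE(k))·lF] + 1} / 2 is iterated from the virgin state for several factors; with lF = 2 it reduces to μE(k+1) = μE(k), so the cell stays at 0.5.

```python
for lf in (0.0, 0.5, 1.0, 2.0):
    mu = 0.5                     # virgin cell
    for _ in range(20):          # Truth pattern applied repeatedly
        mu = ((1.0 - (1.0 - mu) * lf) + 1.0) / 2.0
    print(lf, round(mu, 6))      # lf = 0.0 learns at once; lf = 2.0 stays at 0.5
```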
Figure 9.10 Table and graph of the test results with variations of the Learning Factor lF. (The graph plots μE against the number of applications K for lF = 0, 0.25, 0.5, 0.75 and 1.0, rising from 0.5 and saturating at 1.0.)

Factor      Characteristic
lF = 0      Step Function
lF = 0.25   Sigmoid Function
lF = 0.50   Sigmoid Function
lF = 0.75   Sigmoid Function
lF = 1.00   Sigmoid Function (Natural Learning)
9.4.3 lPANC tests with applications of several patterns of different values and maximum Learning Factor
In the training process, a Learning Paraconsistent Artificial Neural Cell recognizes, learns and unlearns patterns, which may be of two kinds: the Falsehood pattern, represented by the value 0.0, and the Truth pattern, represented by the value 1.0. When a lPANC is trained with the Truth pattern, it responds to the following equation:
μE = {[μ1A - (μ1BC)·lF] + 1} / 2
where μ1BC = 1 - μ1B, considering 0 ≤ lF ≤ 1.
After the lPANC is trained, when pattern μ1A=1.0 is applied, the condition obtained in the training considers the value μ1B as being the same as the output μE=1.0. If we apply an input pattern with a value different from 1.0, as this value is presented
repetitively at the input, the lPANC will compute the values through the equation and respond with an output signal which will gradually equal the value applied. In the same way, when a lPANC is trained with the Falsehood pattern, it will be capable of responding to the equation:
μE = {[μ1AC - (μ1BC)·lF] + 1} / 2
where μ1AC = 1 - μ1A, considering 0 ≤ lF ≤ 1.
In a lPANC trained to detect the Falsehood pattern, when values different from 0.0 are presented, the output result will be the complement of the input signal, after several applications. This occurs due to the complement that the input signal μ1A receives in the equation. The table in figure 9.11 shows the values obtained from two lPANCs, one trained to recognize, learn and unlearn the Falsehood pattern and the other trained for the Truth pattern, with applications of significant values between 0.0 and 1.0. A Learning Factor lF equal to 1.0 was considered to obtain the values exposed in figure 9.11. The values found demonstrate that the Learning Paraconsistent Artificial Neural Cell (lPANC) is able to recognize patterns even if the signals come impregnated with noise. One of the advantages of using the Learning Paraconsistent Artificial Neural Cell is the possibility of external adjustments for optimization, providing models of Paraconsistent Neural Networks with a functioning similar to that of the human brain. Tests done with computational tools allow the analysis of the lPANC to be performed with pattern values different from 1.0, where we verify that there will only be total learning of the cell when the Learning Factor lF is equal to 1.0. This means that in the development of Paraconsistent Artificial Neural Cell projects, in some cases, it will be preferable to work with normalized values. With this procedure, we obtain total efficiency of the lPANC in its learning/unlearning function; in this case, after the signal treatment, the unnormalized signal is recovered, obtaining the final result of the analysis. The graphs obtained from these tests show that when the Learning Factor is adjusted to its maximum value, lF = 1.0, the number of applications or cycles necessary to make the lPANC totally learn the pattern is around 10, regardless of the value of the pattern applied at the input. Still, in relation to the number of steps or applications of the input patterns, according to the values in the table, we verify that when the Truth pattern is applied the following approximate values are obtained, where n is the number of pattern applications:

Learning Factor    Number of applications necessary
lF = 1.00          n = 12
lF = 0.75          n = 9
lF = 0.50          n = 7
lF = 0.25          n = 5
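As a check on the trained-cell response described above, the recurrence can be iterated directly. The sketch below (names ours) reproduces the first values of the figure 9.11 column in which a cell trained for the Truth pattern receives the pattern 0.25 repeatedly, with lF = 1.0:

```python
mu_e = 1.0                   # cell already trained for the Truth pattern
for _ in range(5):           # the pattern 0.25 is applied repeatedly
    mu_e = ((0.25 - (1.0 - mu_e)) + 1.0) / 2.0
    print(round(mu_e, 9))    # 0.625, 0.4375, 0.34375, 0.296875, 0.2734375
```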
In practice, when there is a guarantee that the pattern value will not change, one may use these results to train the cell without the need to monitor the value of the output Resultant Degree of Evidence. Thus, in the learning algorithm, only the
Learning Factor will be utilized, and the verification is established by the number of times the pattern is applied, because it is guaranteed that the lPANC has completely learned when it reaches this number of applications.
Figure 9.11 Table with the resulting values from the application of different patterns in a Learning Cell (lF = 1.0). For the cell trained for the Truth pattern, μE = {(μ1A - μ1BC) + 1} ÷ 2; for the cell trained for the Falsehood pattern, μE = {(μ1AC - μ1BC) + 1} ÷ 2.

Pattern μ1A   Step   Trained for Truth   Trained for Falsehood
0.00          1      0.000000000         1.0000000000
0.25          1      0.625000000         0.8750000000
0.25          2      0.437500000         0.8125000000
0.25          3      0.343750000         0.7812500000
0.25          4      0.296875000         0.7578125000
0.25          5      0.273437500         0.7539062500
0.25          6      0.261718750         0.7519531250
0.25          7      0.255859375         0.7509765662
0.25          8      0.252929687         0.7504882500
0.25          9      0.251464843         0.7502441400
0.25          10     0.250732420         0.7501220700
0.25          11     0.250366200         0.7500610350
0.25          12     0.250000000         0.7500000000
0.50          1      0.750000000         0.7500000000
0.50          2      0.625000000         0.6250000000
0.50          3      0.562500000         0.5625000000
0.50          4      0.531250000         0.5312500000
0.50          5      0.515625000         0.5156250000
0.50          6      0.507812500         0.5078125000
0.50          7      0.503906250         0.5039062500
0.50          8      0.501953125         0.5019531250
0.50          9      0.500976562         0.5009765620
0.50          10     0.500000000         0.5000000000
0.75          1      0.875000000         0.6250000000
0.75          2      0.812500000         0.4375000000
0.75          3      0.781250000         0.3437500000
0.75          4      0.757812500         0.2968750000
0.75          5      0.753906250         0.2734375000
0.75          6      0.751953125         0.2617187500
0.75          7      0.750976566         0.2558593750
0.75          8      0.750488250         0.2529296870
0.75          9      0.750244140         0.2514648430
0.75          10     0.750122070         0.2507324200
0.75          11     0.750061035         0.2503662000
0.75          12     0.750000000         0.2500000000
1.00          1      1.000000000         0.0000000000
9.5 Final Remarks
A Paraconsistent Artificial Neural Network (PANNet), just like the traditional Artificial Neural Networks, is modeled to process signals, inspired by the functioning of the human brain. To establish this function, the PANNet has the Learning Paraconsistent Artificial Neural Cell (lPANC) which, when activated through a learning algorithm, presents as response signals with characteristics identical to the positive and symmetric sigmoid functions utilized in the artificial neuron, the Perceptron. Besides being represented by very simple algorithms, the Paraconsistent Artificial Neural Cells (PANCs) have the advantage of providing conditions for external adjustments through the variation of the Learning lF and Unlearning ulF Factors. The possibility of adjustment variation enables the cell to give different answers through changes in its functional characteristics. This adjustment provides a variation in the response speed and in the waveform of the output signal, which goes from the characteristic of a step function to that of a monotonic function, with or without symmetry.
It was seen that, through an algorithm that utilizes a repetition process in the pattern applications, a lPANC may be trained to learn and unlearn signals. In the tests, where several significant values were utilized, it was seen that the Learning Paraconsistent Neural Cells can learn and unlearn patterns established by any real value between 0.0 and 1.0. The capacity to learn, unlearn and store patterns is a very important characteristic of the Paraconsistent Cells, and it was well defined by the results presented in the tests with the learning algorithm. The values found prove the efficiency of the Paraconsistent Neural Cells and enable them to compose Paraconsistent Neural Networks in real and concrete applications.
It must be highlighted that the results obtained in the tests, and exposed in the graphs and tables, show that the Learning Paraconsistent Artificial Neural Cell (lPANC) has a functioning similar to the behavior of the Biological Neurons; therefore, it offers a good tool for the modeling of the human brain through the Paraconsistent Artificial Neural Network (PANNet). The results show that the lPANC output characteristics, besides strongly resembling the artificial neuron models presented in the studies of classical Artificial Neural Networks, have the advantage of not using complex mathematical equations, which would restrict their applications. This easy representation allows Paraconsistent Neural Units to be created, inspired by the models of classical Artificial Neurons. The learning algorithm that utilizes the results shown in the last graph suggests an interesting way of using the lPANC, in which the number of applications of the input patterns is monitored. In the following chapters, we will study the use of this form to build a computational program for lPANC training inserted in Paraconsistent Artificial Neural Cell models.
Exercises
9.1 What are Learning Paraconsistent Artificial Neural Cells used for?
9.2 Describe the fundamental configuration of the Learning Paraconsistent Artificial Neural Cell (lPANC).
9.3 What is considered a Truth pattern in the training process of a Learning Paraconsistent Artificial Neural Cell (lPANC)?
9.4 What is considered a Falsehood pattern in the training process of a Learning Paraconsistent Artificial Neural Cell (lPANC)?
9.5 Describe the function of the Learning Factor lF in the training process of a Learning Paraconsistent Artificial Neural Cell (lPANC).
9.6 Describe the function of the Unlearning Factor ulF in the training process of a Learning Paraconsistent Artificial Neural Cell (lPANC).
9.7 Write the equation utilized for the training of a Paraconsistent Artificial Neural Cell to carry out the learning of a Falsehood pattern.
9.8 What are the procedures for a Learning Paraconsistent Artificial Neural Cell (lPANC) to learn a Falsehood pattern?
9.9 Write the equation utilized for the training of a Paraconsistent Artificial Neural Cell to carry out the learning of a Truth pattern.
9.10 Describe the procedures for a Learning Paraconsistent Artificial Neural Cell (lPANC) to learn a Truth pattern.
9.11 Develop the executable program, in language C or another common programming language, for the learning of the Truth pattern in the lPANC.
9.12 Develop the executable program, in language C or another common programming language, for the learning of the Falsehood pattern in the lPANC.
9.13 From the complete learning algorithm of the lPANC, develop the executable program, in language C or another programming language, for the learning and unlearning of the cell.
9.14 Describe what functions are obtained at the output of the Learning Paraconsistent Artificial Neural Cell (lPANC) with changes in its Learning Factor lF.
9.15 Draw the graphs of the signal variation obtained at the output of the Learning Paraconsistent Artificial Neural Cell (lPANC) with changes in its Learning Factor lF.
9.16 Comment on the results obtained through the tests of the complete learning algorithm of the lPANC.
9.17 How many applications of an input pattern of constant value are necessary for the complete learning of a lPANC?
CHAPTER 10
Paraconsistent Artificial Neural Units

Introduction
The procedures to carry out analyses among Degree of Evidence values, based on Paraconsistent Annotated Logic (PAL), show that the Paraconsistent Artificial Neural Cells (PANC) have a functional behavior similar to classical models of Biological Neurons, like those used in Artificial Neural Networks. With the interconnection of the cells, different kinds of configurations of Paraconsistent Artificial Neural Units (PANU) are obtained. They have distinct functions, like connection, learning, memorization, competition and selection, among others. These PANU always perform the treatment of signals representative of Degrees of Evidence, through paraconsistent analysis based on the equations obtained from the PAL2v methodology. The first PANU will be the Para-Perceptron, which is composed of a Learning Paraconsistent Artificial Neural Cell (lPANC) interconnected with other paraconsistent cells, constituting a special Unit. Thus, we may define a PANU as a small collection of Paraconsistent Artificial Neural Cells, which analyze and model electrical or logical signals inspired by the behavior of Biological Neurons. These basic interconnected Units will structure and give support to the Paraconsistent Artificial Neural Network projects we will study in the next chapters.
10.1 Para-Perceptron - The Paraconsistent Artificial Neuron
We saw that the Paraconsistent Artificial Neural Cells utilize simple equations for analysis. They are easily implemented and efficient in the training process and in pattern recognition. In previous chapters, the test results with the cells demonstrated the possibility of utilizing them to model the functional behavior of parts of the Biological Neuron. The Paraconsistent Artificial Neuron, or Para-Perceptron, which we will study next, is a set composed of several cells, which we call a Paraconsistent Artificial Neural Unit (PANU). The PANU is a configuration of several types of connected Paraconsistent Artificial Neural Cells, which presents a global functioning with characteristics similar to the familiar functions of the human brain neuron. Five types of Paraconsistent Artificial Neural Cells, already studied in the previous chapters, are utilized to compose the Para-Perceptron:
1- Analytical Paraconsistent Artificial Neural Cell (aPANC): In the Para-Perceptron, it will promote the synapse function of the Biological Neuron.
2- Learning Paraconsistent Artificial Neural Cell (lPANC): Due to its special characteristics, in the Para-Perceptron it will promote the function of modeling the space and temporal principles of the stimuli, characterized as the internal phenomenon that occurs in the cell body of the Biological Neuron.
3- Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD): From the trainings carried out by the Para-Perceptron, it will allow a signal to be triggered at the output through an activation value applied as input.
4- Paraconsistent Artificial Neural Cell of Simple Logic Connection (PANCSiLC): It will have the function of controlling the Para-Perceptron, determining the conditions for pattern learning and comparison.
5- Paraconsistent Artificial Neural Cell of Selective Logic Connection (PANCSeLC): It will have the function of controlling the output of the Para-Perceptron. It enables the mining of the pattern learned by the Learning Cell.
The basic function of the Simple and Selective Logic Connection Cells is to represent the not yet totally understood process of how the Biological Neuron accesses and controls the flow of the stored information.
All these interconnected Paraconsistent Artificial Neural Cells will compose the Para-Perceptron, a Paraconsistent Artificial Neural Unit which works with simple equations structured in the Paraconsistent Annotated Logic. The five types of cells will be interconnected to model a Biological Neuron. They will develop distinct functions in an emergent way; when analyzed globally, they artificially realize the functions of the neuron. At the end of this chapter, we present the test of a typical Para-Perceptron, where its capacity of processing and accessing patterns is demonstrated, with real values in the closed interval [0,1] and a functional behavior very close to that of the Biological Neuron.
10.2 The Biological Neuron
We will begin the study of a typical Para-Perceptron by making comparisons of each known function of the Biological Neuron with its corresponding function, artificially produced by the Paraconsistent Artificial Neuron. With the considerations made in this comparative study between the Biological Neuron and the Para-Perceptron, some suggestions are given for other configurations utilizing the interconnections among the Paraconsistent Artificial Neural Cells. It is known from Bioscience that the human brain is an immense and complex connection of nerve cells called neurons. Weighing approximately 1.35 kg, the brain contains some one hundred billion neurons. This immense and intricate cell network is so complex that it has not been fully understood. However, in a simplified way, we can affirm that the human brain is formed basically by sets of cells that manipulate and process information. The nerve
cells existing in the brain, besides their normal biological function, possess properties that enable the processing and transmission of information.
Figure 10.1 Human brain and neurons: a complex network of interconnected neurons where information signals flow.
For these functions, there are several kinds of neurons; however, a typical neuron may be divided and studied in three main parts:
a) the dendritic tree, formed by dendrites, which are branched projections of the neuron whose purpose is to receive information signals from other cells;
b) the cell body, or soma, containing the cell nucleus and the cytoplasm, whose function is to gather the information received by the dendrites;
c) the axon, a slender projection of the nerve cell, of relatively uniform diameter but possibly of different lengths.
Figure 10.2 shows the structure of a typical Biological Neuron.
Figure 10.2 Representation of a typical Biological Neuron (1 – axon terminal; 2 – synapses; 3 – dendrites; 4 – axon; 5 – synapses; 6 – cell body or soma).
In the human brain there is a complex network composed of billions of interconnected neurons. Information passes from one neuron to another in such a way that each one interacts with ten thousand others. Among the neurons, the information is transmitted from the axons to the dendrites; therefore, the axons are the transmitters and the dendrites are the receptors. The contact between the axons and the dendrites is due to structures called Synapses, regions with electrical and chemical activities capable of making the connection for the transmission of information. Roughly, a Synapse may be divided into two parts separated by a region called the synaptic cleft: an anterior part, composed of the axon membrane through which the information signal arrives (pre-synaptic) in the form of an electrical impulse, and a posterior part, composed of the dendrite membrane (post-synaptic), which will receive the information. As information gets to the pre-synaptic membrane in the form of an electrical impulse, it causes vesicles to appear with chemical mediators called neurotransmitters. The neurotransmitters diffuse across the synaptic cleft and, by altering the three-dimensional shapes of the receptors on the post-synaptic neuron, activate them, triggering a series of events for the occurrence of the connection. Figure 10.3 shows the scheme of a Synapse, through which the neurotransmitters reach the post-synaptic membrane across the synaptic cleft, altering its electrical polarization.
Figure 10.3 Representation of a connection Synapse for the transmission and reception of information, in the form of electrical impulses, between two Biological Neurons: the electrical impulse travels along the efferent axon to the pre-synaptic membrane, where vesicles containing neurotransmitters release them into the synaptic cleft; the neurotransmitters reach the chemical channel receptors on the post-synaptic membrane of the receptor dendrite.
The Biological Neuron receives and sends information through the synaptic connections which link the dendritic arbor to the axons that bring information from other cells. The Synapse propagates information in a single direction and, depending on the type of neurotransmitter, can enable (Excitatory Synapse) or inhibit (Inhibitory Synapse) the formation of an action potential in the axon of the receptor neuron. The action potential is responsible for sending information in the form of electrical impulses. To generate this action potential in the axon membrane, the neurons possess physical, electrical and chemical properties that create a depolarization or a hyperpolarization imposed by the pre-synaptic potentials in each Synapse. The action
potentials brought by each Synapse have different features because they are influenced by the size of the Synapse and by the amount of neurotransmitters released in each Synapse. The action potential generated in the neuron also depends on the type of Synapse received, which may be excitatory, in the case of depolarization, or inhibitory, in the case of hyperpolarization. When the membrane undergoes a depolarization strong enough to cross a certain triggering limit, an electrical impulse is generated. Figure 10.4 shows the wave shape of an action potential, which is an impulse generated in the axon of a typical neuron, with a duration that varies from microseconds to milliseconds.
Figure 10.4 Wave shape of an action potential generated in the axon of a typical neuron (amplitude on the order of -50 to +50 mV over a time scale of milliseconds; E0 = resting potential; Tn = impulse duration time; Ta = absolute refraction time, during which the production of another action potential is inhibited; Tr = relative refraction time, during which another action potential may be produced if the depolarization is of higher intensity).
Two basic principles are considered in the study of a Biological Neuron:
1 - The space integration principle of the stimuli, where the neuron response depends on the amount of excitatory and inhibitory stimuli that reach it through the dendrites.
2 - The temporal integration principle of the stimuli, which is the membrane effect in the region of each Synapse, resulting from the charge storage phenomenon. The storing occurs due to the existence of the capacitance C of the post-synaptic membrane; the capacitance gives this region the characteristics of a capacitor, which stores electrical charges whenever the Synapse is activated by an electrical impulse.
In the complex electrochemical process that occurs inside the neurons, it is known that when the neurotransmitters connect to the receptors they create conditions for a change of electrical charge, thus activating parts of the genes in the cell nucleus. The genes, which are cell structures that command the production of protein in the cell body, when activated, improve the reception of information signals. Depending on the electrochemical process that influences the genes, the absorption of the information may vary, strengthening or weakening the reception. This electrochemical process of variation in the information reception, which happens in the cell nucleus, is not yet completely understood; however, due to its effects, it is considered a memorization
process of the neurons. The functional characteristics that shape the two principles, and other effects like memorization, are determined by the physical-chemical properties of each cell; therefore, each neuron, presenting distinctions in its membranes, Synapses and neurotransmitters, will also present its own particular characteristics. These distinct and particular processes of each cell make it difficult to obtain a complete functional model to determine the computational characteristics of a neuron.
10.3 The Artificial Neuron
The first models of artificial neurons came up in 1943 and compared the similarities between the electrochemical activities of the Biological Neuron and the Boolean functions, which treat binary signals. The first model is known as the Threshold Logic Unit (TLU) and was originally proposed by McCulloch and Pitts. Figure 10.5 displays the proposed TLU, where several binary signals representing the action potentials are shown at the Input Units (Synapses).
Figure 10.5 TLU scheme by McCulloch and Pitts: binary inputs X1, X2, ..., Xn are weighted by W1, W2, ..., Wn and summed (Σ); the activation a is compared with the threshold θ to produce the binary output Y.
A differentiated weight Wi, which indicates the strength of the Synapse, is multiplied by a binary signal at each input. The results of the multiplications are then added to produce an activation value. If the activation value exceeds a certain threshold θ, a response Y is produced at the output. We then have:
a = w1·x1 + w2·x2 + ... + wn·xn, that is,
a = Σ (i = 1 to n) wi·xi
The TLU was utilized as a linear pattern discriminator, and its binary model treated the functioning of the brain as similar to that of a computer, which is structured on classical, binary logic. Based on this idea, many models were developed; however, due to the low computational results, research utilizing the binary artificial neuron model (TLU) was interrupted.
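For reference, a minimal sketch of the TLU described above (the function name and the AND-gate example are ours):

```python
def tlu(inputs, weights, theta):
    """McCulloch-Pitts Threshold Logic Unit: output 1 if the weighted sum
    of the binary inputs reaches the threshold theta, otherwise 0."""
    a = sum(w * x for w, x in zip(weights, inputs))
    return 1 if a >= theta else 0

# With unit weights and threshold 2, the TLU behaves as an AND gate:
print([tlu([x1, x2], [1, 1], 2) for x1 in (0, 1) for x2 in (0, 1)])  # [0, 0, 0, 1]
```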
From an improvement of the TLU, a neural network model called Perceptron was introduced by Rosenblatt. This model was a network of multiple neurons of the linear discriminator kind TLU, which received signals from the input Units. The Input Units performed a pre-processing in the signals utilizing Boolean functions. Figure 10.6 shows a typical representation of the Perceptron.
Figure 10.6 Representation of a Perceptron: the input patterns feed Units 1 to n through the weights Wi; each TLU sums its weighted inputs and applies a threshold, producing the outputs Y1, Y2, Y3, ...
Due to the difficulties presented by this initial model in processing certain functions, the Perceptron was improved, transforming it into a multiple layer network. The Perceptron started being configured by arranging the neurons in several layers where the signals have a single flow, directed from the input layers to the output layers. The input layers, composed of neurons which receive the first signals, are connected to the intermediary, or hidden, layers. The hidden layers receive the signals from the input layers, analyze them, and direct them to the output layers. The multiple layer networks improved the previous model because they brought the possibility of training through the use of an algorithm. The input variables may assume any real value, and the output approached the Biological Neuron model through a function named the activation function g(s). In the Artificial Neuron models used nowadays in Artificial Neural Networks, the step function or the sigmoid function, positive or symmetric, is generally used as the activation function. The sigmoid function preserves monotonicity in a region named the dynamic range; saturation occurs beyond this range. Figure 10.7 shows a representation of a multiple layer network with the outputs of the neurons controlled by the activation function g(s). The inputs X1, X2, ..., Xn and the connection weights Wi1, Wi2, ..., Win are real values, positive as well as negative. When an input tends to trigger the neuron, its respective weight is positive; if it inhibits the firing, the weight is negative.
The network training process for pattern recognition is obtained through the variation of the values of the connection weights and of the limits. There are several algorithms for the training of Perceptron networks, some of them utilizing the Hebbian conditioned training process. However, Classical Neural Networks present several difficulties to learn and classify patterns in simple and functional processes which may be applied in real systems.
Figure 10.7 Typical configuration of a multi-layer Perceptron: input, hidden and output layers of TLUs with weights Wi, Wi1 and Wi2; each unit sums its weighted inputs X1, X2, X3, ..., Xn and applies the activation function g(s), producing the outputs Y1, ...
Among these difficulties, we may cite the impossibility of controlling the weights in the hidden layers, the high number of iterations necessary to find the learning optimization point, and the large amount of memory needed to store the different values of the connection weights.
10.4 Composition of the Paraconsistent Artificial Neuron Para-Perceptron
To compose the Para-Perceptron, five different kinds of Paraconsistent Artificial Neural Cells (PANC), studied in the previous chapters, are used. Figure 10.8 presents all the symbols of the Paraconsistent Artificial Neural Cells utilized in the Para-Perceptron. The first PANC, in figure 10.8 (a), is the Analytical Paraconsistent Artificial Neural Cell (aPANC), which performs the analysis of the input signals, obtaining the Resultant Degree of Evidence value. In the Para-Perceptron, the Analytical Connection Cell may be considered responsible for the Synapse of the Biological Neuron. The second kind of PANC, in figure 10.8 (b), is the Learning Cell which, due to its intrinsic characteristics, will promote the function of modeling the space and temporal principles of the stimuli. These two principles were characterized as internal electrochemical phenomena that occur in the cell body of the Biological Neuron.
This cell receives the Resultant Degree of Evidence from the analysis carried out by the Analytical Connection Cell and establishes an integration process starting from an indefinition of value 0.5. In the integration process, the result may increase up to a maximum value of 1.0; that’s when the cell is completely trained, meaning that it has learned a pattern. The result may start from 0.5 and reduce to a minimum value of 0.0. In these conditions of output value decrease, the cell is considered as being untrained; therefore when the result is 0.0, we say that the cell totally unlearned the pattern. When the result of the output value gets to 0.0 after a learning process, it means that the cell is logically negating the pattern as well as unlearning it.
Figure 10.8 Simplified symbols of the Para-Perceptron component PANCs: (a) the Analytical Paraconsistent Artificial Neural Cell (aPANC), with inputs μ1A and μ1B, tolerance factors CtrTF and CerTF and outputs S1 = μE and S2 = φE; (b) the Learning Paraconsistent Artificial Neural Cell (lPANC), with input μ1, Learning Factor lF and output μ1r(k+1); (c) the Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD), with input μ1, Decision Tolerance Factor DecTF and output μE; (d) and (e) the Paraconsistent Artificial Neural Cell of Simple Logic Connection (PANCSiLC) in its maximization (μrMax) and minimization (μrMin) forms; (f) and (g) the Paraconsistent Artificial Neural Cell of Selective Logic Connection (PANCSeLC) in its maximization and minimization forms, with inputs μ1A and μ1B.
To confirm the logic negation, the output of the cell is taken to the value 1.0. In these conditions, modifications are done internally so that the cell adapts itself to the new learning situation. The details, the functioning and the complete learning algorithm of the lPANC were studied in the previous chapter, where it was demonstrated that the lPANC can learn and unlearn through an analysis process with discrete sampling of the value applied at the input. The third kind of cell in the composition of the Para-Perceptron is the Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD) of figure 10.8 (c). This cell has the function of setting off a maximum output value when the result
of the Learning Cell gets to a pre-determined activation value. This cell receives an input signal which can be a performance level valued between 0.0 and 1.0. The value applied at the input is compared with the Decision Tolerance Factor DecTF, and a result is presented at the output which establishes one of two values: True, equal to 1.0, or the Undefined state, equal to 0.5. In the Para-Perceptron, the Crossing Paraconsistent Artificial Neural Cell of Decision (cPANCD) functions together with the Learning Paraconsistent Artificial Neural Cell (lPANC). The fourth kind of cell utilized in the Para-Perceptron is the Simple Logic Connection Cell, presented in figures 10.8 (d) and (e). This cell has the function of controlling the output of the Para-Perceptron, determining the conditions: activated, with the output equal to 1, or deactivated, with the output equal to the learning value. The Simple Logic Connection Cell represents the not totally known process of how the Biological Neuron accesses and controls the flow of the stored information. The fifth kind of cell utilized is the Selective Logic Connection Paraconsistent Artificial Neural Cell, which has two inputs and two outputs. This cell, shown in figures 10.8 (f) and (g), performs the logic functions of maximization or minimization: it selects one of the signals to be connected to its respective output and establishes the value 0.5 for the other. In a cell of maximization, when the two signals are applied, the greater value signal has free passage and appears at its respective output, and the lower value signal becomes undefined. When the cell has the function of minimization, the lower value signal has free passage and appears at its respective output, and the greater value signal becomes undefined. When the inputs have the same value, the signal applied on the right of the cell prevails over the one on the left. The Selective Logic Connection Cell directs the signals, enabling the control and verification of the states of the Para-Perceptron. Each cell, with its respective symbol and algorithm, presents determined functions which represent functional parts of the Biological Neuron. These interconnected cells form a Paraconsistent Artificial Neural Unit (PANU) to create the Para-Perceptron. In a simplified comparison with the Biological Neuron, an analogy is initially made between a PANU composed of an Analytical Connection Cell and a Learning Cell, according to figure 10.9.
Figure 10.9 Analytical Cell interconnected to a Learning Cell in comparison to a Biological Neuron.
In this simple analogy, cell C1, which is an Analytical Paraconsistent Artificial Neural Cell (aPANC), receives the two Degrees of Evidence signals. They are analyzed and calculated, determining a Resultant Degree of Evidence. Cell C2, which is the Learning Paraconsistent Artificial Neural Cell (lPANC), receives the Resultant Degree of Evidence from the output of cell C1 and considers it as the initial pattern for the learning algorithm. By applying the learning algorithm, the output of cell C2 will only present a maximum Degree of Evidence if there are repeated coincidences in the Degrees of Evidence applied at the inputs of the Analytical Connection Cell C1. When the values applied at the inputs are different, these differences will be computed through the equations of the Analytical Cell (aPANC) and transmitted to the Learning Cell C2, where the values are weighted according to the learning algorithm. The model presented in figure 10.9 is improved by the inclusion of the Crossing Paraconsistent Artificial Neural Cell of Decision, according to figure 10.10.
Figure 10.10 Improvement of the artificial neuron with the inclusion of the Crossing Cell of Decision.
As the application of equal patterns at the inputs of the Analytical Connection Cell C1 is started, the Learning Cell C2 will receive equal values and enter the learning process. As the learning process develops, a gradual increase in the Resultant Degree of Evidence of C2 output occurs. This output value is applied at the input of the Crossing and Decision Cell C3, which receives a Decision Tolerance Factor as a level of external activation. While the Resultant Degree of Evidence value of Learning Cell C2 is low, it will not be enough to overcome the activation level established in the Crossing Cell C3 and the value of indefinition 0.5 will appear at the output. When the value of the
Resultant Degree of Evidence learned at the output of C2 exceeds the activation level of the Crossing and Decision Cell C3, a maximum value of 1 will appear at the output. This establishes two output values in the PANU: one for the Resultant Degree of Evidence, which is the learned value, and another, which is the Activated Degree of Evidence.
10.4.1 Learning algorithm with the inclusion of the Crossing Cell of Decision
/* In the Learning Cell */
1- Start: μER = 0.5   /* virgin cell */
2- Enter the initial input pattern: μEi = Pi   /* 0 ≤ Pi ≤ 1 */
3- Compute the Initial Degree of Evidence:
   μEri = ((μEi − 0.5) + 1) / 2
4- Determine the pattern to be learned by the conditions:
   If μEri = 0.5, then go to item 1 (Start).
   If μEri > 0.5, do in C3: a = μ1   /* activation value with 0 ≤ a ≤ 1 */
      μ1r(k+1) = DecTF of C3   /* connect the output of the lPANC to the Decision Factor of the cPANCD */
      Follow the learning algorithm of the Truth Pattern of C2.
   If μEri < 0.5, do in C3: a = μ1   /* activation value with 0 ≤ a ≤ 1 */
      μ1r(k+1) = DecTF of C3   /* connect the output of the lPANC to the Decision Factor of the cPANCD */
      Follow the learning algorithm of the Falsehood Pattern of C2.
5- Follow the algorithms of C1 and C3.
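Steps 3 and 4 of this listing can be checked numerically. The fragment below is a hedged C sketch of only those steps; the full learning routines of C1, C2 and C3 are in the previous chapters and are merely named here, and all identifiers are ours.

#include <stdio.h>

/* Step 3 of the listing: Initial Degree of Evidence from the input pattern. */
double initial_degree(double muEi)
{
    return ((muEi - 0.5) + 1.0) / 2.0;
}

int main(void)
{
    double Pi = 1.0;                    /* initial input pattern, 0 <= Pi <= 1 */
    double muEri = initial_degree(Pi);  /* = 0.75 for Pi = 1.0 */
    if (muEri > 0.5)
        printf("muEri = %.2f: follow the Truth Pattern learning of C2\n", muEri);
    else if (muEri < 0.5)
        printf("muEri = %.2f: follow the Falsehood Pattern learning of C2\n", muEri);
    else
        printf("muEri = 0.5: go back to Start (virgin cell)\n");
    return 0;
}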
10.5 Para-Perceptron Models

The interconnections among the Paraconsistent Artificial Neural Cells permit the construction of several Para-Perceptron models with special characteristics, which try to offer a functioning closer to the Biological Neuron. Several control procedures may be added to the previous configuration aiming at its improvement. The configuration of figure 10.11 is an example where the Simple Logic Connection Cell of maximization is connected to the outputs, where it will select one of the two signals. Thus, the output of the Para-Perceptron has only a single value signal. In this configuration, if the Crossing and Decision Cell is not yet activated, the output will present the value of the Learning Cell, that is, the Learned Resultant Degree of Evidence. If the output level of the Learning Cell is enough to activate the
Crossing and Decision Cell, the output will present the maximum value of 1.0, signaling complete learning.
Figure 10.11 Improvement of the artificial neuron with the inclusion of the Simple Logic Connection Cell of maximization.
Several configurations may be obtained for the construction of an artificial neuron able to model the Biological Neuron. With the inclusion of cells and new interconnections, one can compose different types of Paraconsistent Artificial Neural Units called Para-Perceptron. Another important analysis for a Paraconsistent Artificial Neural Network concerns the possibility of having several input signals in the Artificial Neuron. For a model of this kind, where a comparison of the functioning is made between the Biological Neuron and the various information receptor dendrites, several Analytical Connection Cells must be interconnected to increase the number of applied Degrees of Evidence signals to be analyzed and learned.
Figure 10.12 shows a suggestion of an expanded model with eight input values. The Analytical Connection Cells are interconnected and process pairs of Degree of Evidence values until the analysis reaches the input of the Learning Cell. Even or odd numbers of inputs may easily be configured by slightly changing the connections between the Analytical Connection Cells; a sketch of this pairwise reduction follows the figure caption below.
Figure 10.12 A Para-Perceptron model with expansion of inputs for the application of eight values for analysis.
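As an illustration of this expansion (our sketch, not the book's program), the loop below folds eight Degrees of Evidence through the resultant-evidence equation μE = ((μA − (1 − μB)) + 1)/2 used by the Analytical Cells of this chapter, one layer of cells per pass, until a single value reaches the Learning Cell; the tolerance-factor tests of the full cell are omitted.

#include <stdio.h>

/* Resultant Degree of Evidence of an Analytical Connection Cell:
   the second input is complemented before entering the equation. */
double apanc(double muA, double muB)
{
    return ((muA - (1.0 - muB)) + 1.0) / 2.0;
}

int main(void)
{
    double mu[8] = {1.0, 1.0, 0.75, 1.0, 1.0, 0.5, 1.0, 1.0};
    int n = 8;
    while (n > 1) {                       /* one layer of cells per pass */
        for (int i = 0; i < n / 2; i++)
            mu[i] = apanc(mu[2 * i], mu[2 * i + 1]);
        n /= 2;
    }
    printf("value reaching the Learning Cell: %.4f\n", mu[0]);
    return 0;
}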
10.6 Test of a typical Paraconsistent Artificial Neural Para-Perceptron

A table with significant values presents the results of a test based on the configuration of a typical Para-Perceptron. The configuration consists of an Analytical Connection Cell interconnected to a Learning Cell. This constitutes the nucleus of the different models presented, and of others that might be built. Figure 10.13 presents the model of the typical Para-Perceptron used in the test.
Figure 10.13 Typical Para-Perceptron utilized in the test.
Discrete value signals: 0, 0.25, 0.5, 0.75 and 1 were considered as inputs. These values were chosen for being significant for this kind of test. The table of figure 10.14 shows the values applied at the inputs and the results obtained.
Figure 10.14 Table with test results in a typical Para-Perceptron.
The results obtained with these values and this simple configuration may easily be explored for conclusive considerations when more complex configurations of Para-Perceptrons are desired. It can be seen from the first line of the table that the values tend to unity. This happens because, initially, the Learning Cell captured the Falsehood pattern as the pattern to be learned.
10.7 Other Types of Paraconsistent Artificial Neural Units (PANUs)

As seen before, the Paraconsistent Artificial Neural Units consist of Paraconsistent Neural Cells conveniently interconnected to present a determined function. Each Neural Unit is composed of a small number of Paraconsistent Artificial Neural Cells whose functioning was exposed through the algorithms and equations in the previous chapters. Besides the Para-Perceptron, various other configurations may be obtained by interconnecting cells. As examples, a series of Neural Units constructed with interconnected cells is shown next.
10.7.1 The Learning Paraconsistent Artificial Neural Unit with Activation through Maximization (lPANUAM)

The Learning Paraconsistent Artificial Neural Unit with activation through maximization is composed of a Learning Cell and a Simple Logic Connection Cell in a maximization process. An external signal ext applied at the input of the maximization cell determines the functioning of the Unit. Figure 10.15 shows an lPANUAM.
Figure 10.15 Learning Paraconsistent Artificial Neural Unit with activation through Maximization.
When the external signal ext has value 1.0, the input of the Complemented Unfavorable Degree of Evidence of the Learning Cell will get to value 1.0. The Learning Cell, with the value of the Complemented Unfavorable Degree of Evidence
equal to 1.0, presents the characteristics of a trained cell. If the input of the Favorable Degree of Evidence is complemented, the cell learned the Falsehood pattern. If the input of the Favorable Degree of Evidence is not complemented, the cell learned the Truth pattern. If the external signal ext has value 0.0, the Learning Cell is free to perform the learning or unlearning, depending on the pattern applied at its input μ1.

10.7.2 Learning Paraconsistent Artificial Neural Unit of Control and Pattern Activation (lPANUCPA)

The Learning Paraconsistent Artificial Neural Unit of Control and Pattern Activation includes a Crossing Cell, where two external signals ext1 and ext2 control the functioning of the Unit. Figure 10.16 presents a configuration of an lPANUCPA.
Figure 10.16 Learning Paraconsistent Artificial Neural Unit of Control and Pattern Activation (lPANUCPA).
Signal ext1 performs the control of the Learning Cell, which receives the pattern applied at the input. When signal ext1 is at 1.0, the functioning of the cell will be that of a trained cell, with the pattern corresponding to its configuration, as in the previous PANU. When signal ext1 is at 0.0, the cell is free to learn and unlearn patterns. Signal ext2 performs the output activation with maximum value equal to 1.0 as soon as the Learning Cell presents an output level considered totally learned. With signal ext1 at 1.0 or at 0.0, if signal ext2 is adjusted to 1.0, this forces the Learning Cell to present its memorized pattern only when it reaches value 1.0. If signal ext2 is at 0.5, the output of the Crossing Cell will present no restriction and will show the output value of the Learning Cell, whatever its learned value.
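A hedged C sketch of the ext2 gating just described, assuming, as the text indicates, that the Crossing Cell passes the maximum value 1.0 once the learned level reaches the Decision Tolerance Factor, returns the indefinition 0.5 below it, and imposes no restriction when ext2 = 0.5; all names are ours.

#include <stdio.h>

double crossing_cell(double mu, double ext2 /* acts as DecTF */)
{
    if (ext2 == 0.5)                /* no restriction: show the learned value */
        return mu;
    return (mu >= ext2) ? 1.0 : 0.5;
}

int main(void)
{
    printf("%.2f\n", crossing_cell(0.80, 1.0)); /* 0.50: not yet fully learned   */
    printf("%.2f\n", crossing_cell(1.00, 1.0)); /* 1.00: memorized pattern shown */
    printf("%.2f\n", crossing_cell(0.80, 0.5)); /* 0.80: ext2 = 0.5, no restriction */
    return 0;
}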
10.7.3 Learning Paraconsistent Artificial Neural Unit with Instantaneous Analysis (lPANUIA)

The Learning Paraconsistent Artificial Neural Unit with Instantaneous Analysis (lPANUIA) is composed of two learning cells and one analytical cell. The patterns learned by the learning cells may be of any kind, depending on the initial pattern applied at the input. Figure 10.17 shows the lPANUIA along with its simplified algorithm.
Simplified algorithm of the lPANUIA:
1- In cell 1, apply the learning algorithm to signal PA.
2- In cell 2, apply the learning algorithm to signal PB.
3- Compute: μ1Bc = 1 − μ1B.
4- Compute: μctr = (μ1A + μ1Bc) / 2.
5- Compute: φE = 1 − |2μctr − 1|.
6- Compute: μE = ((μ1A − μ1Bc) + 1) / 2.
7- Continue with the algorithm of the Analytical Cell.
Figure 10.17 Learning Paraconsistent Artificial Neural Unit with Instantaneous Analysis (lPANUIA).
The output signals of the Learning Cells are analyzed instantaneously by the Analytical Cell, which carries out a paraconsistent analysis through the PAL2v equations.
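The PAL2v analysis carried out by the Analytical Cell reduces to the equations of the listing above. A self-contained C sketch with our names follows; the cell's tolerance-factor tests are omitted.

#include <stdio.h>
#include <math.h>

/* PAL2v analysis of two Degrees of Evidence, as in the lPANUIA listing:
   complement, contradiction degree, certainty interval, resultant evidence. */
void analytical_cell(double mu1A, double mu1B, double *muE, double *phiE)
{
    double mu1Bc = 1.0 - mu1B;              /* complemented evidence */
    double muctr = (mu1A + mu1Bc) / 2.0;    /* contradiction degree  */
    *phiE = 1.0 - fabs(2.0 * muctr - 1.0);  /* certainty interval    */
    *muE  = ((mu1A - mu1Bc) + 1.0) / 2.0;   /* resultant evidence    */
}

int main(void)
{
    double muE, phiE;
    analytical_cell(1.0, 0.0, &muE, &phiE);          /* contradictory pair */
    printf("muE = %.2f, phiE = %.2f\n", muE, phiE);  /* 0.50, 0.00         */
    return 0;
}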
10.7.4 Learning Paraconsistent Artificial Neural Unit through Pattern Equality (lPANUPE)

This Unit is composed of two Paraconsistent Artificial Neural Cells: one Analytical Cell and one Learning Cell. The basic function of the lPANUPE is to learn patterns when there is repetitive equality of the signals applied at the inputs A and B. In this Paraconsistent Neural Unit, similarity is only detected between patterns of the same type, that is, repetitive equality of Falsehood patterns (0.0 at A and 0.0 at B) or repetitive equality of Truth patterns (1.0 at A and 1.0 at B). As the values of Degrees of
Evidence μ1A and μ1B are applied at the input of the Analytical Cell, the Learning Cell will totalize the results of the analysis. If there are repetitive coincidences of patterns equal to 0.0 or equal to 1.0, the output of the Learning Cell will tend to 1.0, thus detecting the equality of patterns between the two points. The occurrence of dissimilar patterns at the input results in a gradual decrease of the evidence signal value at the output. Therefore, the equality is only confirmed when the Resultant Degree of Evidence at the output reaches 1.0. Figure 10.18 shows the lPANUPE and its simplified algorithm.
Simplified algorithm of the lPANUPE:
1- Compute: P1Bc = 1 − P1B.
2- Compute: μctr = (P1A + P1Bc) / 2.
3- Compute: μE = ((P1A − P1Bc) + 1) / 2.
4- Compute: φE = 1 − |2μctr − 1|.
5- Apply the algorithm of the Analytical Cell in C1.
6- Perform the learning process of cell 2 utilizing the pattern P = μE.
Figure 10.18 Learning Paraconsistent Artificial Neural Unit through Pattern Equality (lPANUPE).
10.7.5 Learning Paraconsistent Artificial Neural Unit through Repetition of Pattern Pairs (lPANURPP)

This Unit is composed of four Paraconsistent Artificial Neural Cells: one Analytical Cell and three Learning Cells. Two Learning Cells are installed at the inputs of the Analytical Cell, permitting the Unit to capture coincidences of repetition of the signals applied at the inputs. The repetitive coincident patterns captured in this Unit may come as equal patterns, PA = 0.0 and PB = 0.0 or PA = 1.0 and PB = 1.0, or as repetitive coincidences of pairs of unequal patterns, such as PA = 0.0 and PB = 1.0 or PA = 1.0 and PB = 0.0. In any case, if there is repetitive coincidence of pattern pairs, there will be an increase of the Resultant Degree of Evidence at the output as a consequence. Figure 10.19 shows the lPANURPP and its simplified algorithm.
Simplified algorithm of the lPANURPP:
1- In cell 1, apply the learning algorithm to signal PA.
2- In cell 2, apply the learning algorithm to signal PB.
In the Analytical Cell:
3- Compute: μ1Bc = 1 − μ1B.
4- Compute: μctr = (μ1A + μ1Bc) / 2.
5- Compute: μE = ((μ1A − μ1Bc) + 1) / 2.
6- Compute: CerCSV = (1 + CerTF) / 2 and CerCIV = (1 − CerTF) / 2.
7- Apply the algorithm of the Analytical Cell.
8- Apply the learning algorithm of cell 4 using μE as pattern.
Figure 10.19 Learning Paraconsistent Artificial Neural Unit through Repetition of Pattern Pairs (lPANURPP).
10.7.6 The Paraconsistent Artificial Neural Unit with Maximum Function (PANUmaxf)

The Paraconsistent Artificial Neural Unit with Maximum Function (PANUmaxf) consists of several Simple Logic Connection Cells of maximization (OR) and a single Crossing Cell. The Crossing Cell has the function of directing the greater value Degree of Evidence signal to the output, where it is compared to a Certainty Tolerance Factor. The number of Simple Logic Connection Cells depends on the number of signals received for analysis. The maximization among the signals is done through the configurations of the Simple Logic Connection Cells, and the greater value signal has its Degree of Evidence presented at the Unit output. Figure 10.20 presents two kinds of Neural Units with Maximum Function: one for an even number of signals, and another for an odd number. To obtain the maximum among
a great number of signals applied at the unit inputs, just repeat the configurations of the Simple Logic Connection Cells, increasing the number of inputs.
Figure 10.20 Paraconsistent Artificial Neural Unit with Maximum Function for several inputs: (a) Unit with Maximum Function for an even number of signals at the input; (b) Unit with Maximum Function for an odd number of signals at the input.
10.7.7 The Paraconsistent Artificial Neural Unit with Minimum Function (PANUminf)

The Paraconsistent Artificial Neural Unit with Minimum Function (PANUminf) consists of several Simple Logic Connection Cells of minimization (AND) and a single Crossing Cell. The Crossing Cell has the function of directing the lower value Degree of Evidence signal to the output. It is then compared to a Certainty Tolerance Factor, which will define whether the minimum value is enough to carry on or whether it is considered undefined. The number of Simple Logic Connection Cells depends on the number of signals received for analysis. The functioning is identical to the Unit with Maximum
Function, with the difference that now the minimization is performed on the signals. Figure 10.21 presents two kinds of Neural Units with Minimum Function.
Figure 10.21 Paraconsistent Artificial Neural Unit with Minimum Function for several inputs: (a) Unit with Minimum Function for an even number of signals at the input; (b) Unit with Minimum Function for an odd number of signals at the input.
10.7.8 The Paraconsistent Artificial Neural Unit of Selective Competition (PANUseC)

The Selective Competition Paraconsistent Artificial Neural Unit (PANUseC) is composed of several Simple Logic Connection Cells prepared for a maximization process (OR) and a single Selective Logic Connection Cell. The number of Simple Logic Connection Cells for the maximization depends on the number of signals received for analysis. In the Paraconsistent Neural Network, the internal processing is done with Degree of Evidence signals; therefore, in the Selective Competition Paraconsistent Artificial Neural Unit, the greater value Degree of Evidence signal will be presented at the Unit output.
The signal that presents the lower value does not win the competition and will present an output value of 0.5, indicating an indefinition. Since the output cell is a Selective Logic Connection Cell in a maximization process, when the values are equal, the one applied on the right of the cell wins. Figure 10.22 shows two kinds of Selective Competition Neural Units.
Figure 10.22 Paraconsistent Artificial Neural Unit of Competition with Selection of Maximum for several inputs: (a) Competition Unit for an even number of signals at the input; (b) Competition Unit for an odd number of signals at the input.
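For two signals per side, the competition reduces to a few lines. The sketch below is ours; the tie handling follows the rule stated above, with the right-hand side winning.

#include <stdio.h>

static double max2(double a, double b) { return (a > b) ? a : b; }

/* Two simple maximization cells feed a selective cell: the winning side
   keeps its value, the losing side is set to the indefinition 0.5. */
void panusec(double muA1, double muA2, double muB1, double muB2,
             double *murA, double *murB)
{
    double a = max2(muA1, muA2);          /* PANCSiLC 1 */
    double b = max2(muB1, muB2);          /* PANCSiLC 2 */
    if (a > b) { *murA = a;   *murB = 0.5; }
    else       { *murA = 0.5; *murB = b;   }   /* ties: right side wins */
}

int main(void)
{
    double murA, murB;
    panusec(0.9, 0.2, 0.6, 0.7, &murA, &murB);
    printf("murA = %.2f, murB = %.2f\n", murA, murB);  /* 0.90, 0.50 */
    return 0;
}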
For competition among a great number of signals at the inputs, repeat the configurations of the Simple Logic Connection Cells interconnected to a Selective Logic Connection Cell.

10.7.9 The Paraconsistent Artificial Neural Unit of Pattern Activation (PANUPact)

The Paraconsistent Artificial Neural Unit of Pattern Activation (PANUPact) is composed of one Simple Logic Connection Cell in a maximization process and Crossing and Decision Cells. The number of Crossing and Decision Cells depends on
the number of patterns desired for activation. Figure 10.23 shows an Activation Unit for n activated patterns.
Figure 10.23 Paraconsistent Artificial Neural Unit of Activation (PANUPact).
One of the greater value signals applied at the Simple Logic Connection Cell of Maximization will be the limit to produce the maximum value 1 at the output of the Crossing Cells whose Learning Cells learned the pattern. If the two input signals present levels of indefinition, the Crossing and Decision Cells will present levels of indefinition 0.5 at their outputs.

10.8 Final Remarks

In this chapter we presented a few interconnections of Paraconsistent Artificial Neural Cells to compose Paraconsistent Neural Units with the functions of pattern detection, signal directing and control. In AI projects, these conveniently interconnected Units will carry out signal processing of the Degree of Evidence in a Paraconsistent Artificial Neural Network. The Paraconsistent Artificial Neural Units will be the basic components utilized in the construction of the Paraconsistent Artificial Neural Network (PANNet). The functions of the Paraconsistent Neural Units are essential for the connection of data
processed in parallel in Paraconsistent Artificial Neural Networks. The Paraconsistent Neural Units will compose the forms and give the structural dimensions for the architecture of the Paraconsistent Neural Network, whose final objective is to present a global functioning similar to the signal processing performed by the human brain. All the Units presented carry out analyses through very simple equations. This will facilitate their implementation in Neural Networks through computational programs, constituting an excellent tool for projects in the area of Neurocomputing.

In this chapter, one of the most important Units studied was the Para-Perceptron, a set of interconnected cells whose functioning and characteristics are inspired by the Biological Neuron; because of this, it is considered a Paraconsistent Artificial Neuron. With several cell configurations, the Paraconsistent Artificial Neural Para-Perceptron may be constructed in various ways, with functional characteristics close to those presented by the Biological Neuron. It is projected exclusively with Paraconsistent Artificial Neural Cells and has the capacity of receiving n analog signals of positive real values between 0 and 1, analyzing them through the fundamental equations of the Paraconsistent Annotated Logic with annotation of two values (PAL2v), and supplying results with values in the real closed interval [0,1]. It also has the capacity of learning through the Learning Cell, memorizing and unlearning patterns. In this process, the speed of learning and unlearning may be adjusted; therefore, the learning procedure may be performed slowly or quickly, controlled by external values represented by the Learning Factor lF and the Unlearning Factor ulF. The Para-Perceptron model permits the memorization of the learned patterns and offers the possibility of consulting the stored patterns from external sources.

We presented other configurations of PANUs besides the Para-Perceptron; given the number of types of Paraconsistent Artificial Neural Cells and the ease of making connections, other Units with functions different from those presented may easily be implemented. The Resultant Degree of Contradiction that appears in the Analytical Cells, or in other kinds of cells from the family presented in the previous chapters, may be utilized as a reference for analysis. Most important is that the results of the configurations demonstrate the possibility of real applications of Paraconsistent Artificial Neural Cells in signal processing in a distributed and parallel form, with characteristics analogous to the electrical signal processing which occurs in the human brain. It can be seen that these models, like the Classical Neural Networks, may easily be applied in Decision Making Systems in the area of Artificial Intelligence. The test results presented in this chapter, along with the suggestions of several configurations, permit the implementation of projects of complete Biological Neuron models, which will compose Paraconsistent Artificial Neural Networks (PANNet) for several applications in Artificial Intelligence. In the next chapters, the PANUs will be connected in Paraconsistent Artificial Neural Networks to form several analysis systems able to perform signal processing similar to determined mental functions of the brain.

Exercises
10.1 Give the definition of a Paraconsistent Artificial Neural Unit (PANU).
10.2 Make an analogy describing similar parts of a Biological Neuron and a Paraconsistent Artificial Neuron.
10.3 How is a Paraconsistent Artificial Neuron Para-Perceptron composed?
10.4 Develop, in programming language C or another common programming language, an executable program for the Para-Perceptron model of figure 10.12.
10.5 Develop, in programming language C or another common programming language, an executable program for the Para-Perceptron model of figure 10.13.
10.6 Develop, in programming language C or another common programming language, an executable program for the Para-Perceptron model of figure 10.14.
10.7 Describe the functioning of the Learning Paraconsistent Artificial Neural Unit with Activation through Maximization (lPANUAM), and develop the executable program in programming language C or another common programming language.
10.8 Describe the functioning of the Learning Paraconsistent Artificial Neural Unit of Control and Pattern Activation (lPANUCPA), and develop the executable program in programming language C or another common programming language.
10.9 Describe the functioning of the Learning Paraconsistent Artificial Neural Unit with Instantaneous Analysis (lPANUIA), and develop the executable program in programming language C or another common programming language.
10.10 Describe the functioning of the Learning Paraconsistent Artificial Neural Unit through Pattern Equality (lPANUPE), and develop the executable program in programming language C or another common programming language.
10.11 Describe the functioning of the Learning Paraconsistent Artificial Neural Unit through Repetition of Pattern Pairs (lPANURPP), and develop the executable program in programming language C or another common programming language.
10.12 Describe the functioning of the Paraconsistent Artificial Neural Unit with Maximum Function (PANUmaxf), and develop the executable program in programming language C or another common programming language.
10.13 Describe the functioning of the Paraconsistent Artificial Neural Unit with Minimum Function (PANUminf), and develop the executable program in programming language C or another common programming language.
10.14 Describe the functioning of the Paraconsistent Artificial Neural Unit of Selective Competition (PANUseC).
10.15 Develop the executable program of the Paraconsistent Artificial Neural Unit of Selective Competition (PANUseC) in programming language C or another common programming language.
10.16 Describe the functioning of the Paraconsistent Artificial Neural Unit of Pattern Activation (PANUPact).
10.17 Develop the executable program of the Paraconsistent Artificial Neural Unit of Pattern Activation (PANUPact) in programming language C or another common programming language.
CHAPTER 11
Paraconsistent Artificial Neural Systems

Introduction

The Paraconsistent Artificial Neural Systems (PANS) are modules configured and constructed exclusively with the Paraconsistent Artificial Neural Units (PANUs) studied in the previous chapter. The set of Paraconsistent Neural Units forms the Paraconsistent Artificial Neural Systems, whose function is to provide signal treatment similar to the processing that occurs in the human brain. Depending on the interconnections and the types of the component PANUs, the Paraconsistent Artificial Neural Systems acquire special properties to treat and compute the signals that will be processed by the network. Two types of Systems, formed by various interconnected PANUs of distinct functions, are used in this study:

a) Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl). This System is configured to process signals according to Hebb's learning rules. Within these concepts, the cells are conditioned to present certain patterns at the output. These patterns are obtained through the repetition of coincidences of patterns applied at the input.
b) Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT). This System promotes continuous contradiction treatment among the information signals, based on the concepts of the Paraconsistent Annotated Logic.
11.1 Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl)

The Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) presented in this book is composed of four types of Paraconsistent Artificial Neural Cells: three Learning Cells, one Analytical Connection Cell, one Decision Cell, and two Simple Logic Connection Cells for a maximization process. The interconnections among these different kinds of cells form the Artificial Neural Units of Pattern Learning, whose functioning, together with other Units, makes associations among the signals applied simultaneously at the inputs. These signal associations create a Conditioned Learning process in the cells similar to Hebbian learning. All the neural Cells belonging to the System are based on the Paraconsistent Annotated Logic. The simplicity of the equations involved makes it easier to implement the Paraconsistent Artificial Neural System in computational programs of decision-making Systems in AI.
11.1.1 Conditioned Learning

The studies about the brain indicate that the biological neurons have the capacity to learn. The biological neural Systems work in a dynamic process, which in some way modifies their structures to incorporate new information, and thus they acquire abilities in a constant learning process. The Paraconsistent Artificial Neural Systems try to process data similarly to the electrical signal processing which happens in the human brain. Following this line of thought, we will study the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl), which is projected to present a functioning that models conditioned or reinforced learning.
11.1.1.1 Conditioned Learning
The Conditioned Learning process was first demonstrated through Pavlov's experiments. Based on his studies, conditioned or reinforced learning considers that associations among sensory patterns that have occurred previously may be utilized to modify behavior.
Figure 11.1 Ivan Pavlov (1849-1936) Russian physiologist who investigated conditioned reflexes in animals.
The demonstration of classical Pavlovian learning involved a dog, a dish of food and a bell. Initially, the dog was presented with a dish of food, and, through a surgical process carried out earlier on the dog, the saliva produced in the digestive system was measured. Without the food, a bell was rung, and it was verified that the sound of the bell did not cause salivation in the dog. The dish of food was then presented to the dog repeated times along with the sound of the bell. After a certain time, it was verified that the sound of the bell alone was enough to cause salivation. With this experiment, Pavlov succeeded in conditioning a sound stimulus to an action of salivation which initially did not exist. The dog's natural process of salivating at the sight of the food is called a non-conditioned stimulus. The learning with a conditioned stimulus occurred during the time the bell rang while the food was presented.
11.1.1.2 Hebb's Learning rules

The results of Pavlov's experiments were extended to the studies of biological neurons, which resulted in Hebb's learning rules, among the most important functions modeled by a Classical Artificial Neural Network. Hebbian Learning is a continuation of this research line, extended to the functioning of the neural cells or biological neurons. It is structured on the following hypothesis: "When an axon of a cell A is close enough to excite a cell B, and, repetitively or persistently, takes part in firing this cell, some growth process or metabolic change occurs in one or both cells, such that the efficiency of A, as a cell capable of triggering B, is increased." Once the existence of Conditioned Learning was proved, it was verified that it was possible to model it by means of an Artificial Neural Network. The basic functioning of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) tries to follow the fundamental principles of Hebb's Conditioned Learning hypothesis. Based on Pavlov's experiments and Hebb's learning rules, the PANSCl functions by processing signals, always considering pattern associations which appear repetitively in parts of the Paraconsistent Artificial Neural Network.
11.1.1.3 The component cells of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl)

Since the functioning of the paraconsistent cells was studied in previous chapters, in this section we only summarize the main characteristics of the cells that compose the Paraconsistent Artificial Neural System of Conditioned Learning. The first cell utilized in the PANSCl is the Learning Paraconsistent Artificial Neural Cell (lPANC), represented in figure 11.2 (a). It is able to learn and unlearn any pattern represented by a real value between 0.0 and 1.0. Considering that only the values 0.0, representing the Falsehood Pattern, and 1.0, representing the Truth Pattern, are applied at the input, the cell is able to detect, at the beginning of the application of the values, which pattern will be learned, and to prepare for the learning. The second is the Analytical Paraconsistent Artificial Neural Cell (aPANC), represented in figure 11.2 (b). This cell has the function of processing signals in accordance with the Paraconsistent Annotated Logic and making the interconnections with the others. Two factors, the Contradiction Tolerance Factor CtrTF and the Certainty Tolerance Factor CerTF, determine the output characteristic; therefore, in this cell, the output depends on the conditions of the Degrees of Certainty and of Contradiction. The third is the Paraconsistent Artificial Neural Cell of Decision (PANCD), represented in figure 11.2 (c). This cell has the function of defining an output signal by comparing the Decision Tolerance Factor with the value of the Resultant Degree of Evidence. The Decision Factor splits into two limit values, inferior and superior. If the calculated Resultant Degree of Evidence is above the superior limit, the output is True, and the Resultant Degree of Evidence assumes 1.0. If the calculated Resultant Degree of Evidence is below the inferior limit, the output is False, and the output Resultant Degree of Evidence assumes value 0.0. When the calculated Resultant Degree of Evidence is between the superior and inferior limits, the output is an indefinition of value 0.5.
Figure 11.2 Types of Paraconsistent Artificial Neural Cells utilized in the PANSCl: (a) lPANC; (b) aPANC; (c) PANCD; (d) PANCSiLC.
The fourth is the Paraconsistent Artificial Neural Cell of Simple Logic Connection for a maximization process (PANCSiLC), represented in figure 11.2 (d). The PANCSiLC receives two distinct input signals and connects the greater value signal to its output. The Resultant Degree of Evidence value defines which of the two signals has the greater value. The maximization process is done through the comparison of the obtained Resultant Degree of Evidence with the indefinition value 0.5. If the Resultant Degree of Evidence is greater than or equal to 0.5, the Degree of Evidence of signal A is equal to or greater than the Degree of Evidence of signal B; if the calculated Resultant Degree of Evidence is lower than 0.5, the Degree of Evidence of signal A is lower than the Degree of Evidence of signal B.
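These two cells are easy to express directly. In the C sketch below (our names), the Decision Cell limits follow the TLV/FLV formulas given with the worked example later in this section, and the maximization comparison is written as μR = ((μA − μB) + 1)/2, which is one way of realizing the stated property that μR ≥ 0.5 exactly when A ≥ B.

#include <stdio.h>

/* Decision Cell: compare the resultant evidence with the superior (TLV)
   and inferior (FLV) limit values derived from the Decision Tolerance Factor. */
double pancd(double muE, double DecTF)
{
    double TLV = (1.0 + DecTF) / 2.0;
    double FLV = (1.0 - DecTF) / 2.0;
    if (muE > TLV) return 1.0;      /* True         */
    if (muE < FLV) return 0.0;      /* False        */
    return 0.5;                     /* indefinition */
}

/* Simple Logic Connection Cell, maximization: muR >= 0.5 means A >= B. */
double pancsilc_max(double muA, double muB)
{
    double muR = ((muA - muB) + 1.0) / 2.0;
    return (muR >= 0.5) ? muA : muB;
}

int main(void)
{
    printf("%.3f\n", pancd(0.875, 0.7));        /* 1.000: above TLV = 0.85 */
    printf("%.3f\n", pancsilc_max(0.75, 1.0));  /* 1.000                   */
    return 0;
}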
11.2 Basic Configuration of the PANSCl
The Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) will be formed by the interconnection of three Learning Cells, two Analytical Cells and two Logic Connection Cells of maximization. It will also present two inputs where patterns represented by values between 0.0 and 1.0 will be applied. For a detailed account of the basic functioning of a PANSCl, and using the basic concepts of PAL, value 0.0 is considered to represent the Falsehood Pattern and value 1.0 the Truth Pattern. The Conditioned Learning of the Paraconsistent System will follow steps similar to those verified in Pavlov's experiments, whose results were important for the development of the Conditioned Learning of the biological neurons proposed by Hebb. Initially, we may consider a Paraconsistent Artificial Neural System of Conditioned Learning as a single module where two patterns are applied and an output signal is obtained, according to figure 11.3.
Pattern A PA = Visual Stimulus (food); Pattern B PB = Sound Stimulus (bell); Output μE = Level of salivation.
Step 1: PA applied, PB = 0.5 → μE high
Step 2: PA = 0.5, PB applied → μE low
Step 3: PA applied, PB applied → μE high
Step 4: PA = 0.5, PB applied → μE high
Figure 11.3 Comparison between the PANSCl and Pavlov's experiment.
Following the Conditioned Learning proposed by Hebb, pattern B (sound stimulus) initially presents a value of indefinition 0.5, and pattern A (food), appearing repetitively at the input, provokes a Resultant Degree of Evidence μE (salivation) equal to 1, representing a natural conditioning of the System. In the second step, pattern B (sound stimulus) is active and pattern A (food) inactive. Since there has not been a conditioning of the System, the Resultant Degree of Evidence μE (salivation) is of low value. In the third step, pattern B (sound stimulus) is active and appears, repeated times, together with pattern A (food). The Resultant Degree of Evidence μE (salivation) is high, simulating the Conditioned Learning of the System. The fourth step of the training leaves pattern A (food) inactive, with a value of indefinition 0.5, while pattern B (sound stimulus) keeps being applied at the input. In this fourth step, the result of the Conditioned Learning appears when, even with the disappearance of pattern A (food), the value of the output Resultant Degree of Evidence μE (salivation) remains high. In figure 11.4, the complete diagram of the PANSCl composed of interconnected Paraconsistent Artificial Neural Cells is presented. To make the understanding easier, let us describe the functioning by considering a test where signals applied at the inputs of the PANSCl, represented by real values between 0.0 and 1.0, are utilized as patterns. According to the configuration in figure 11.4, the Learning Cells C1 and C2 are the receptors of the patterns applied at the inputs. Initially, pattern PA, applied insistently to Learning Cell C1, provokes an increase at output SA. When cell C1 is totally trained, it is considered to have learned the pattern, and output SA is equal to 1.0. In this initial process, pattern PB applied to Learning Cell C2 is undefined, with value 0.5; therefore, the value at output SB remains undefined.
Figure 11.4 Diagram of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl).
The output value of the Analytical Cell C3 is calculated through the equation of the Resultant Degree of Evidence. Considering the condition of cell C1 totally trained and cell C2 undefined, output SC of C3 is worth:
SC = ((1 − 0.5) + 1) / 2 = 0.75
The output of cell C3 is applied to the input of the Learning Cell C4, which learns the value of SC applied at its input. Since in this initial process the maximum value that will appear in SC is 0.75, cell C4 will present at most the value SD = 0.75 at its output. The value 0.75 appears at the inputs of the two Logic Connection Cells of maximization, C5 and C6. In cell C5, we have the result of the Learning Cell C1, which is SA = 1.0, and the result of the Analytical Cell C3, which is worth SC = 0.75; therefore, as C5 is a cell which performs maximization, we will have the value 1.0 at output SE. In cell C6, another cell that performs maximization, we have its inputs SB = 0.5 and SC = 0.75; therefore, at output SF, we will have value 0.75. In cell C7, the value of the output Resultant Degree of Evidence is calculated through the equation of the Resultant Degree of Evidence, therefore:
μE = ((1 − 0.25) + 1) / 2 = 0.875
Cell C7 is a Decision Cell where the Decision Tolerance Factor defines whether the output is True, False or Undefined. Consider, for example, that the Decision Factor is DecTF = 0.7. Then the superior and inferior limits are calculated:
TLV = (1 + DecTF) / 2 and FLV = (1 − DecTF) / 2
giving the Truth Limit Value TLV = 0.85 and the Falsehood Limit Value FLV = 0.15. In this step, the output of the Paraconsistent Artificial Neural System is considered high; therefore, the Degree of Evidence expresses a True situation of value 1.0.
Consider now the situation in which only pattern PB is applied, and the input of pattern PA remains with a value of indefinition 0.5. As cell C1 has already been trained, when the undefined value of pattern PA is first applied there will now be a feedback of the Unfavorable Degree of Evidence at its input, where we have value 1.0. This value is complemented and enters the equation of the Resultant Degree of Evidence with value 0.0. Therefore, output SA is calculated through the equation:
SA = ((0.5 − 0) + 1) / 2 = 0.75
The output value SB of cell C2 will be calculated in the same way, except that C2 has not been trained; therefore, the output that will promote the feedback of the Unfavorable Degree of Evidence is an indefinition of value 0.5, and the Degree of Evidence that will enter the equation of the Resultant Degree of Evidence is pattern PB. Let us consider that cell C2 is prepared to learn the Truth Pattern; therefore, if PB = 1.0, the output result is:
SB = ((1 − 0.5) + 1) / 2 = 0.75
Like the values of SA and SB, the output value SC of the Analytical Connection Cell is found through the equation of the Resultant Degree of Evidence, therefore:
SC = ((0.75 − 0.25) + 1) / 2 = 0.75
In the first step, the Learning Cell C4 only learned up to 0.75; therefore, it has not learned the Truth Pattern, and we may consider that its output, initially 0.75, will be complemented, appearing in the equation of the Resultant Degree of Evidence as 0.25. We can calculate the output of cell C4 through the equation:
SD = ((0.75 − 0.25) + 1) / 2 = 0.75
In the two Logic Connection Cells of maximization C5 and C6 we will have SE= 0.75 and SF = 0.75 as output, respectively. At the output of the Decision Cell C7, the value obtained is calculated through:
μER = ((0.75 − 0.25) + 1) / 2 = 0.75
According to the adjustment of the Decision Tolerance Factor taken as an example, the value 0.75 will be considered undefined by the Decision Cell C7; therefore, the analysis expresses a situation of indefinition of value 0.5. If these values are repetitively applied at the inputs, the value of cell C1 will get to a minimum value of indefinition equal to 0.5, and output SD of cell C4 decreases; shortly after, it will suffer a gradual increase until it gets to the maximum value of 0.75. In the end, the values at the inputs of the Decision Cell C7 will be SE = 0.75, due to the maximization performed by cell C5, and SF = 1.0, because of the learning of cell C2, resulting, therefore, in the maximum output value μE = 0.875, considered a True situation. When the two patterns PA and PB appear simultaneously and repetitively, cells C1 and C2 learn their respective patterns, and outputs SA and SB remain with the high value of 1.0 as a result of this learning. Cell C3, which is an Analytical Connection Cell, receives the values from SA and SB at its inputs; therefore, utilizing the equation of the Resultant Degree of Evidence, it is easily verified that, when these values are high, the value of its output SC is also high. Cell C4 receives an input signal of value 1.0 and learns the Truth Pattern. As the coincidences of the two patterns applied at its two inputs occur, it will gradually increase the value of output SD until it gets to a high value of 1.0. If all the inputs of the maximization cells present value 1.0, the value of the Resultant Degree of Evidence at the output of the Decision Cell C7 is μ1R = 1.0, and the output of the System is considered True. In these conditions, a Conditioned Learning was created. Even if one of the patterns is missing, the high output of cell C4, which learned the Truth
Pattern, generated by the detection of the repetition of input patterns, prevails in the maximization cells C5 and C6, maintaining the conclusion. As an example, let us suppose that, after establishing the Conditioned Learning, one of the patterns goes missing. For this calculation, we choose pattern PA to remain undefined, with value 0.5. At cell C1 output we will have:
SA = ((0.5 − 0) + 1) / 2 = 0.75
(0.5 − 0) + 1 = 0.75 2
SB =
(1 − 0) + 1 = 1.0 2
SC =
(0.75 − 0) + 1 = 0.875 2
SD =
(0.75 − 0.5) + 1 = 0.625 2
At cell C3 input:
At cell C4 input:
At cell C5 input, we have the maximum value between SA and SD: SE = 0.75 At cell C6 input, we have the maximum value between SB and SD: SE = 1.0 At cell C7 input:
μER = ((0.75 − 0) + 1) / 2 = 0.875
As the value of the Resultant Degree of Evidence is above the superior limit of decision, the conclusion is the high value considered as Truth. If this situation of indefinition of pattern A persists, the value at the cell C4 input will decrease until it gets to minimum SD= 0.75, and the minimum value at the output of the Decision Cell will be μE = 0.75. The System may unlearn the Conditioned Learning if pattern PA applied at the input of cell C1 changes from the state of indefinition to a state of Falsehood. According to what was defined, the Falsehood Pattern is applied with value PA= 0.0. The occurrence of repeated applications of pattern PA with value 0.0 makes the System forget the Conditioned Learning in which it was initially trained and be trained again with patterns different from those originally learned. With the repeated applications of PA= 0.0, output SA decreases gradually until it gets to 0.0. At cell C4 input, the value gets to SD= 0.5 and in the maximization cell C5 we would have the output value SE= 0.5, provoking an output of the Decision Cell of μE = 0.75, therefore a situation of indefinition. In this situation, according to what was seen in the description of the functioning of the Learning Cell, the confirmation of the new pattern learned by cell C1 will happen and its output SA is changed to value 1.0. With this procedure, foreseen in the Learning Algorithm, the tendency of the Resultant Degree of Evidence signal at the output of the System will increase until it gets to 1.0, constituting the new Conditioned Learning.
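All the numerical steps above reuse the same resultant-evidence equation, so they can be checked mechanically. A short verification in C of the values from the first training step follows (our code; the complement of the second input is passed explicitly):

#include <stdio.h>

/* muE = ((mu1 - mu2c) + 1)/2, where mu2c is the complemented second input. */
double res(double mu1, double mu2c)
{
    return ((mu1 - mu2c) + 1.0) / 2.0;
}

int main(void)
{
    double SC  = res(1.0, 0.5);        /* C3: C1 trained, C2 undefined -> 0.750 */
    double SE  = 1.0;                  /* C5: max(SA = 1.0, SC = 0.75)          */
    double SF  = 0.75;                 /* C6: max(SB = 0.5, SC = 0.75)          */
    double muE = res(SE, 1.0 - SF);    /* C7 -> 0.875                           */
    printf("SC = %.3f, muE = %.3f\n", SC, muE);
    return 0;
}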
11.2.1 Test with PANSCl
The Paraconsistent Artificial Neural System of Conditioned Learning may be projected with modifications in the functioning of the Learning Cells in several ways. For example, the cells may function freely, that is, they keep learning or unlearning according to the patterns detected at the inputs. Thus, the System becomes dynamic and modifies itself according to the patterns applied at the inputs. The cells may also have their output values memorized in such a way that, when the learning is completed, the cell is considered trained and the value of 1 is maintained at the output as the Unfavorable Degree of Evidence. The System may also be projected for a hybrid functioning, in order to work in both ways. Figure 11.5 shows a table with the values obtained in the steps of the Conditioned Learning of the PANSCl, highlighting the learning and unlearning capacity of the System.
Figure 11.5 Table with test values with PANSCl.
Figure 11.6 shows the graph results obtained in the tests. Conditioned Learning is one of the most important functions in signal processing based on the functioning of the human brain. Hence, the PANSCl demonstrates that the pattern association process with Conditioned Learning may easily be implemented, with satisfactory results, utilizing simple mathematical processes and totally structured by the Paraconsistent Annotated Logic. Hebbian Learning provides the Paraconsistent Artificial Neural Networks with conditions for making associations between parts of the network which deal with different propositions but which bring similarities in their information; this may be relevant for the parallel processing of connections.
The Paraconsistent Artificial Neural System of Conditioned Learning is an innovation in the area of Neuroscience because it permits the implementation of projects with signal treatment very similar to the processing carried out by the brain.
Figure 11.6 Graph results of tests carried out by the PANSCl.
11.3 Paraconsistent Artificial Neural System and Contradiction Treatment (PANSCtrT)
Classical Systems based on binary logic encounter difficulties in processing data or information that come from uncertain knowledge. These data, which are pieces of information captured or received from several experts, generally come in the form of evidence and bring many contradictions. The Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) is a module composed of conveniently interconnected Paraconsistent Artificial Neural Cells whose purpose is to carry out instantaneous analysis among three evidence signals using the concepts of the Paraconsistent Annotated Logic.
11.3.1 Pattern Generator for the PANSCtrT

In the Paraconsistent Artificial Neural Network (PANNet), the three pattern signals PA, PB and PC applied at the input of the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) may be the resulting outputs of a set of Real Analytical Paraconsistent Artificial Neural Cells, previously studied and summarized as follows. The Real Analytical Paraconsistent Artificial Neural Cell (RaPANC) has the role of removing the effect produced by the Contradiction from the value of the Resultant Degree of Evidence. Therefore, after the analysis, the cell presents at the output a refined Resultant Degree of Evidence named the Real Degree of Evidence. In the RaPANC, the output results in a real, filtered Degree of Evidence: a pure value from which the value corresponding to the Contradiction existing in the input signals has already been removed. Values in these conditions are capable of being compared to others. The PANSCtrT will receive six signals applied to three Analytical Cells, which will perform the initial paraconsistent analysis. This preliminary analysis will result in three evidence signals, all of which are free from the effects of the contradictions existing in each pair of signals. These output signals will be the patterns applied to the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT). The three Analytical Cells form a pattern-generating block, as shown in the configuration of figure 11.7.
[Figure: three RaPANCs, A, B and C, receive the evidence pairs μ1A and μ1B, μ2A and μ2B, μ3A and μ3B, and their outputs SA, SB and SC become the pattern signals PA, PB and PC.]
Figure 11.7 Generation of pattern signals through RaPANC for the PANSCtrT.
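Since the pattern generator relies on the RaPANC analysis summarized above, a small numerical sketch may help. It assumes the PAL2v definitions from earlier chapters (Certainty Degree DC = μ1 − μ2 and Contradiction Degree Dct = μ1 + μ2 − 1) and one common form of the Real Certainty Degree; the function name and the exact discounting formula are our illustrative reading, not a transcription of the book's cell algorithm.

    #include <math.h>
    #include <stdio.h>

    /* Illustrative sketch of a Real Analytical PANC (RaPANC). Assumed here:
       DC  = mu1 - mu2        (Certainty Degree)
       Dct = mu1 + mu2 - 1    (Contradiction Degree)
       DCR = Real Certainty Degree, obtained by discounting the distance
             that the contradiction introduces in the PAL2v lattice.      */
    double rapanc(double mu1, double mu2)
    {
        double DC  = mu1 - mu2;
        double Dct = mu1 + mu2 - 1.0;
        double d   = sqrt((1.0 - fabs(DC)) * (1.0 - fabs(DC)) + Dct * Dct);
        double DCR = (DC >= 0.0) ? (1.0 - d) : (d - 1.0);
        return (DCR + 1.0) / 2.0;   /* Real Degree of Evidence */
    }

    int main(void)
    {
        printf("%.4f\n", rapanc(1.0, 0.0));  /* no contradiction: 1.0000     */
        printf("%.4f\n", rapanc(0.8, 0.4));  /* contradiction lowers the     */
                                             /* evidence to about 0.68,      */
                                             /* instead of (DC + 1)/2 = 0.70 */
        return 0;
    }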
11.3.2 PANSCtrT Block Diagram

A Paraconsistent Artificial Neural System for Contradiction Treatment (PANSCtrT) is considered a module composed of eight interconnected cells: four Analytical Cells and four Simple Logic Connection Cells, of which two are maximization cells and the other two are minimization cells.
The PANSCtrT has three inputs to receive the real-valued signals PA, PB and PC, which vary between 0.0 and 1.0 and which were generated by the pattern-generating block described above. The PANSCtrT has one single output, which presents the result of the analysis as a Resultant Degree of Evidence μE. Figure 11.8 shows the representative block of a PANSCtrT.

[Figure: block with inputs PA, PB and PC and output μE.]
Figure 11.8 PANSCtrT representative block.
By using the concepts of PAL studied in the previous chapters, an input signal of value 1.0 is considered the expression of a True proposition, and a signal of value 0.0 is considered a False proposition. Indefinition is represented by a signal of value 0.5. The three signals, when leaving the pattern generator block, are applied at the inputs of the PANSCtrT, where they are instantaneously analyzed by neural cells. These cells will first verify and detect contradictions among them. If the three signals present equal values, obviously there is no Contradiction. However, as they vary from 0.0 to 1.0 and are independent, there may be contradictions at various levels, in a continuous analysis process. In any situation, whether there are contradictions or not, a signal representing the Resultant Degree of Evidence of the paraconsistent analysis of the three values is obtained at the output. In a Paraconsistent Artificial Neural Network, the resultant signal of the PANSCtrT may be sent to other regions of the network so that the other Systems perform new analyses, which may be conclusive for making a decision.
11.3.3 The basic configuration of the PANSCtrT

The configuration of a Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) is composed of eight cells: four Analytical Connection Cells, two Simple Logic Connection Cells for a maximization process, and two Simple Logic Connection Cells for a minimization process. The types of neural cells in the PANSCtrT are practically the same as those utilized in the PANSCl. The Simple Logic Connection Cells that are now cells of minimization differ from those presented before by presenting at their output the lower of the two values applied at their inputs.
[Figure: the patterns PA, PB and PC feed three analytical cells (aPANC) C1, C2 and C3, each with factors CtrTF and CerTF and output φE, producing the signals SA, SB and SC; Simple Logic Connection Cells (PANCSiLC) C4 and C6 (Max) and C5 and C7 (Min) produce the signals SD, SE, SF and SG; a final analytical cell C8 produces the output μE.]
Figure 11.9 Paraconsistent Artificial Neural System of Contradiction Treatment.
The Paraconsistent Artificial Neural System of contradictory analysis (PANSCtrT) will receive three input signals and will present a result value through a paraconsistent analysis. This value will be the consensus among the three signals. The contradictions existing between two values are added to the third value in such a way that the value proposed by the majority prevails at the output. In the PANSCtrT, the analysis may be done by the cells instantaneously, in real time.

Patterns PA, PB and PC are real values varying between 0.0 and 1.0 which represent information concerning the same proposition. When a pattern presents value 1.0, the information is representative of a True proposition. When a pattern presents value 0.0, it is a representative signal of a False proposition. An Undefined proposition is represented by a pattern of value 0.5. When the three signals present value 1.0, they are supplying information concerning the proposition with a connotation of Truth, without contradictions. If the three signals present value 0.0, the information will give a connotation of Falsehood to the proposition. If there are differences between the values of two signals, there is Contradiction; therefore a third expert, whose information will help in decision making, must be consulted. In the Paraconsistent Artificial Neural System of Contradiction Treatment, the consultation with a third expert happens instantaneously and continually. In this form of analysis, all the pieces of information, the contradictory ones and that of the third expert, are relevant and therefore considered in the result.

The first layer of the System is composed of three Analytical Cells, C1, C2 and C3, whose signals are analyzed through the equation of the Resultant Degree of Evidence, obtaining the output signals SA, SB and SC:

SA = ((PA − PBC) + 1) / 2, where PBC = 1 − PB
SB = ((PB − PCC) + 1) / 2, where PCC = 1 − PC
SC = ((PC − PAC) + 1) / 2, where PAC = 1 − PA

In the internal layers, cells C4 and C6 constitute a neural unit of maximization of three inputs, and cells C5 and C7 constitute Neural Units of minimization. In the Neural Unit of maximization, the highest value among the outputs SA, SB and SC obtained by the analyses carried out by the cells of the first layer will appear at output SG. In the Neural Units of minimization, the resultant output value SF will be the lowest among SA, SB and SC. Cell C8 utilizes the equation of the Resultant Degree of Evidence to carry out the last analysis between the signals presented at outputs SF and SG. Therefore, the equation utilized by cell C8 is:

μER = ((SF − SGC) + 1) / 2, where SGC = 1 − SG
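Because the whole analysis reduces to the equations above, it can be reproduced in a few lines of code. The sketch below, in C (the language suggested in the exercises of this chapter), chains the three first-layer cells, the maximization and minimization units and the final cell C8; the function names are ours, and the printed values can be checked against the table of figure 11.10.

    #include <stdio.h>

    static double max3(double a, double b, double c)
    {
        double m = (a > b) ? a : b;
        return (m > c) ? m : c;
    }

    static double min3(double a, double b, double c)
    {
        double m = (a < b) ? a : b;
        return (m < c) ? m : c;
    }

    /* Resultant Degree of Evidence of the contradiction treatment */
    double pansctrt(double PA, double PB, double PC)
    {
        /* First layer: analytical cells C1, C2 and C3 */
        double SA = ((PA - (1.0 - PB)) + 1.0) / 2.0;
        double SB = ((PB - (1.0 - PC)) + 1.0) / 2.0;
        double SC = ((PC - (1.0 - PA)) + 1.0) / 2.0;
        /* Internal layers: maximization (C4, C6) and minimization (C5, C7) */
        double SG = max3(SA, SB, SC);
        double SF = min3(SA, SB, SC);
        /* Output cell C8: last analysis between SF and SG */
        return ((SF - (1.0 - SG)) + 1.0) / 2.0;
    }

    int main(void)
    {
        printf("%.4f\n", pansctrt(0.0, 0.5, 0.5));  /* prints 0.3750 */
        printf("%.4f\n", pansctrt(1.0, 1.0, 1.0));  /* prints 1.0000 */
        return 0;
    }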
11.3.4 Tests with PANSCtrT

Tests made with the PANSCtrT applied values considered relevant for the analysis. The table of figure 11.10 presents the results obtained when the values 1.0, 0.75, 0.5, 0.25 and 0.0 are applied at the inputs PA, PB and PC. Each block fixes PA; rows give PB and columns give PC.

PA = 0.00 (values of μE):
PB \ PC   0.00    0.25    0.50    0.75    1.00
0.00      0.0000  0.0625  0.1250  0.1875  0.2500
0.25      0.0625  0.1875  0.2500  0.3125  0.3750
0.50      0.1250  0.2500  0.3750  0.4375  0.6250
0.75      0.1875  0.3125  0.4375  0.5625  0.6250
1.00      0.2500  0.3750  0.5000  0.6250  0.7500

PA = 0.25 (values of μE):
PB \ PC   0.00    0.25    0.50    0.75    1.00
0.00      0.0625  0.1875  0.2500  0.3125  0.3750
0.25      0.1875  0.2500  0.3125  0.3750  0.4375
0.50      0.2500  0.3125  0.4375  0.5000  0.5625
0.75      0.3125  0.3750  0.5000  0.6250  0.6875
1.00      0.3750  0.4375  0.5625  0.6875  0.8125

PA = 0.50 (values of μE):
PB \ PC   0.00    0.25    0.50    0.75    1.00
0.00      0.1250  0.2500  0.3750  0.4375  0.5000
0.25      0.2500  0.3125  0.4375  0.5000  0.5625
0.50      0.3750  0.4375  0.5000  0.5625  0.6250
0.75      0.4375  0.5000  0.5625  0.6875  0.7500
1.00      0.5000  0.5625  0.6250  0.7500  0.8750

PA = 0.75 (values of μE):
PB \ PC   0.00    0.25    0.50    0.75    1.00
0.00      0.1875  0.3125  0.4375  0.5625  0.6250
0.25      0.3125  0.3750  0.5000  0.6250  0.6875
0.50      0.4375  0.5000  0.5625  0.6875  0.7500
0.75      0.5625  0.6250  0.6875  0.7500  0.8125
1.00      0.6250  0.6875  0.7500  0.8125  0.9375

PA = 1.00 (values of μE):
PB \ PC   0.00    0.25    0.50    0.75    1.00
0.00      0.2500  0.3750  0.5000  0.6250  0.7500
0.25      0.3750  0.4375  0.5625  0.6875  0.8125
0.50      0.5000  0.5625  0.6250  0.7500  0.8750
0.75      0.6250  0.6875  0.7500  0.8125  0.9375
1.00      0.7500  0.8125  0.8750  0.9375  1.0000
Figure 11.10 Table with the test results of the PANSCtrT, applying several values at inputs PA, PB and PC.
The table in the previous figure presents the values obtained at the output μE with the Contradiction Tolerance Factor CtrTF = 1.0 and the Certainty Tolerance Factor CerTF = 1.0. These values are adjusted this way so as not to influence the results of the analysis. Figure 11.11 presents the same results from the table in graph form for better visualization.

[Figure: graphs of the input patterns PA, PB and PC and of the output μE, each on a 0.0 to 1.0 scale, plotted against Kn.]
Figure 11.11 Test results with the PANSCtrT in graph form.
The results obtained in the tests with the PANSCtrT show that the system is capable of processing the signals in real time, reducing the contradictions rapidly and instantaneously. In large Paraconsistent Artificial Neural Networks, it is possible to interconnect Systems of this kind to process signals in a way similar to the processing of the human brain.
11.4 Final Remarks

The Systems presented in this chapter demonstrate the application potential of the Paraconsistent Artificial Neural Cells (PANCs). We verify that, when the PANCs are conveniently interconnected, they may compose configurations that provide treatment of uncertain signals, even when contradictory. Two kinds of Paraconsistent Artificial Neural Systems (PANSs), which may be used in Uncertainty Treatment Networks with PAL2v, were studied in this chapter as suggestions for application in AI.

The first kind was the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl), which performs a signal treatment based on the conditions imposed by the Hebbian learning law. The System of Conditioned Learning, based on the concepts of Pavlov's experiments and the Hebbian learning rules, is of extreme importance for the modeling of brain functions. The second kind was the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT), which continuously analyzes the signals applied at the inputs, deciding about contradictory situations. The PANSCtrT is an effective contribution to the study of neural networks because it presents an innovative method of contradictory signal treatment.

The results show, objectively and practically, new forms of application of Neural Networks in which contradictory signal treatment may be performed in real time using components of easy construction in computation projects. We verify that the Paraconsistent Artificial Neural Systems were totally constructed with Paraconsistent Artificial Neural Cells and are described by simple mathematical equations, making their implementation in software or hardware easier. The Systems presented were accompanied by tests whose resulting values show their efficiency and applicability. We believe this unprecedented method of signal treatment based on the theory of the Paraconsistent Annotated Logic, due to its great simplicity when compared to other methodologies, will permit new neural biological functions to be better understood and interpreted. Due to the ease of constructing the cells in Neurocomputation projects, the possibilities of application are great, opening a vast field for new and promising research in this area.

Exercises
11.1 Explain how the Paraconsistent Artificial Neural Systems (PANSs) are constituted.
11.2 How many Paraconsistent Artificial Neural Cells (PANCs) compose the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl)?
11.3 What kinds of Paraconsistent Artificial Neural Cells (PANCs) compose the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl)?
11.4 What is the process known as "Non-conditioned Learning"?
11.5 What does "Non-conditioned stimulus" mean in the Conditioned Learning process?
11.6 Describe the Hebbian learning rule.
11.7 Describe the process known as "Conditioned Learning".
11.8 How does the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) work in relation to Pavlov's experiments and Hebb's learning rules?
11.9 Give the kind and the main characteristics of the Paraconsistent Artificial Neural Cells (PANCs) that compose the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl).
11.10 Outline the configuration of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) with all its component PANCs.
11.11 Describe the functioning of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl).
11.12 Highlight the importance of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) in Neurocomputation.
11.13 Develop the executable program of the Paraconsistent Artificial Neural System of Conditioned Learning (PANSCl) in the programming language C, or another common programming language.
11.14 What is a Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT)?
11.15 How is the pattern generator for a Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) composed?
11.16 Why is the Real Analytical Paraconsistent Artificial Neural Cell (RaPANC) used to compose the pattern generator for the PANSCtrT?
11.17 Draw the configuration and describe the functioning of the pattern generator for the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT).
11.18 Develop the executable program of the pattern generator for the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) in the programming language C, or another common programming language.
11.19 How many Paraconsistent Artificial Neural Cells compose the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) to analyze three patterns?
11.20 Give the kind and the main characteristics of the Paraconsistent Artificial Neural Cells (PANCs) that compose the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT).
11.21 Outline the configuration of the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) with all the component Paraconsistent Artificial Neural Cells (PANCs).
11.22 Describe the functioning of the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT).
11.23 Describe the use of the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) in a decision-making network.
11.24 Highlight the importance of the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) in Neurocomputation.
11.25 Develop the executable program of the Paraconsistent Artificial Neural System of Contradiction Treatment (PANSCtrT) in the programming language C, or another common programming language.
CHAPTER 12
Architecture of the Paraconsistent Artificial Neural Networks
Introduction

The Paraconsistent Artificial Neural Networks (PANNet) are data processing systems inspired by the physical organization of the human brain. When compared to the digital circuits that work in computers, biological neurons are slow, but they form an enormous network of intensely interconnected cells, all operating at the same time, so that the processing is rapidly concluded. According to studies developed in this area, it is known that this parallel functioning is what provides the brain with the conditions to execute extremely complex tasks, such as image interpretation and sound comprehension, in a small fraction of a second.

It has been verified that the computer, despite being quick at signal switching, has trouble treating noisy information, such as diffuse, contradictory and ambiguous data. When the computer is forced to deal with these situations, because it uses Classical Logic with its binary characteristics, the number of steps needed to finalize an analysis and reach a conclusion increases, slowing the response. Another important detail is that, unlike the computer, which has difficulties with the loss of a single bit of information, the human brain distributes information among the neurons, and as a result the whole processing system is tolerant to failures. Therefore, the short time the brain takes to execute these tasks is due to parallel processing which, unlike the computer's sequential functioning, demands a small number of steps to perform the interpretation and execution. Tolerance to failure is one of the most important characteristics of the human brain, for if the neurons in one brain area were destroyed, the neurons of another area would take over the processing of the destroyed part, which denotes the existence of a distributed memory.

Working in a parallel way, the human brain is always comparing and searching for pattern matches that were in some way previously acquired. When matching occurs, the comparison is not done by using the concepts of Classical Logic, where only two situations are accepted, but always by approximate means. Another characteristic of brain function, besides the comparative analysis, is that biological neurons have their own behavior. The local situation and each neuron's particular action contribute to form an extensive or complete action. This characteristic is called emergent behavior: each element or processing cell makes decisions based on the local situation, and the local decisions made by each cell are gathered to form a global decision.

Considering the characteristics found in studies of the human brain, when models that simulate its procedures are wanted, it must be taken into account that the circuits that constitute the computer were designed based on Classical Logic. Therefore, in the computer, the signal will be processed following the concepts of a binary logic where binary bits represent only two states: False and True. However, it is known that the human brain works differently from the computer and, for that reason, the difficulty in obtaining a satisfactory model is great. The solution for finding more adequate models is to use Non-Classical Logics, which are not bound by the inflexible binary rules of Classical Logic. Thus, in this proposal of a neural network architecture, we will use the concepts of the Paraconsistent Annotated Logic with annotation of two values (PAL2v), which accepts and treats inconsistent situations. The results obtained in the previous chapters demonstrated that the Paraconsistent Annotated Logic (PAL), when analyzing information, has characteristics closer to those of human beings and therefore proves to be a great tool for the modeling of the brain. We saw that the greater efficiency shown by PAL is due to its structure: it accepts and considers logical states not permitted by Classical Logic but representative of real situations, such as contradiction and lack of definition. Unlike the computer, human beings solve all these situations in a small amount of time with barely any difficulty.

From these results, the architecture of a Paraconsistent Artificial Neural Network (PANNet) is suggested in this chapter. The Paraconsistent Artificial Neural Units will be interconnected to form separate modules, each called a Paraconsistent Artificial Neural System (PANS) of knowledge acquisition. In practice, various systems with different configurations may be connected in parallel to receive and analyze patterns applied at the input. With this, signal treatment in a parallel and distributed way is permitted in the PANNet.
12.1 Proposal of the Paraconsistent Artificial Neural Networks Architecture

The different types of Paraconsistent Artificial Neural Cells (PANC) studied so far permit various forms of architecture, with different interconnections among the units and systems, to be designed. Next, a proposal for the architecture of a Paraconsistent Artificial Neural Network (PANNet) will be described using a few models of the Paraconsistent Artificial Neural Systems and Units presented in previous chapters.

The architecture of a Paraconsistent Artificial Neural Network (PANNet) tries to follow the brain functioning model and performs signal treatment and data manipulation in a parallel and distributed way. We saw that, following the procedures of the methodology of application of the Paraconsistent Annotated Logic (PAL), the interconnections among the Paraconsistent Artificial Neural Cells form Units (PANUs) with well-defined functions. Some Paraconsistent Artificial Neural Units (PANUs) will work in the PANNet as peripheral components, whose role is to direct signals and control the data flow during processing. Other Units are interconnected to constitute Paraconsistent Artificial Neural Systems (PANS) of internal processing, which manipulate data by performing analysis, learning and pattern association. All the procedures carried out by the PANUs, both for signal management and for value analysis, are based on calculations using the equations of the Paraconsistent Annotated Logic with annotation of two values (PAL2v).

The suggested architecture is basically composed of four types of main modules: three of them disposed in parallel to learn, compare and analyze signals, and one logical reasoning module to control the network. Figure 12.1 shows the flow and directing of the signals, which represent patterns applied to a module of the PANNet. This module will be interconnected with the other modules of the network, which obviously have different treatment systems and knowledge acquisition.
[Figure: a module receives the Input Patterns and Control 1 into the Paraconsistent Artificial Neural Unit of Primary Learning and Patterns Consultation (factors lF and ulF); a Paraconsistent Artificial Neural System of Analysis and Knowledge Acquisition (factors CtrTF and CerTF) exchanges the Degrees of Evidence μE1, μE2, μE3, ..., μEn with the other modules; the Paraconsistent Artificial Neural Unit of Pattern Activation (Control 2) and the Paraconsistent Artificial Neural Unit of Selective Competition produce the Output Patterns.]
Figure 12.1 Design of a separate module of the proposed architecture of the Paraconsistent Artificial Neural Network (PANNet)
The three types of modules disposed in parallel, presented initially, are:
1) Primary learning and pattern consultation modules. These modules are constituted by Neural Units of primary cells that have the role of recognizing, mastering and consulting the learned pattern.
2) Signal directing modules for analysis execution. These modules are constituted by Neural Units whose roles are to compare and direct the signal flow to determined parts of the network.
3) Analysis and knowledge acquisition modules. These modules are constituted by Neural Units that analyze the signal values according to Paraconsistent Logic, detect patterns, and acquire knowledge through internal interconnections and training.
These modules are interconnected, forming small nuclei connected in parallel in the Paraconsistent Artificial Neural Network (PANNet). Hence, in the PANNet the modules differ from one another precisely due to their respective PANS, which, during learning, carries out the knowledge acquisition for a particular analysis.
The Certainty and Contradiction Tolerance Factors, as well as the learning and unlearning Factors of the cells that appear in the architecture, are values sent by a fourth module, the Paraconsistent Artificial Neural System of Logical Reasoning (PANSLR). These factors are created and sent externally by the PANSLR, which controls the PANNet. When applied to a module, the factors enable the adjustment of characteristics important to the functioning of the network as a whole.
12.1.1 Description of the PANNet Functioning

In a summarized way, in the signal processing of a PANNet, all the modules receive the same input patterns and, simultaneously, carry out the synthesis and the analysis necessary to reach a conclusion. A Resultant Degree of Evidence, represented by a value obtained through the PAL2v equations, is the conclusion of every PANNet module. Afterward, the particular conclusion offered by each module is analyzed by a competition system that selects the most viable one, represented by the highest Resultant Degree of Evidence, to be considered. Figure 12.2 presents the architecture of the Paraconsistent Artificial Neural Network (PANNet). This architecture is represented by the modules and blocks of Paraconsistent Artificial Neural Units (PANUs), and its functioning is described in detail as follows.

In this proposed architecture, it is possible to describe the functioning of a typical Paraconsistent Artificial Neural Network (PANNet) through three functional states: Learning, Comparison and Consultation.

Learning. In the functional state of Learning, each PANNet module, through training, separately learns the patterns applied repeatedly at the input.

Comparison. In the functional state of Comparison, the PANNet recognizes patterns. The recognition is done by comparing the patterns applied at the input with the patterns previously learned and stored in the network modules.

Consultation. In the functional state of Consultation, the PANNet isolates a particular small-nucleus module and permits its patterns to be activated only by an internal control signal. This state may be considered the procedure of consultation, or recovery, of data previously learned.

In the normal functioning of the PANNet, the values of control signals 1 and 2 that appear in the figures determine the network functional state. Other operators, such as Negation or Complementation, may be added to the modules to improve the network control.
[Figure: the Input Patterns and the Control 1 and Control 2 signals feed the Paraconsistent Artificial Neural Unit of Primary Learning and Pattern Consultation (factors lF and ulF), the Analysis and Knowledge Acquisition Modules (factors CtrTF and CerTF), the Paraconsistent Artificial Neural Unit of Pattern Activation and the Paraconsistent Artificial Neural Unit of Selective Competition, which produce the Output Patterns.]
Figure 12.2 Architecture of the Paraconsistent Artificial Neural Network (PANNet)
12.2 Learning, Comparison, and Signal Analysis Modules of the PANNet

Next, we present a detailed description of each functional unit that constitutes the three modules of Learning, Comparison, and Signal Analysis of the Paraconsistent Artificial Neural Network.
12.2.1 Paraconsistent Artificial Neural Unit of Primary Learning and Pattern Consultation

In the Paraconsistent Artificial Neural Network Architecture, the PANU of primary learning and consultation receives the patterns applied at the input; therefore, by analogy with a brain model, it represents the brain area of sensorial reception. The patterns are pieces of information captured by sensors, which in the case of human beings represent the senses: sight, hearing, taste, smell and touch, which, in turn, transform the physical features of the environment into electrical signals to be processed in the brain.

In the PANNet, the patterns are represented by signals that come in the form of a value. This value may be treated according to the Paraconsistent Annotated Logic, where the pieces of information are the Favorable and Unfavorable Degrees of Evidence, with real values between 0.0 and 1.0. For a better description of the process, consider the patterns as primary signals where value 1.0 is equivalent to the Truth pattern and value 0.0 means a Falsehood pattern.

As seen before, the Paraconsistent Artificial Neural Cells, through a training process, may learn and unlearn patterns. This procedure is obtained through an algorithm based on the Paraconsistent Annotated Logic, which first provides the cell with the pattern to be learned and then processes, in a systematic way, the learning of the Falsehood and Truth patterns when these are repeatedly applied at the input. In the PANU of Primary Learning and Pattern Consultation, the pattern to be learned is recognized in the learning cell, which is interconnected to a Crossing Cell. A Logical Connection Cell of maximization may control the Learning/Crossing pair of cells. These three Paraconsistent Artificial Neural Cells constitute the PANU of Primary Learning and Pattern Activation, as suggested in the configuration of figure 12.3.
Figure 12.3 Primary Learning Unit and Activation of pattern A
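As a concrete illustration of the learning-by-repetition just described, the sketch below simulates training on the Truth pattern. It assumes a Learning Cell recurrence of the form μE(k+1) = (μ1 − lF·(1 − μE(k)) + 1) / 2, consistent with the Degree of Evidence equation used throughout the book; treat the exact recurrence and the starting value as assumptions, since the Learning Cell algorithm is defined in earlier chapters.

    #include <stdio.h>

    int main(void)
    {
        double muE = 0.5;   /* the cell starts at indefinition   */
        double lF  = 1.0;   /* assumed maximum Learning Factor   */
        for (int k = 0; k < 10; k++) {
            double mu1 = 1.0;  /* Truth pattern applied repeatedly */
            /* assumed learning recurrence, see note above */
            muE = (mu1 - lF * (1.0 - muE) + 1.0) / 2.0;
            printf("step %2d: muE = %.4f\n", k + 1, muE);
        }
        /* muE approaches 1.0: the cell has learned the Truth pattern */
        return 0;
    }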
If the input of the Favorable Degree of Evidence of C1 is complemented, it is a learning cell of the Falsehood Pattern, where a repeated input value of 0.0 results in a Resultant Degree of Evidence of 1.0 at the output. If the Favorable Degree of Evidence input of C1 is not complemented, then it is a cell trained to recognize the Truth Pattern; therefore, when value 1.0 is applied repeatedly at the input, a maximum Resultant Degree of Evidence is obtained at the output.

The two signals ctr1 and μR control the functioning of the Unit. Signal ctr1 carries out the control, defining the functional state of the Network. When signal ctr1 is 1.0, Learning cell C1 receives value 1.0 through maximization cell C2. The value 1.0 received from cell C2 is complemented at the Degree of Evidence input of C1, giving it the characteristics of a trained cell. This way, the instant of the beginning of the analysis may be synchronized. When signal ctr1 is 0.0, cell C1 is free to enter a normal Learning process; therefore it will carry out the recognition of the pattern to be learned. Depending on its current state, it may learn or unlearn patterns.

The output of cell C1 is connected to Crossing Cell C3, whose Decision Factor is the control signal μR. Signal μR is considered the result of the analysis carried out by the network and, in the module, it has the function of extracting the learned pattern. In normal functioning conditions, signal μR is applied to the Crossing Cell. This signal varies between 0.0 and 1.0, presenting the level considered as learned by the network. If the value learned by the network is lower than the one at the output of the learning cell, the learned value will appear at the output of the Crossing Cell; otherwise, the output will have the value of indefinition. It is verified that the extraction of the learned pattern is done through the very result of the processing, which happens in the Learning and Comparison functional states. This extraction may also be done in a forced way, with the signal μR = 0.5 as input. This value is sent when the functional state is Consultation, which isolates the unit and imposes no limit restriction, forcing whatever value the cell may have learned to appear.
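One plausible reading of the Crossing Cell rule just described is sketched below; the function name and the exact comparison are hypothetical, intended only to make the extraction mechanism concrete.

    /* Hypothetical Crossing Cell rule: if the network-level result muR is
       below the value learned by the unit, the learned value is passed on;
       otherwise the cell outputs the indefinition value 0.5.              */
    double crossing_cell(double learned, double muR)
    {
        return (muR < learned) ? learned : 0.5;
    }

Under this reading, forcing μR = 0.5 in the Consultation state extracts any pattern learned above the indefinition level.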
Copyright © 2010. IOS Press, Incorporated. All rights reserved.
12.2.2 Paraconsistent Artificial Neural Unit of Pattern Activation

The Paraconsistent Artificial Neural Unit of Pattern Activation is suggested as being composed of only two cells: one Selective Logic Connection Cell and one Simple Logic Connection Cell, both prepared for a maximization process. Figure 12.4 presents the Artificial Neural Unit of Pattern Activation of the PANNet. One of the Unit's inputs is the value μ1a, which comes from the output of the Paraconsistent Artificial Neural System that composes the knowledge acquisition and treatment module described previously.

In the Learning functional state or in the Comparison state, signal ctr2 will have value 0.0; therefore, the Resultant Degree of Evidence μE1b will not undergo any intervention that may change its value. Thus, the signal resulting from the analysis carried out by the system will be compared in the Competition Unit and will follow the normal procedure. When signal ctr2 has value 1.0, it will force the appearance of a maximum Degree of Evidence in the Competition Unit, causing a maximum output in the module. This will also cause, through PANC combinations, an indefinition value of 0.5 in all the other small-nuclei modules of the PANNet. This procedure makes the PANNet module enter the Consultation functional state, whose value of μR = 0.5 will force the pattern of the selected unit, with its learned value, to present itself at the outputs of the Primary Learning and Activation Neural Unit.
Figure 12.4 Paraconsistent Artificial Neural Unit of Pattern Activation.
The selection and signal-directing cells used to enable and disable the units and systems of the network, as well as to combine output signals, may be configured with the PANC family presented in previous chapters.
12.2.3 Paraconsistent Artificial Neural Unit of Selective Competition

The Paraconsistent Artificial Neural Unit of Selective Competition has two basic roles:
a) Find the highest value among the Resultant Degrees of Evidence at the module outputs.
b) Select the module of the winning Degree of Evidence.
To perform these roles, the PANU of Selective Competition sends and receives Degree of Evidence values for comparison; therefore, this unit is interconnected to all PANNet modules. For simplicity, we will consider a 4-module PANNet where the analysis results are compared in the Selective Competition Unit, as seen in figure 12.5.
Figure 12.5 Selective Competition Unit composed of four groups for the analysis of Resultant Degree of Evidence from four neural systems.
The unit shown in the previous figure is composed of four groups, where each group has two Simple Logic Connection Cells (PANCSiLC) and one Selective Logic Connection Cell (PANCSeLC). Hence, this unit performs the competition among the four Resultant Degree of Evidence signals from four modules. The Resultant Degree of Evidence signals that enter the unit are the results of the paraconsistent analysis performed by the modules in the network. This network involves four Neural Systems with distinct knowledge acquisition. When the patterns are presented at the PANNet input, the analysis is immediately performed in the four modules interconnected in parallel, resulting in the four Degree of Evidence signals μR1, μR2, μR3 and μR4.

In the Competition Unit, each group is connected to a network module and receives the resultant degrees from the others through the two Simple Logic Connection Cells of maximization. The result of the analysis performed in the first two cells is always the greatest value among the Resultant Degrees of Evidence produced by the modules. The last cell is the Selective Logic Connection Cell, which performs the comparison between the maximum Resultant Degree of Evidence of the module connected to the group and the Resultant Degrees of Evidence from the other PANNet modules. The Selective Cells of maximization simultaneously analyze the four Resultant Degrees of Evidence, including the result of their own module, and direct the maximum Degree of Evidence value to the activation of the memorized patterns.
As the maximum Degree of Evidence is also applied to the other maximization cells of the Competition Unit, the outputs of the other modules are automatically inhibited, so that the module with the highest Resultant Degree of Evidence is always selected.
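A winner-take-all sketch of this competition, assuming four modules and the inhibition of the losing modules to the indefinition value 0.5 (as described for the Pattern Activation Unit), is given below; the array layout and names are illustrative.

    #include <stdio.h>
    #define MODULES 4

    /* The winning module keeps its Resultant Degree of Evidence; the
       others are inhibited to the indefinition value 0.5.             */
    void compete(const double muR[MODULES], double out[MODULES])
    {
        int winner = 0;
        for (int i = 1; i < MODULES; i++)
            if (muR[i] > muR[winner]) winner = i;
        for (int i = 0; i < MODULES; i++)
            out[i] = (i == winner) ? muR[i] : 0.5;
    }

    int main(void)
    {
        double muR[MODULES] = { 0.70, 0.90, 0.55, 0.60 };
        double out[MODULES];
        compete(muR, out);
        for (int i = 0; i < MODULES; i++)
            printf("module %d: %.2f\n", i + 1, out[i]);  /* module 2 wins */
        return 0;
    }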
12.2.4 Paraconsistent Artificial Neural System of Knowledge Acquisition (PANSKA)

In the proposed PANNet architecture, the Paraconsistent Artificial Neural System of Knowledge Acquisition (PANSKA) is the block that distinguishes one module from another. It is basically composed of neural cells that present a proper and distinct analysis. In this module, the trained cells are interconnected to perform the recognition of a determined pattern with all the characteristics of a Paraconsistent Artificial Neural Network, such as tolerance to noise, pattern association, conditioning, plasticity, memorization and inconsistency treatment. In the learning process, each module is trained separately so that the knowledge acquisition system obtains its particular function.

For example, let us consider a PANNet for the recognition of patterns represented by two inputs. The patterns applied at the inputs may take real values in the closed interval between 0.0 and 1.0. Let us also consider that, at the two network inputs, Falsehood patterns of value 0.0 and Truth patterns of value 1.0 will be applied. For two inputs with these kinds of patterns, only four conditions are necessary to cover all the possibilities: PA = 0.0 and PB = 0.0, or PA = 0.0 and PB = 1.0, or PA = 1.0 and PB = 0.0, or PA = 1.0 and PB = 1.0. To begin the training, a PANNet module is separated and patterns PA = 0.0 and PB = 0.0 are applied at the input several times. Figure 12.6 shows the cells of a Paraconsistent Neural System for a simple process of Pattern Recognition. To make the process clearer, the Crossing Cells that act on the feedback of the learning cells, as studied in the Primary Learning and Pattern Consultation Unit, were omitted.

Cells C1 and C2 are learning cells that learn the Falsehood pattern due to the repeated application of patterns PA = 0.0 and PB = 0.0 at the inputs. When the learning process in the primary cells C1 and C2 is completed, the Degree of Evidence μ1 = 1.0 is presented at the output of the Analytical Connection Cell C3. This value is learned by Learning cell C4, resulting in the Degree of Evidence μER = 1.0. The value of the Degree of Evidence μER resulting from the Learning is sent to the Selective Competition Unit, passing first through the Pattern Activation Unit. In the Learning process, the μER signal does not suffer the influence of the Pattern Activation Unit. Thus, when patterns PA = 0.0 and PB = 0.0 appear at the output, it means that the network module was trained, acquired knowledge, and the process is finalized. The Primary Learning Cells C1 and C2 keep the Falsehood pattern characteristics learned in the training; hence, every time values PA = 0.0 and PB = 0.0 appear at the PANNet inputs, the Neural System of this module responds with a Resultant Degree of Evidence μER = 1.0.
Figure 12.6 Training of the first module for patterns PA= 0.0 and PB = 0.0 in a PANNet.
Values applied close to these are captured and analyzed by the System, resulting in a Degree of Evidence with a real value between 0.0 and 1.0, which in turn will be compared with the results from the other modules installed and trained in the PANNet. The same procedures are followed for the different patterns applied to the PANNet. Therefore, the training follows the procedure described below:
a) The module to be trained is isolated.
b) The patterns are applied at the input until they appear at the output.
c) All the trained modules are connected in parallel.
For this example, a total of four trained modules, which will be interconnected, is obtained in the end. Figure 12.7 presents the configuration of the PANNet with four modules trained to recognize four pairs of patterns. When one of these pairs of patterns is presented at the input, one of the trained modules will present the greatest Resultant Degree of Evidence.
As all the resultant signals are compared in the Competition Unit, the module that recognized the patterns will be the one to activate the pattern of the Crossing Cells. These cells were configured together with the primary cells in the Learning process.
Figure 12.7 Modules of the Paraconsistent Artificial Neural Network for Patterns Recognition.
12.3 Logical Reasoning Module for the Control of a PANNet

We saw that the Paraconsistent Artificial Neural Network (PANNet) is composed of modules that represent interconnected knowledge. They perform analyses and obtain Degree of Evidence values that make the generation of conclusions possible. To make the interconnections between the modules and to carry out the control of the PANNet, Paraconsistent Artificial Neural Systems (PANSs) prepared for these functions are used. We will now study one of these systems of the control module, called the Paraconsistent Artificial Neural System of Logical Reasoning (PANSLR).

12.3.1 The Paraconsistent Artificial Neural Systems of Logical Reasoning (PANSLR)
The Paraconsistent Artificial Neural Systems of Logical Reasoning are the components of the modules that carry out the control, making the logical connections in the network, and are responsible for the inference process. Figure 12.8 shows these interconnections among the modules of a typical PANNet.
Figure 12.8 Interconnections among the modules of a typical PANNet.
The PANSLR is a Paraconsistent Artificial Neural System (PANS) whose function is to infer and to connect the Analysis and Knowledge Acquisition modules of the Paraconsistent Artificial Neural Network (PANNet). We will present the PANSs of Logical Reasoning that perform the logical connections most used by a Neural Network: the types "AND", "OR" and "Exclusive OR" (EXOR).
We saw that the Paraconsistent Artificial Neural Network (PANNet) receives information patterns that are analyzed by a reasoning system capable of inferring by connecting blocks for the treatment of signals. These blocks have distinct functions, such as analysis, memorization and knowledge acquisition, with signal treatment done in a parallel and distributed way, according to the functioning of the brain. In the proposed architecture of the PANNet, the control module follows the theoretical procedure of the PAL2v. The interconnection among the network modules is done through the Resultant Degree of Evidence μE, which is the value at the output of the Paraconsistent Neural System of Logical Reasoning (PANSLR). This way, the value of the Resultant Degree of Evidence μE controls which module will be active and which will be inactive. This may be done through the adjustment of the Certainty Tolerance Factor CerTF that acts on each Analytical Connection Cell. A high Resultant Degree of Evidence μE at the output of a PANSLR releases for analysis the module whose Certainty Tolerance Factor CerTF is connected to that output. A low Resultant Degree of Evidence μE at the output of the PANSLR will impose restrictions on the analysis performed by the corresponding module interconnected to its output. If the Resultant Degree of Evidence μE is zero, the interconnected module is completely inactive, presenting an undefined signal of value 0.5.
12.3.2 Configuration of the Paraconsistent Artificial Neural System of Logical Reasoning (PANSLR)

The Logical Reasoning Systems constructed with Paraconsistent Artificial Neural Cells perform the primitive logical functions of the network, such as AND, OR and EXOR. The basic configuration of the PANSLR is composed of a group of Learning and Analytical Cells. The Learning Cells undergo a training process to detect certain sequences of patterns at the inputs, and the Analytical Cells process the detected values through the Degree of Evidence equation. After the proper training, the cells will be qualified to distinguish different pattern sequences so that they can be compared by a Decision Cell which, in turn, will determine the corresponding value at the output, according to the logical function of the PANSLR.

Figure 12.9 shows the basic structure of a two-input PANS composed of four Learning Cells and four Analytical Cells. The component cells C1 to C4 are the Learning Cells that, by receiving repeated patterns at the inputs, present a maximum output value equal to 1.0. Cells C1 and C3 are prepared to learn the Truth patterns, and cells C2 and C4 are prepared to learn the Falsehood patterns. The component cells C5 to C8 are Analytical Cells which, in this configuration, are already conveniently interconnected to respond to determined combinations of patterns applied at the two inputs, so that their output values will be maximum when they find the corresponding pattern combination.
Figure 12.9 Basic Structure of Logical Reasoning PANS for two inputs.
12.3.3 Paraconsistent Artificial Neural System of Logical Reasoning of minimization (PANSLRMin)

The Paraconsistent Artificial Neural System of Logical Reasoning of minimization (PANSLRMin) uses the basic structure presented in the previous figure, to which two Logic Connection Cells of maximization (OR) and one Decision Cell without the complement on the Unfavorable Degree of Evidence are added. Figure 12.10 shows a configuration of a Paraconsistent Artificial Neural System of Logical Reasoning of minimization (PANSLRMin). After being trained, it has the function of presenting at the output the lowest value among the signals presented at the inputs. In this configuration, it is verified that the Learning, Analytical and Decision Paraconsistent Artificial Neural Cells perform the signal processing using the PAL2v equations.
Figure 12.10 Basic structure of the PANS of Logical Reasoning of minimization (PANSLRMin)
The training process is done individually in each System configuration. Initially, Learning Cells C1 and C3 detect the Truth patterns repeatedly applied at the inputs. In this process they are trained to present the Degree of Evidence 1.0 at the output when patterns μ1A = 1.0 and μ1B = 1.0 are applied at the inputs. The next procedure is to perform the training so that Learning Cells C2 and C4 recognize the Falsehood Patterns μ1A = 0.0 and μ1B = 0.0 at the inputs and present a maximum output value equal to 1.0. The training will be completed when the cells also recognize the patterns μ1A = 1.0, μ1B = 0.0 and μ1A = 0.0, μ1B = 1.0 applied at the inputs.
After the separate training, the input cells are interconnected according to figure 12.10, and their outputs are connected to the Analytical Cells. The Analytical Cells transfer, through the maximization cells C9 and C8, the signal of greatest value to be analyzed by Decision Cell C11. The Simple Logic Connection Cells of maximization C9 and C8 let only the Degrees of Evidence of greatest value pass. This value is applied at the input of Decision Cell C11 as an Unfavorable Degree of Evidence. In cell C11, the signals are analyzed and compared to the Decision Factor (DecTF), which will determine the conclusive output value. Figure 12.11 shows a table with values obtained from the application of patterns of different values at the PANSLRMin inputs. The letters correspond to the values found at determined points of the PANSLRMin configuration shown in the previous figure.
Figure 12.11 Results of the test performed with values applied as patterns at the PANSLRMin inputs.
In the PANSLRMin, the DecTF is externally adjusted and may be the output of the analysis done by other devices in the network. For example, let us consider that the Decision Tolerance Factor is adjusted to 0.5; thus, μE values obtained above 0.5 establish a "True" output with μEMin = 1.0, and values below 0.5 establish a "False" output with μEMin = 0.0. For values of μE obtained at the output of C11 equal to 0.5, the PANS establishes an "Indefinition" of value μEMin = 0.5. In the results presented in the table, it is considered that all the Learning Cells of the configuration had been previously trained for the recognition of their corresponding patterns. The Decision Factor (DecTF) of cell C11 is considered fixed at 0.5; hence, it is verified from the results of the table that, every time one of the patterns applied at the inputs has an undefined value, it provokes an indefinition at the output. The values indicate that the PANSLRMin performs a minimization in the PANNet; therefore it will make inferences by using the connective AND.
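The Decision Cell behavior just described reduces to a simple threshold. The sketch below assumes a strict comparison against the Decision Factor, with equality mapped to Indefinition; the function name is ours.

    /* Decision Cell rule with DecTF = 0.5: above the factor yields True
       (1.0), below yields False (0.0), exactly the factor yields the
       Indefinition value 0.5.                                          */
    double decision_cell(double muE, double DecTF)
    {
        if (muE > DecTF) return 1.0;
        if (muE < DecTF) return 0.0;
        return 0.5;
    }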
12.3.4 Paraconsistent Artificial Neural System of Logical Reasoning of Maximization (PANSLRMax)

The Paraconsistent Artificial Neural System of Logical Reasoning of Maximization (PANSLRMax) uses the same basic structure of the reasoning systems shown in figure 12.10, with the difference that now a maximization process is applied to the signals obtained by the Analytical Cells. The PANSLRMax is composed of ten cells capable of analyzing pattern sequences at two inputs. Therefore, Simple Logic Connection Cells are used in a maximization process to treat the output signals of the basic structure of the Logical Reasoning System. The functioning and the analysis performed by Decision Cell C11 are the same as exposed for the PANS of Logical Reasoning of Minimization in the previous configuration.

In the initial learning treatment, cells C1 and C3 detect the Truth patterns applied repeatedly at the inputs. These cells are trained to present the Degree of Evidence 1.0 at the output when patterns μ1A = 1.0 and μ1B = 1.0 are applied at the inputs. Next, Learning Cells C2 and C4 are trained to recognize the Falsehood patterns μ1A = 0.0 and μ1B = 0.0 at the inputs and present a maximum output value equal to 1.0. The training will only be completed when the cells also recognize the patterns μ1A = 1.0, μ1B = 0.0 and μ1A = 0.0, μ1B = 1.0 applied at the inputs.

After the separate training, the inputs of the cells are interconnected, as shown in figure 12.12, and their outputs are connected to the Analytical Cells. The Analytical Cells transfer, through maximization cells C9 and C8, the signal of greatest value to be analyzed by Decision Cell C11. The Simple Logic Connection Cells of maximization C8 and C9 let only the Degrees of Evidence of greatest value pass. This value is applied at the input of Decision Cell C11 as an Unfavorable Degree of Evidence. In cell C11, the Decision Factor DecTF will determine the conclusive output value. Figure 12.12 shows the proposed PANSLRMax.
Figure 12.12 Basic structure of the PANS of Logical Reasoning of Maximization (PANSLRMax)
The table of figure 12.13 shows the results obtained with the application of significant values at the PANSLRMax inputs. The letters correspond to the values found at determined points of the PANSLRMax configuration shown in the previous figure.
Figure 12.13 Results of tests with values applied as patterns at the PANSLRMax inputs.
The values in the table indicate that the PANSLRMax performs the function of maximization in the PANNet; therefore it makes inferences by using the connective OR.
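Under the same simplifying assumptions, and reusing the decision() helper from the previous sketch, the maximization counterpart reduces to letting the greatest Degree of Evidence pass before the decision analysis:

/* Simple Logical Connection Cell of maximization: only the greatest
   Degree of Evidence passes to the Decision Cell (sketch only).       */
static double max2(double a, double b) { return a > b ? a : b; }

/* OR-like inference of the PANSLRMax, with DecTF fixed at 0.5. */
static double pans_or(double muA, double muB)
{
    return decision(max2(muA, muB), 0.5);
}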
12.3.5 Paraconsistent Artificial Neural System of Exclusive OR Logical Reasoning (PANSExORLR)
The Paraconsistent Artificial Neural System of Exclusive OR Logical Reasoning (PANSExORLR) is composed of the basic structure of the reasoning PANSs, to which two Simple Logical Connection Cells configured for maximization and one Decision Cell are added. Figure 12.14 shows the configuration of the Exclusive OR Logical Reasoning PANS with its eleven component cells.
Figure 12.14 Logical Reasoning PANS for the Exclusive OR function - PANSLREXOR.
The sequences of patterns applied at the inputs which, in an Exclusive OR process, will result in 0.0 are: μ1A = 1.0 and μ1B = 1.0, or μ1A = 0.0 and μ1B = 0.0. These combinations are maximized in cell C10 and analyzed as Unfavorable Degrees of Evidence by Decision Cell C11. The combinations of sequences that will result in 1.0 are: μ1A = 1.0 and μ1B = 0.0, or μ1A = 0.0 and μ1B = 1.0. These combinations have their values maximized by cell C9 and are analyzed by Decision Cell C11 as Favorable Degrees of Evidence. The PANSLREXOR, like the other Paraconsistent Artificial Neural Systems presented, may receive at its inputs any real values in the closed interval [0,1]. The table of figure 12.15 shows the values found in a test by applying significant values at the inputs of the Exclusive OR Logical Reasoning Paraconsistent
Artificial Neural System (PANSLREXOR). The letters correspond to the points of the configuration presented in figure 12.14.
Figure 12.15 Results of the tests carried out with values applied as patterns.
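The routing just described can be sketched in C, reusing the min2(), max2() and decision() helpers from the previous sketches. The trained cells are replaced here by ideal detectors, and the PAL2v-style combination (favorable - unfavorable + 1)/2 producing the resulting Degree of Evidence is an assumption of this sketch, not a transcription of the eleven-cell topology.

/* Sketch of the PANSLREXOR routing (simplified model). */
static double pans_exor(double muA, double muB)
{
    /* Evidence that the inputs agree: fed as the unfavorable degree. */
    double equal  = max2(min2(muA, muB), min2(1.0 - muA, 1.0 - muB));
    /* Evidence that the inputs differ: fed as the favorable degree.  */
    double differ = max2(min2(muA, 1.0 - muB), min2(1.0 - muA, muB));
    double muE = (differ - equal + 1.0) / 2.0; /* assumed PAL2v analysis */
    return decision(muE, 0.5);
}

With μ1A = 1.0 and μ1B = 0.0 the function returns 1.0; with equal extreme inputs it returns 0.0; and with both inputs at the undefined value 0.5 it returns the Indefinition value 0.5.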
12.3.6 Paraconsistent Artificial Neural System of Complete Logical Reasoning (PANSCLR)
In the systems presented, the initial configuration is the same; hence the same initial analysis may be used to obtain various functions from a single PANS, which will be called the Paraconsistent Artificial Neural System of Complete Logical Reasoning (PANSCLR). The PANSCLR produces the three logical functions AND, OR and EXOR, previously studied, simultaneously. The PANSCLR is presented in figure 12.16.
Figure 12.16 Paraconsistent Artificial Neural System of Complete Logical Reasoning - PANSCLR.
The PANSCLR was obtained by combining the various functions over the same structure. It is configured with 16 cells, properly interconnected so that each output Decision Cell presents a signal according to its projected function. In the proposed configuration, the PANSCLR responds with the three functions AND, OR and EXOR; however, other functions may be obtained using the same basic analysis structure, or by installing several similar modules in parallel so that the entire logical analysis may be done simultaneously.
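The grouping idea can be sketched as below, again in C and reusing the helpers from the previous sketches; the 16-cell topology is abstracted into one shared analysis stage feeding three decision outputs, and the struct and its names are illustrative, not the book's notation.

/* Sketch of the PANSCLR: three simultaneous logical outputs. */
struct panclr_out { double f_and, f_or, f_exor; };

static struct panclr_out pans_clr(double muA, double muB)
{
    struct panclr_out o;
    o.f_and  = decision(min2(muA, muB), 0.5);  /* connective AND  */
    o.f_or   = decision(max2(muA, muB), 0.5);  /* connective OR   */
    o.f_exor = pans_exor(muA, muB);            /* connective EXOR */
    return o;
}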
12.4 Final Remarks
Through the study of the proposed architecture, it is verified that Paraconsistent Artificial Neural Networks (PANNets) open, in an innovative and efficient way, great possibilities in the area of analysis and knowledge acquisition. In Paraconsistent Artificial Neural Networks, the modules of
knowledge acquisition are composed of Neural Systems with various functions, such as analysis, memorization and signal treatment. All these Systems are installed in parallel in the knowledge acquisition Module and, controlled by logical reasoning Systems, form an extensive network capable of efficiently modeling various biological functions of the brain. The signal-processing architecture of the PANNet, besides being efficient, is easily implemented, because its equations are simple, so the corresponding algorithms are easy to program with common software tools. The PANNet modules may be trained with different configurations, conditioned Learning, detection of pattern similarity and other connection functions typical of the functioning of the human brain. In the architecture of the PANNet, the Certainty Tolerance Factor (CerTF) and the Contradiction Tolerance Factor (CtrTF) that adjust each cell may change the characteristics of the network, so that subjects of greater responsibility can be analyzed with greater rigor. The Learning Factor (lF) and the Unlearning Factor (ulF), which allow the Learning Cells to learn and unlearn with greater or smaller speed, introduce the possibility of simulating short-term and long-term memory in a distributed way over the network, as sketched below. We saw that, in this proposal of PANNet, the inference process between the several blocks is elaborated by the Paraconsistent Artificial Neural Systems that compose the reasoning System. In this chapter, it was also studied how the Logical Reasoning Systems (PANSLR) are constructed, using interconnections between Paraconsistent Artificial Neural Cells (PANCs) with specially designed configurations. The procedures shown in this chapter for obtaining the Reasoning Systems may be used to obtain other modules capable of elaborating many other logical functions. The logical reasoning modules may be interconnected, composing a global System capable of treating signals in a way similar to the functioning presented by the human brain. The results of the tests demonstrate the efficiency of the PANSLRs that elaborate the main functions "AND", "OR" and "EXOR". Every configuration presented offers possibilities of adaptation to perform other functions with only small modifications of the topology. From the test results, a Complete Paraconsistent Artificial Neural System (PANSCLR) was proposed, in which various logical functions can be grouped into a single module with savings in component cells. The results obtained show that the proposed architecture of the PANNet, with its Paraconsistent Artificial Neural Systems, is efficient in the treatment of signals generated from Uncertain Knowledge, allowing projects structured on the PAL2v fundamentals that are fault-tolerant and simple to implement in conventional computer languages.
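As an illustration of the role of these factors, the fragment below uses a generic exponential update as a stand-in for the Learning Cell equations of the earlier chapters; it is not the PAL2v learning rule itself. A factor near 1.0 memorizes a pattern quickly but also forgets it quickly under the unlearning factor, behaving like short-term memory, while a factor near 0.0 produces slow, persistent, long-term memorization.

/* Stand-in for one Learning Cell update step (illustrative only).
   Pass lF while learning a pattern, ulF while unlearning it.      */
static double learn_step(double muE, double pattern, double factor)
{
    return muE + factor * (pattern - muE);
}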
Exercises
12.1 Define Artificial Neural Networks.
12.2 Describe the processing differences between a computer and the human brain.
12.3 Explain the similarities between the processing of the human brain and the analysis of the PAL2v.
12.4 How is the architecture proposed for the Paraconsistent Artificial Neural Networks constructed?
12.5 Give a brief description of a PANNet.
12.6 Sketch the configuration and describe the functioning of a Paraconsistent Artificial Neural Unit of Learning, primary and pattern consultation.
12.7 Sketch the configuration and describe the functioning of a Paraconsistent Artificial Neural Unit of pattern activation.
12.8 Sketch the configuration and describe the functioning of a Paraconsistent Artificial Neural Unit of selective competition.
12.9 Describe the functioning and sketch the configuration of a Paraconsistent Artificial Neural System of Knowledge Acquisition (PANSKA).
12.10 Write, in the C programming language or in another usual programming language, an executable program implementing the Paraconsistent Artificial Neural System of Knowledge Acquisition (PANSKA).
12.11 Describe the functioning and sketch the configuration of a Paraconsistent Artificial Neural System of Logical Reasoning (PANSLR).
12.12 Write, in the C programming language or in another usual programming language, an executable program implementing the Paraconsistent Artificial Neural System of Logical Reasoning (PANSLR).
12.13 Sketch the configuration and describe the functioning of a Paraconsistent Artificial Neural System of Logical Reasoning of Minimization (PANSLRMin).
12.14 Write, in the C programming language or in another usual programming language, an executable program implementing the Paraconsistent Artificial Neural System of Logical Reasoning of Minimization (PANSLRMin).
12.15 Sketch the configuration and describe the functioning of a Paraconsistent Artificial Neural System of Logical Reasoning of Maximization (PANSLRMax).
12.16 Write, in the C programming language or in another usual programming language, an executable program implementing the Paraconsistent Artificial Neural System of Logical Reasoning of Maximization (PANSLRMax).
12.17 Describe the functioning and sketch the configuration of an Exclusive OR Logical Reasoning Paraconsistent Artificial Neural System (PANSLREXOR).
12.18 Write, in the C programming language or in another usual programming language, an executable program implementing the Exclusive OR Logical Reasoning Paraconsistent Artificial Neural System (PANSLREXOR).
12.19 Describe the functioning and sketch the configuration of a Paraconsistent Artificial Neural System of Complete Logical Reasoning (PANSCLR).
12.20 Write, in the C programming language or in another usual programming language, an executable program implementing the Paraconsistent Artificial Neural System of Complete Logical Reasoning (PANSCLR).
Final Comments
Introduction
With the aim of improving the quality of decision-making analysis Systems, there has been an increase in research on the application of different types of logic. In large research centers, new ways of giving a distinguished treatment to the pieces of information that arrive at the data bank for analysis have been studied. Due to technological advances, the capacity for storing information in memory banks is getting higher and higher. This amount of information is very large and, besides coming from several sources, is likely to be impregnated with inconsistencies. When better reliability is desired in the treatment of these uncertain pieces of information, digital system projects must be able to work with new types of logic whose basic theoretical concepts are more flexible than the Classical ones.
To contribute to the resolution of these problems, we presented, in this book, Uncertainty Treatment methods based on the PAL2v. In the first chapters, with the purpose of qualifying the reader for the development of new projects, we started with explanations of the fundamentals of PAL2v through examples. From these fundamentals, in the following chapters, techniques and methodologies for applications with Paraconsistent Analysis Nodes (PANs) were developed. These methods will enable and motivate the reader to undertake new Uncertainty Treatment projects in several fields of knowledge. It can be noted, from the chapters where the problem of Uncertainty Treatment was detailed, that systems projected with Non-Classical Logics, like the Paraconsistent Logic used here, are adaptable to the resolution of complex problems.
Besides the Paraconsistent Analysis Network (PANet) for Uncertainty Treatment, we also introduced the Paraconsistent Artificial Neural Networks (PANNets). The representation of a PAN as a Paraconsistent Artificial Neural Cell, a component of Artificial Neural Networks (ANNs), opens a vast research field based on the theory of the brain's functional processes. Besides applications in robotics and other Artificial Intelligence systems, the interest in research on Paraconsistent Logic in the form of Neural Networks extends to various other fields of knowledge, like Genetic Engineering, Neurology, Psychology and Biology.
In the areas of Engineering and Computer Science, Paraconsistent Artificial Neural Networks, with their characteristic parallel processing, are a good option for solving complex problems in pattern recognition, Expert Systems and the optimization of dynamic systems in the area of Automation and Control. As was done with Artificial Neural Networks, the studies of Paraconsistent Artificial Neural Networks have brought excellent results in classification and pattern recognition, and concrete new applications in Artificial Intelligence will soon be accomplished. In this book, we presented the fundamentals of Paraconsistent Artificial Neural Networks, where it can be verified that the Paraconsistent Annotated Logic
(PAL), classified as Non-Classical, is a good solution for structuring these types of neural networks. It is known that, in some cases, direct applications of Artificial Neural Network theories are difficult for several reasons. When the analyses are based only on evidence, or on uncertain knowledge, Systems that utilize Classical or binary Logic are ineffectual or even impossible to apply. Therefore, the results of the application of the PAL2v methodology in Paraconsistent Artificial Neural Networks present distinguished advantages, as shown in this book.
E.1 Applications
Since Paraconsistent Logic, a logic that withstands contradictions in its fundamental concepts, first appeared, there has been a search for means and ways to apply it to AI systems, so that contradictions could be treated in a practical and efficient way. The application methodology of Paraconsistent Logic through its extended form, the PAL2v, as presented in this book, is a most successful one. Furthermore, the methodology of applications that utilize the PANet and the PANNet has the potential for use in diverse fields of knowledge. The Paraconsistent Annotated Logic has become an object of research in several institutions, producing relevant research work on implementations in Expert Systems and Robotics that use Paraconsistent Analysis Networks (PANets) and Paraconsistent Artificial Neural Networks (PANNets). With the purpose of showing some of the themes already developed and how they may contribute by bringing new ideas for applications, some of these accomplishments are listed below.
1- Emmy Robot, the first robot that functions with a Paraconsistent Logic Control System. An Autonomous Mobile Robot constructed in 1999. In this project, a Control System called ParaControl, based on the fundamentals of the PAL2v, carries out the analysis of two signals originated from ultrasound sensors. The result of this paraconsistent logic treatment is used to make the robot stray away from obstacles. This work was presented at the Polytechnic School of São Paulo University (POLI-USP) to illustrate the doctorate thesis: [DA SILVA FILHO, J.I., "Methods of interpretation of the Paraconsistent Annotated Logic with annotation of two values (PAL2v) with construction of Algorithms and implementation of Electronic Circuits", EPUSP, Doctorate Thesis, São Paulo, Brazil, 1999].
2- Paraconsistent Expert System for Pattern Recognition in Speech. This project deals with the construction of a Paraconsistent Expert System capable of recognizing sound patterns from a human being's spoken language.
This computational program was built with Paraconsistent Artificial Neural Networks and was used in a Master's Dissertation in 2002 at the Polytechnic School of São Paulo University (POLI-USP): [PRADO, J.A., Paraconsistent Artificial Neural Networks and their utilization for pattern recognition, Master's Dissertation, POLI-USP, São Paulo, Brazil, 2002].
3- Case-Based Paraconsistent Reasoning System (CBPRS) for the Re-establishment of Electrical Power Systems. This project deals with the use of the PAL2v interpretation methodology in one more dimension, that is, stepping into PAL3v and PAL4v. Thus, an Expert System was developed to help in the diagnosis for the re-establishment of Electrical Power Systems. This work was developed in a Doctorate thesis presented in 2003 at UNIFEI (Itajubá Federal University): [MARTINS, H.G., The Four-valued Annotated Paraconsistent Logic – 4vAPL Applied to a Case-Based Paraconsistent Reasoning System for the Re-establishment of Electrical Power Systems, UNIFEI, Doctorate Thesis, Itajubá, MG, Brazil, 2003].
4- Emmy II, with a Microprocessed Paraconsistent Logic Controller (second version). An Autonomous Mobile Robot built in 2004. In this second version, the robot's Paraconsistent Control Systems are microprocessed. Various signal treatment resources utilizing the principles of the PAL2v are added to its functioning through microprocessor programs. This project was presented in a Master's defense at UNIFEI (Itajubá Federal University): [TORRES, C.R., Paraconsistent Intelligent System for Autonomous Mobile Robot Control, Master's Dissertation, Itajubá Federal University – UNIFEI, Itajubá, Brazil, 2004].
5- Signal Classifier System utilizing Paraconsistent Artificial Neural Networks (PANNets). This project deals with a Paraconsistent Expert System constructed with Paraconsistent Artificial Neural Networks (PANNets) to carry out pattern recognition. The Paraconsistent Signal Classifying System (PSCs), constructed in 2003, carries out pattern recognition on signal amplitude and frequency forms (periodic functions). It is able to learn 4 different patterns and make the classification through characteristic families. This work was presented in a Master's defense at UFU (Uberlândia Federal University): [MARIO, M.C., A Proposal of Application of Paraconsistent Artificial Neural Networks as a Signal Classifier utilizing Functional Approximation, Master's Dissertation, UFU, Uberlândia, MG, Brazil, 2003].
6- Character Recognition System with Paraconsistent Artificial Neural Networks (PANNets). The Paraconsistent Character Recognizer System of several sources (PCRsn) is an Expert System constructed with Paraconsistent Artificial Neural Networks (PANNets). This research work deals with the development of a PANNet to perform the recognition of characters from several sources, even if they come impregnated with noise. The Paraconsistent Character Recognizer System (PCRsn) was presented in 2003 in a Master's defense at UFU (Uberlândia Federal University): [FERRARA, L.F.P., Artificial Neural Networks Applied to a Character Recognizer, Master's Dissertation, UFU, Uberlândia, MG, Brazil, 2003].
7- Paraconsistent Expert System to support the diagnosis of cephalometric analyses. This Paraconsistent System was constructed with Paraconsistent Artificial Neural Networks (PANNets) to analyze cephalometric measurements in order to support orthodontic diagnostics. The System uses the information provided by cephalometric analyses to perform pattern recognition and, from determined parameters, establishes the best appliance to be worn by the patient during dental treatment. This work was developed for a doctorate thesis presented in 2006 at the Medical School of São Paulo University (FM-USP): [MARIO, M.C., Analysis of skull measurement variables utilizing Paraconsistent Artificial Neural Networks, Doctorate Thesis, FM-USP, São Paulo, Brazil, 2006].
We have presented here just some of the outstanding works with relevant results that used the theory and the methodology presented in this book. Nonetheless, nowadays, in several research centers, many other projects are being developed. The main research and application areas of the PAL2v with Paraconsistent Analysis Networks (PANets) and Paraconsistent Artificial Neural Networks (PANNets) are:
- Signal Classifying Systems utilizing Paraconsistent Artificial Neural Networks (PANNets).
- Autonomous Mobile Robots with Paraconsistent Logic Controllers configured with PANNets.
- Character Recognizer Systems with Paraconsistent Artificial Neural Networks (PANNets).
- Development of Paraconsistent Expert Systems to support medical and odontological diagnoses.
- Applications of Paraconsistent Artificial Neural Networks (PANNets) in Image Recognition for medical, biological and odontological diagnoses.
- Development of Control Systems and Paraconsistent Simulators for Autonomous Mobile Robots.
- Applications of Paraconsistent Artificial Neural Networks (PANNets) in Intelligent Systems for Sound and Image Pattern Recognition and for modeling of the brain.
- Applications of Expert Systems to support the re-establishment of Electrical Energy Systems and transmission lines after contingencies.
- Applications of Paraconsistent Logic in Expert Systems for the analysis of sea water and air pollution levels.
- Applications of Paraconsistent Logic in projects of aid devices for the visually impaired.
- Development of Expert Systems with Paraconsistent Logic for State Estimation in Power Systems and Electrical Power Distribution Networks.
E.2 Final Remarks
In the chapters of this book, it was demonstrated that the application of the Paraconsistent Annotated Logic (PAL) to Uncertainty Treatment provides a more efficient tool, because logical states not permitted by Classical Logic are accepted and considered in its structure. With Uncertainty Treatment based on Paraconsistent Logic, it is possible to obtain logical states that comprehend real situations like contradiction and indefinition. The non-classical structure of the Paraconsistent Annotated Logic is flexible enough to deal with information representative of uncertain situations, and it is compatible with the behavior of neurons. Therefore, it is more appropriate for modeling the whole process of uncertain knowledge treatment that the brain executes with extreme ability. In the future, the Paraconsistent Analysis Networks, constructed with PANs, and the Paraconsistent Artificial Neural Networks, constructed with PANUs (ParaPerceptrons), will be able to reproduce the human being's mental activities for decision making more easily through computational programs. Shortly, these networks will perform the whole process of pattern recognition, inference, analysis and decision making, starting from a number of alternatives, in a way very similar to the processing that happens in the human brain. It is a concrete fact that, with the fundamentals and concepts studied, which bring out the relevant characteristics from the analyses of the test results, great possibilities open up in the research field and in future applications of Paraconsistent Analysis Networks. The results presented make it clear that the procedures for Paraconsistent Analysis Network projects are dynamic and ease the modeling of several fields of knowledge. Hence, the Paraconsistent Analysis Networks (PANets) and the Paraconsistent Artificial Neural Networks (PANNets) are effective contributions to research in Artificial Intelligence.
References

[1] ABE, J.M., "Fundamentos da Lógica Anotada", Doctoral Thesis (in Portuguese), FFLCH/USP, São Paulo, 1992.
[2] ABE, J.M. & DA SILVA FILHO, J.I., Simulating Inconsistencies in a Paraconsistent Logic Controller, International Journal of Computing Anticipatory Systems, vol. 12, ISSN 1373-5411, ISBN 2-9600262-1-7, 315-323, 2002.
[3] ABE, J.M., DA SILVA FILHO, J.I. & NAKAMATSU, K., A Logical System for Reasoning with Inconsistent Deontic Modalities, International Journal of Computing Anticipatory Systems, vol. 12, ISSN 1373-5411, ISBN 2-9600262-1-7, 25-34, 2002.
[4] ABE, J.M. & DA SILVA FILHO, J.I., Manipulating Conflicts and Uncertainties in Robotics, Multiple-Valued Logic and Soft Computing, vol. 9, ISSN 1542-3980, 147-169, 2003.
[5] ABE, J.M., DA SILVA FILHO, J.I. & NAKAMATSU, K., Paraconsistent multimodal systems Fn, Proceedings, The 6th World Multiconference on Systemics, Cybernetics and Informatics, SCI'2002, Vol. XVI, Computer Science III, Edts. N. Callaos, T. Ebisuzaki, B. Starr, J.M. Abe & D. Lichtblau, organized by IIIS – International Institute of Informatics and Systemics, ISBN 980-07-8150-1, Orlando, Florida, USA, 197-201, 2002.
[6] ABE, J.M., SCALZITTI, A., NAKAMATSU, K. & DA SILVA FILHO, J.I., Incorporating Time in Paraconsistent Reasoning, Proceedings, The 6th World Multiconference on Systemics, Cybernetics and Informatics, SCI'2002, Vol. XVI, Computer Science III, Edts. N. Callaos, T. Ebisuzaki, B. Starr, J.M. Abe & D. Lichtblau, organized by IIIS – International Institute of Informatics and Systemics, ISBN 980-07-8150-1, Orlando, Florida, USA, 216-220, 2002.
[7] ABE, J.M., DA SILVA FILHO, J.I. & CARVALHO, F.R., "Para-Analyzer and Its Applications", Advances in Logic Based Intelligent Systems, Selected Papers of LAPTEC 2005, Frontiers in Artificial Intelligence and Its Applications, pp. 153-160, ISSN 0922-6389, IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington, DC, 2005.
[8] ALCHOURRÓN, C. & MAKINSON, D., "Hierarchies of regulations and their logic", in R. Hilpinen (ed.), New Studies in Deontic Logic, pp. 123-148, D. Reidel, 1981.
[9] ANAND, R. & SUBRAHMANIAN, V.S., "A Logic Programming System Based on a Six-Valued Logic", AAAI/Xerox Second Intl. Symp. on Knowledge Engineering, Madrid, Spain, 1987.
[10] ARRUDA, A.I., DA COSTA, N.C.A. & CHUAQUI, R., Proceedings of the Third Latin-American Symposium on Mathematical Logic, North-Holland, Amsterdam, 1977.
[11] CRESSWELL, M.J., Logics and Languages, Methuen and Co., London, 1973.
[12] NAKAMATSU, K., ABE, J.M. & SUZUKI, A., Extended Vector Annotated Logic Programming and its Applications to Robot Action Control and Automated Safety Verification, Hybrid Information Systems, Advances in Soft Computing, Editors A. Abraham & M. Köppen, Physica-Verlag, A Springer-Verlag Company, ISBN 3-7908-1480-6, ISSN 1615-3871, 665-679, 2002.
[13] AKAMA, S. & ABE, J.M., Natural Deduction and General Annotated Logics, The First International Workshop on Labelled Deduction (LD'98), Freiburg, Germany, 1-14, 1998.
[14] BISHOP, C., Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[15] BLAIR, H.A. & SUBRAHMANIAN, V.S., Paraconsistent Foundations for Logic Programming, Journal of Non-Classical Logic, 5, 2, 45-53, 1988.
[16] DA COSTA, N.C.A., On the Theory of Inconsistent Formal Systems, Notre Dame Journal of Formal Logic, 15, 497-510, 1974.
[17] DA COSTA, N.C.A., ABE, J.M. & SUBRAHMANIAN, V.S., Remarks on annotated logic, Zeitschrift f. math. Logik und Grundlagen d. Math., 37, pp. 561-570, 1991.
[18] DA SILVA FILHO, J.I., "Métodos de interpretação da Lógica Paraconsistente Anotada com anotação com dois valores LPA2v com construção de Algoritmo e implementação de Circuitos Eletrônicos", Doctoral Thesis (in Portuguese), EPUSP, São Paulo, 1999.
[19] DA SILVA FILHO, J.I. & ABE, J.M., Para-Analyser and Inconsistencies in Control Systems, Proceedings of the IASTED International Conference on Artificial Intelligence and Soft Computing (ASC'99), August 9-12, Honolulu, Hawaii, USA, 78-85, 1999.
[20] DA SILVA FILHO, J.I. & ABE, J.M., Paraconsistent electronic circuits, International Journal of Computing Anticipatory Systems, vol. 9, ISSN 1373-5411, ISBN 2-9600262-1-7, 337-345, 2001.
[21] DA SILVA FILHO, J.I. & ABE, J.M., Paraconsistent analyser module, International Journal of Computing Anticipatory Systems, vol. 9, ISSN 1373-5411, ISBN 2-9600262-1-7, 346-352, 2001.
[22] DA SILVA FILHO, J.I. & ABE, J.M., Emmy: a paraconsistent autonomous mobile robot, in Logic, Artificial Intelligence, and Robotics, Proc. 2nd Congress of Logic Applied to Technology –
LAPTEC'2001, Edts. J.M. Abe & J.I. Da Silva Filho, Frontiers in Artificial Intelligence and Its Applications, Vol. 71, IOS Press, Amsterdam, Ohmsha, Tokyo, ISBN 1-58603-206-2 (IOS Press), 4-274-90476-8 C3000 (Ohmsha), ISSN 0922-6389, 53-61, 287p., 2001.
[23] DA SILVA FILHO, J.I. & ABE, J.M., Para-Fuzzy Logic Controller – Part I: A New Method of Hybrid Control Indicated for Treatment of Inconsistencies Designed with the Junction of the Paraconsistent Logic and Fuzzy Logic, Proceedings of the International ICSC Congress on Computational Intelligence Methods and Applications CIMA'99, Rochester Institute of Technology, RIT, Rochester, N.Y., USA, ISBN 3-906454-18-5, Editors: H. Bothe, E. Oja, E. Massad & C. Haefke, ICSC Academic Press, International Computer Science Conventions, Canada/Switzerland, 113-120, 1999.
[24] DA SILVA FILHO, J.I. & ABE, J.M., Para-Fuzzy Logic Controller – Part II: A Hybrid Logical Controller Indicated for Treatment of Fuzziness and Inconsistencies, Proceedings of the International ICSC Congress on Computational Intelligence Methods and Applications CIMA'99, Rochester Institute of Technology, RIT, Rochester, N.Y., USA, ISBN 3-906454-18-5, Editors: H. Bothe, E. Oja, E. Massad & C. Haefke, ICSC Academic Press, International Computer Science Conventions, Canada/Switzerland, 106-112, 1999.
[25] DA SILVA FILHO, J.I. & ABE, J.M., Para-Control: An Analyser Circuit Based on Algorithm for Treatment of Inconsistencies, Proc. of the World Multiconference on Systemics, Cybernetics and Informatics, ISAS, SCI 2001, Vol. XVI, Cybernetics and Informatics: Concepts and Applications (Part I), ISBN 980-07-7556-0, 199-203, Orlando, Florida, USA, 2001.
[26] DA SILVA FILHO, J.I., ROCCO, A., MARIO, M.C. & FERRARA, L.F.P., Annotated Paraconsistent Logic applied to an Expert System Dedicated to Supporting an Electric Power Transmission System Re-Establishment, IEEE Power Engineering Society, PSCE 2006 – Power Systems Conference and Exposition, pp. 2212-2220, ISBN 1-4244-0178-X, Atlanta, USA, 2006.
[27] FAUSETT, L., Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice Hall, 1994.
[28] FERRARA, L.F.P., Redes Neurais Artificiais Aplicada em um Reconhecedor de Caracteres, Master's Dissertation (in Portuguese), UFU, Uberlândia, MG, 2003.
[29] FERRARA, L.F.P., YAMANAKA, K. & DA SILVA FILHO, J.I., "A System of Recognition of Characters on Paraconsistent Artificial Neural Networks", Advances in Logic Based Intelligent Systems, Selected Papers of LAPTEC 2005, Frontiers in Artificial Intelligence and Its Applications, pp. 127-134, ISSN 0922-6389, IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington, DC, 2005.
[30] FISCHLER, M.A. & FIRSCHEIN, O., Intelligence: The Eye, the Brain and the Computer, Addison-Wesley Publishing Company, USA, 1987.
[31] GALLANT, S.I., Neural Network Learning and Expert Systems, MIT Press, 1993.
[32] HAACK, S., Deviant Logic, Cambridge University Press, Cambridge, 1974.
[33] HASSOUN, M.H., Fundamentals of Artificial Neural Networks, MIT Press, 1995.
[34] HAYKIN, S., Neural Networks: A Comprehensive Foundation, Prentice Hall, New York, 1994.
[35] HEBB, D., The Organization of Behavior, Wiley, New York, 1949.
[36] MARIO, M.C., Proposta de Aplicação das Redes Neurais Artificiais Paraconsistentes como Classificador de Sinais utilizando Aproximação Funcional, Master's Dissertation (in Portuguese), UFU, Uberlândia, MG, 2003.
[37] MARIO, M.C., "Análise de variáveis craniométricas utilizando as Redes Neurais Artificiais Paraconsistentes", Doctoral Thesis (in Portuguese), FM-USP, São Paulo, 2006.
[38] MARTINS, H.G., A Lógica Paraconsistente Anotada de Quatro Valores – LPA4v Aplicada em um Sistema de Raciocínio Baseado em Casos para o Restabelecimento de Subestações Elétricas, Doctoral Thesis (in Portuguese), UNIFEI, Itajubá, MG, 2003.
[39] MINSKY, M. & PAPERT, S., Perceptrons: An Introduction to Computational Geometry, MIT Press, 1969.
[40] McCULLOCH, W.S. & PITTS, W., "A Logical Calculus of the Ideas Immanent in Nervous Activity", Bulletin of Mathematical Biophysics, 1943.
[41] NELSON, D., Negation and separation of concepts in constructive systems, in A. Heyting (ed.), Constructivity in Mathematics, North-Holland, Amsterdam, 208-225, 1959.
[42] NAKAMATSU, K. & ABE, J.M., Reasoning Based on Vector Annotated Logic Programs, Proceedings of CIMCA'99, International Conference on Computational Intelligence for Modelling, Control and Automation, edited by M. Mohammadian, IOS Press – Ohmsha, ISBN 90-5199-474-5 (IOS Press), Netherlands, 396-403, 1999.
[43] NAKAMATSU, K., ABE, J.M. & SUZUKI, A., An approximate reasoning in a framework of vector annotated logic programming, The Vietnam-Japan Bilateral Symposium on Fuzzy Systems and Applications, VJFUZZY'98, Nguyen H. Phuong & Ario Ohsato (Eds.), Ha Long Bay, Vietnam, 521-528, 1998.
[44] PRADO, J.A., Redes neurais artificiais paraconsistentes e sua utilização para reconhecimento de padrões, Master's Dissertation (in Portuguese), POLI-USP, São Paulo, 2002.
[45] RESCONI, G. & ABE, J.M., Multilevel uncertainty logic, Quaderni del Seminario Matematico di Brescia, no. 14/97, Università Cattolica del Sacro Cuore and Università degli Studi di Brescia, 28p., Italy, 1997.
[46] RICH, E. & KNIGHT, K., Artificial Intelligence, Makron Books, 2nd ed., São Paulo, 1994.
[47] ROSENBLATT, F., Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books, New York, 1962.
[48] SCALZITTI, A., DA SILVA FILHO, J.I. & ABE, J.M., A formalization for signal analysis of information in annotated paraconsistent logics, in Logic, Artificial Intelligence, and Robotics, Proc. 2nd Congress of Logic Applied to Technology – LAPTEC'2001, Edts. J.M. Abe & J.I. Da Silva Filho, Frontiers in Artificial Intelligence and Its Applications, Vol. 71, IOS Press, Amsterdam, Ohmsha, Tokyo, ISBN 1-58603-206-2 (IOS Press), 4-274-90476-8 C3000 (Ohmsha), ISSN 0922-6389, 315-323, 287p., 2001.
[49] SIEBERT, W., "Stimulus Transformations in the Peripheral Auditory System", in Recognizing Patterns, Ed. Murray Eden, MIT Press, Cambridge, 1968.
[50] SUBRAHMANIAN, V.S., On the Semantics of Quantitative Logic Programs, Proc. 4th IEEE Symposium on Logic Programming, Computer Society Press, Washington, D.C., 1987.
[51] SYLVAN, R. & ABE, J.M., On general annotated logics, with an introduction to full accounting logics, Bulletin of Symbolic Logic, 2, 118-119, 1996.
[52] TORRES, C.R., Sistema Inteligente Paraconsistente para Controle de Robôs Móveis Autônomos, Master's Dissertation (in Portuguese), Universidade Federal de Itajubá – UNIFEI, Itajubá, 2004.