Applications of Computational Intelligence in Multi-Disciplinary Research
ISBN 9780128239780


English · 221 pages · 2022



Table of contents:
Front Cover
Applications of Computational Intelligence in Multi-Disciplinary Research
Copyright Page
List of contributors
1 Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern
1.1 Introduction
1.2 Related works
1.3 Iris localization
1.4 Iris normalization
1.5 The proposed feature extraction scheme
1.6 Matching results
1.7 Performance evaluation
1.8 Conclusion
2 A novel crypt-intelligent cryptosystem
2.1 Introduction
2.2 Related work
2.2.1 Machine learning contributions in cryptology
  Analogy between machine learning and cryptography
  Application of machine learning in cryptography
  Application of machine learning in cryptanalysis
  Analysis of existing contributions of machine learning in cryptology
2.2.2 Genetic algorithm contributions in cryptology
  Applications of genetic algorithm in cryptography
  Applications of genetic algorithm in cryptanalysis
  Analysis of existing contributions of genetic algorithms in cryptology
2.2.3 Neural network contributions in cryptology
  Applications of neural networks in cryptography
  Applications of neural networks in cryptanalysis
  Analysis of contribution of neural network in cryptology
2.2.4 Background of DNA cryptography
  Analysis of existing work in DNA cryptography
2.3 Proposed methodology
2.3.1 Proposed encryption scheme
2.3.2 Proposed decryption scheme
2.4 Discussion
2.5 Conclusion and future work
3 Behavioral malware detection and classification using deep learning approaches
3.1 Introduction
3.1.1 Digital forensics—malware detection
3.1.2 Malware evolution and its taxonomy
3.1.3 Machine learning techniques for malware analysis
3.1.4 Behavioral analysis of malware detection
3.2 Deep learning strategies for malware detection
3.2.1 Feature extraction and data representation
3.2.2 Static analysis
  Byte code n-gram features
  Opcode n-gram features
  Portable executables
  String feature
3.2.3 Dynamic analysis
3.2.4 Hybrid analysis
3.2.5 Image processing techniques
3.3 Architecture of CNNs for malware detection
3.3.1 Preprocessing
3.3.2 Classification using CNNs
3.3.3 Evaluation
3.4 Comparative analysis of CNN approaches
3.5 Challenges and future research directions
3.6 Conclusion
4 Optimization techniques and computational intelligence with emerging trends in cloud computing and Internet of Things
4.1 Introduction
4.1.1 Introduction to optimization
4.1.2 Introduction to cloud computing with emphasis on fog/edge computing
4.2 Optimization techniques
4.2.1 An optimization problem
  Defining an optimization problem
  Elements of an optimization problem
  Classification of the optimization problem
    On the basis of types of constraints
    On the basis of the physical structure of the problem
    On the basis of the nature of the design variables
    On the basis of the nature of the equations (constraints and objective functions)
    On the basis of the separable nature of the variables
    On the basis of the deterministic nature of the variables
    On the basis of the permissible values of the decision variables
    On the basis of the number of objectives
4.2.2 Solution to the optimization problem
  Classical optimization techniques
  Advanced optimization techniques
4.3 Understanding fog/edge computing
4.3.1 What is fog?
4.3.2 Prelude to our framework
4.3.3 Our goal
4.3.4 Framework for fog computing
4.4 Optimizing fog resources
4.4.1 Defining optimization problem for fog layer resources
4.4.2 Optimization techniques used
4.5 Case studies
4.5.1 Case study I: floorplan optimization
4.5.2 Case study II: Gondwana—optimization of drinking water distribution system
4.6 Scope of advancements and future research
4.7 Conclusion
5 Bluetooth security architecture cryptography based on genetic codons
5.1 Introduction
5.1.1 Bluetooth
5.1.2 Bluetooth security architecture
5.2 Survey of literature
5.3 Plaintext-to-ciphertext conversion process
5.3.1 Basic workflow
  Encryption
  Decryption
5.3.2 Algorithm
  Encryption
    Plaintext to DNA/RNA codon conversion
    Promoter addition
      Generation of promoters
      Promoter addition
    Intron addition
      Intron number generation
      Position to place the introns
      Placing the introns at their positions
    Masking of the ciphertext
    Extra data
  Decryption
    Removal of the mask
    Removal of the introns
    Removal of the promoter
    Conversion of the ciphertext without the intron and promoter to plaintext
5.3.3 Analysis and discussion
5.4 Conclusion
5.5 Future work
6 Estimation of the satellite bandwidth required for the transmission of information in supervisory control and data acquisition systems
6.1 Introduction
6.2 Supervisory control and data acquisition systems
6.3 The very small aperture terminal networks
6.3.1 The satellite communication systems
6.3.2 Architecture very small aperture terminal networks
6.3.3 Connectivity
6.3.4 Multiple access
6.4 Algorithm for estimating the satellite bandwidth
6.4.1 Determining the bandwidth required for data transmission
6.4.2 Case study
6.4.3 Overview of some recent algorithms in detail
6.4.4 Validation of bandwidth calculations
6.5 Challenges and future work
6.6 Conclusions
7 Using artificial intelligence search in solving the camera placement problem
7.1 Introduction
7.1.1 The roles of visual surveillance systems
7.1.2 The camera placement problem from an artificial intelligence perspective
7.1.3 Chapter description
7.2 Background
7.3 Modeling the visual sensors
7.3.1 The sensor space modeling
7.3.2 The camera coverage modeling
7.3.3 The analysis of camera visibility
7.4 Solving the camera placement problem using artificial intelligence search
7.4.1 Generate and test algorithm
7.4.2 Uninformed search
7.4.3 Hill climbing strategy
7.5 Further discussion
7.5.1 The efficiency of the algorithms
7.5.2 The performance of the algorithms
7.6 Conclusion
8 Nanotechnology and applications
8.1 Introduction
8.2 Nanoscience and nanotechnology
8.3 Computational nanotechnology
8.3.1 Molecular modeling
  Molecular mechanics
  Quantum methods
  Semiempirical
  Molecular dynamics
8.3.2 Nanodevice simulation
8.3.3 Nanoinformatics
8.3.4 High-performance computing
8.3.5 Computational intelligence
  Genetic algorithms
  Artificial neural networks
  Fuzzy system
8.4 Applications of computational nanotechnology
8.4.1 Nanotube-based sensors and actuators
8.4.2 Nanoinformatics for drugs
8.4.3 Molecular docking
8.4.4 Nanotoxicology
8.4.5 Other applications
8.5 Conclusion
9 Advances of nanotechnology in plant development and crop protection
9.1 Introduction
9.2 Agriculture’s nanofarming: a modern frontier
9.3 Synthesis of green nanoparticles and its sources
9.4 Good distribution possibilities allowed by nanoparticles: a modern sustainable agriculture portal
9.5 Nanofertilizers: a good food supply for crops
9.6 Germination, field production, and efficiency enhancement of seed nanomaterials
9.7 Plant sensory systems and responses to radical climate change influences nanomaterials
9.8 Nanosensors and nanomaterials: perturbation detection and control
9.9 Pesticide-based plant safety nanomaterials
9.10 Nanotechnology in pesticides and fertilizers
9.11 Control of plant pests
9.12 Concluding remarks
Consent for publication
Conflict of interest
10 A methodology for designing knowledge-based systems and applications
10.1 Introduction
10.2 Related work
10.3 Design the knowledge-based system
10.3.1 The architecture of a knowledge-based system
10.3.2 The process for designing the knowledge-based system
10.4 Knowledge base and inference engine of a knowledge-based system
10.4.1 Design the knowledge base
  Organize the knowledge base
  Basic knowledge manipulations
  Updating the knowledge base
  Checking the consistency of the knowledge base
  Unification of facts
10.4.2 Design the inference engine
  The process for designing the inference engine
  The principles of an inference engine
  Criteria of an inference engine
  The process for designing an inference engine
  The reasoning methods
    Forward chaining
    Backward chaining
    Reasoning with pattern problems and sample problems
10.5 Applications
10.5.1 Design an intelligent problem solver for solving solid geometry at high school
  Collect the knowledge domain
  Build the knowledge model
  Organize the knowledge base
  Design the inference engine
  Testing
10.5.2 Consultancy system for designing housing architecture
  Organize the knowledge base of the consultancy system
  Design the inference engine of the consultancy system
  Testing
10.6 Conclusion and future work
11 IoT in healthcare ecosystem
11.1 Introduction
11.2 Applications of Internet of Things in healthcare
11.2.1 Patient-centric IoT
  Remote patient care
  Pathology and fatal viral/bacterial diseases
  Critical and emergency patient care
  Food and workout monitoring
  Affective computing
11.2.2 Hospital-centric IoT applications
  Real-time location of medical equipment
  Deployment of medical staff
  Drugs management
  Reducing the charting time
11.2.3 IoT benefitting health insurance companies
11.2.4 Pharmaceutical governance
11.3 Implementation methodologies
11.3.1 Fog computing
  Architecture
    Smart IoT devices/applications
    Fog nodes
    Cloud
  Advantages
11.3.2 Edge computing
  Architecture
    IoT nodes/applications
    Edge nodes
    Cloud
  Advantages
  Empowering edge computing
11.4 Implementation models
11.4.1 Heart disease prediction
11.4.2 Healthcare IoT-based affective state mining using deep convolutional neural networks
  Electrodermal activity
  Electromyography
  Electrocardiogram
11.5 Challenges in healthcare IoT
11.5.1 Technology-oriented challenges
  Risking the patient’s life
  Incorrect results
  No planned downtime
  Need for a specialized tool to handle diversified protocols
  Remote places with a lack of infrastructure and connectivity
11.5.2 Adapting to remote healthcare and telehealth
11.5.3 Data security
11.6 Security issues and defense mechanisms and IoT
11.6.1 Security requirements in healthcare IoT
  Confidentiality
  Integrity
  Authentication
11.6.2 Attacks on IoT devices
  Sinkhole attack
  Blackhole attack
  Selective forwarding attack (grayhole attack)
  Wormhole attack
  Sybil attack
  Denial-of-service attack
11.6.3 Defensive mechanism
  Key management
  User/device authentication and authorization
  Intrusion detection
  Fault tolerance
  Blockchain technology
11.7 Covid 19—how IoT rose to the global pandemic
11.7.1 About Covid 19
11.7.2 Decoding the outbreak and identifying patient zero
11.7.3 Quarantined patient care
11.7.4 Public surveillance
11.7.5 Safeguarding hygiene
11.7.6 IoT and robotics
11.7.7 Smart disinfection and sanitation tunnel
11.7.8 Smart masks and smart medical equipment
11.8 Future of IoT in healthcare
11.8.1 IoT and 5G
11.8.2 IoT and artificial intelligence
11.9 Conclusion
Back Cover

Applications of Computational Intelligence in Multi-Disciplinary Research

Advances in Biomedical Informatics

Applications of Computational Intelligence in Multi-Disciplinary Research

Edited by

Ahmed A. Elngar Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef City, Egypt; College of Computer Information Technology, American University in the Emirates, United Arab Emirates

Rajdeep Chowdhury Department of Computer Application, JIS College of Engineering, Kalyani, West Bengal, India

Mohamed Elhoseny College of Computing and Informatics, University of Sharjah, United Arab Emirates Faculty of Computers and Information, Mansoura University, Egypt

Valentina Emilia Balas Department of Automatics and Applied Software, Faculty of Engineering, “Aurel Vlaicu” University of Arad, Arad, Romania

Series Editor Valentina Emilia Balas Department of Automatics and Applied Software, Faculty of Engineering, “Aurel Vlaicu” University of Arad, Arad, Romania

Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2022 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our website. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-12-823978-0

For information on all Academic Press publications visit our website.

Publisher: Mara Conner
Acquisitions Editor: Chris Katsaropoulos
Editorial Project Manager: Isabella C. Silva
Production Project Manager: Selvaraj Raviraj
Cover Designer: Greg Harris
Typeset by MPS Limited, Chennai, India

Contents

List of contributors

1. Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern



Prajoy Podder, M. Rubaiyat Hossain Mondal and Joarder Kamruzzaman
Abbreviations
1.1 Introduction
1.2 Related works
1.3 Iris localization
1.4 Iris normalization
1.5 The proposed feature extraction scheme
1.6 Matching results
1.7 Performance evaluation
1.8 Conclusion
References

2. A novel crypt-intelligent cryptosystem



Pratyusa Mukherjee and Chittaranjan Pradhan
2.1 Introduction
2.2 Related work
2.2.1 Machine learning contributions in cryptology
2.2.2 Genetic algorithm contributions in cryptology
2.2.3 Neural network contributions in cryptology
2.2.4 Background of DNA cryptography
2.3 Proposed methodology
2.3.1 Proposed encryption scheme
2.3.2 Proposed decryption scheme
2.4 Discussion
2.5 Conclusion and future work
References


3. Behavioral malware detection and classification using deep learning approaches


T. Poongodi, T. Lucia Agnes Beena, D. Sumathi and P. Suresh
3.1 Introduction
3.1.1 Digital forensics—malware detection
3.1.2 Malware evolution and its taxonomy
3.1.3 Machine learning techniques for malware analysis
3.1.4 Behavioral analysis of malware detection
3.2 Deep learning strategies for malware detection
3.2.1 Feature extraction and data representation
3.2.2 Static analysis
3.2.3 Dynamic analysis
3.2.4 Hybrid analysis
3.2.5 Image processing techniques
3.3 Architecture of CNNs for malware detection
3.3.1 Preprocessing
3.3.2 Classification using CNNs
3.3.3 Evaluation
3.4 Comparative analysis of CNN approaches
3.5 Challenges and future research directions
3.6 Conclusion
References

4. Optimization techniques and computational intelligence with emerging trends in cloud computing and Internet of Things



Jayesh S Vasudeva, Sakshi Bhargava and Deepak Kumar Sharma
4.1 Introduction




4.1.1 Introduction to optimization
4.1.2 Introduction to cloud computing with emphasis on fog/edge computing
4.2 Optimization techniques
4.2.1 An optimization problem
4.2.2 Solution to the optimization problem
4.3 Understanding fog/edge computing
4.3.1 What is fog?
4.3.2 Prelude to our framework
4.3.3 Our goal
4.3.4 Framework for fog computing
4.4 Optimizing fog resources
4.4.1 Defining optimization problem for fog layer resources
4.4.2 Optimization techniques used
4.5 Case studies
4.5.1 Case study I: floorplan optimization
4.5.2 Case study II: Gondwana—optimization of drinking water distribution system
4.6 Scope of advancements and future research
4.7 Conclusion
References

5. Bluetooth security architecture cryptography based on genetic codons




Asif Ikbal Mondal, Bijoy Kumar Mandal, Debnath Bhattacharyya and Tai-Hoon Kim
5.1 Introduction
5.1.1 Bluetooth
5.1.2 Bluetooth security architecture
5.2 Survey of literature
5.3 Plaintext-to-ciphertext conversion process
5.3.1 Basic workflow
5.3.2 Algorithm
5.3.3 Analysis and discussion
5.4 Conclusion
5.5 Future work
References


6. Estimation of the satellite bandwidth required for the transmission of information in supervisory control and data acquisition systems

Marius Popescu and Antoanela Naaji
Abbreviations
6.1 Introduction


6.2 Supervisory control and data acquisition systems
6.3 The very small aperture terminal networks
6.3.1 The satellite communication systems
6.3.2 Architecture very small aperture terminal networks
6.3.3 Connectivity
6.3.4 Multiple access
6.4 Algorithm for estimating the satellite bandwidth
6.4.1 Determining the bandwidth required for data transmission
6.4.2 Case study
6.4.3 Overview of some recent algorithms in detail
6.4.4 Validation of bandwidth calculations
6.5 Challenges and future work
6.6 Conclusions
References

7. Using artificial intelligence search in solving the camera placement problem



Altahir A. Altahir, Vijanth S. Asirvadam, Nor Hisham B. Hamid and Patrick Sebastian
Nomenclature
7.1 Introduction
7.1.1 The roles of visual surveillance systems
7.1.2 The camera placement problem from an artificial intelligence perspective
7.1.3 Chapter description
7.2 Background
7.3 Modeling the visual sensors
7.3.1 The sensor space modeling
7.3.2 The camera coverage modeling
7.3.3 The analysis of camera visibility
7.4 Solving the camera placement problem using artificial intelligence search
7.4.1 Generate and test algorithm
7.4.2 Uninformed search
7.4.3 Hill climbing strategy
7.5 Further discussion
7.5.1 The efficiency of the algorithms
7.5.2 The performance of the algorithms
7.6 Conclusion
References




8. Nanotechnology and applications


Kanika Dulta, Amanpreet Kaur Virk, Parveen Chauhan, Paras Bohara and Pankaj Kumar Chauhan
8.1 Introduction
8.2 Nanoscience and nanotechnology
8.3 Computational nanotechnology
8.3.1 Molecular modeling
8.3.2 Nanodevice simulation
8.3.3 Nanoinformatics
8.3.4 High-performance computing
8.3.5 Computational intelligence
8.4 Applications of computational nanotechnology
8.4.1 Nanotube-based sensors and actuators
8.4.2 Nanoinformatics for drugs
8.4.3 Molecular docking
8.4.4 Nanotoxicology
8.4.5 Other applications
8.5 Conclusion
References

9. Advances of nanotechnology in plant development and crop protection



Rokeya Akter, Md. Habibur Rahman, Md. Arifur Rahman Chowdhury, Manirujjaman Manirujjaman and Shimaa E. Elshenawy
9.1 Introduction
9.2 Agriculture’s nanofarming: a modern frontier
9.3 Synthesis of green nanoparticles and its sources
9.4 Good distribution possibilities allowed by nanoparticles: a modern sustainable agriculture portal
9.5 Nanofertilizers: a good food supply for crops
9.6 Germination, field production, and efficiency enhancement of seed nanomaterials
9.7 Plant sensory systems and responses to radical climate change influences nanomaterials
9.8 Nanosensors and nanomaterials: perturbation detection and control
9.9 Pesticide-based plant safety nanomaterials


9.10 Nanotechnology in pesticides and fertilizers
9.11 Control of plant pests
9.12 Concluding remarks
Consent for publication
Conflict of interest
References

10. A methodology for designing knowledge-based systems and applications

Hien D. Nguyen, Nhon V. Do and Vuong T. Pham
10.1 Introduction
10.2 Related work
10.3 Design the knowledge-based system
10.3.1 The architecture of a knowledge-based system
10.3.2 The process for designing the knowledge-based system
10.4 Knowledge base and inference engine of a knowledge-based system
10.4.1 Design the knowledge base
10.4.2 Design the inference engine
10.5 Applications
10.5.1 Design an intelligent problem solver for solving solid geometry at high school
10.5.2 Consultancy system for designing housing architecture
10.6 Conclusion and future work
References

11. IoT in healthcare ecosystem

Poonam Gupta and Indhra Om Prabha M


11.1 Introduction
11.2 Applications of Internet of Things in healthcare
11.2.1 Patient-centric IoT
11.2.2 Hospital-centric IoT applications
11.2.3 IoT benefitting health insurance companies
11.2.4 Pharmaceutical governance
11.3 Implementation methodologies
11.3.1 Fog computing
11.3.2 Edge computing
11.4 Implementation models
11.4.1 Heart disease prediction
11.4.2 Healthcare IoT-based affective state mining using deep convolutional neural networks














11.5 Challenges in healthcare IoT
11.5.1 Technology-oriented challenges
11.5.2 Adapting to remote healthcare and telehealth
11.5.3 Data security
11.6 Security issues and defense mechanisms and IoT
11.6.1 Security requirements in healthcare IoT
11.6.2 Attacks on IoT devices
11.6.3 Defensive mechanism
11.7 Covid 19—how IoT rose to the global pandemic
11.7.1 About Covid 19
11.7.2 Decoding the outbreak and identifying patient zero


11.7.3 Quarantined patient care
11.7.4 Public surveillance
11.7.5 Safeguarding hygiene
11.7.6 IoT and robotics
11.7.7 Smart disinfection and sanitation tunnel
11.7.8 Smart masks and smart medical equipment
11.8 Future of IoT in healthcare
11.8.1 IoT and 5G
11.8.2 IoT and artificial intelligence
11.9 Conclusion
References
Index

List of contributors

T. Lucia Agnes Beena Department of Information Technology, St. Josephs College, Tiruchirappalli, India

Nor Hisham B. Hamid Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia

Rokeya Akter Department of Pharmacy, Jagannath University, Dhaka, Bangladesh

Joarder Kamruzzaman School of Engineering and Information Technology, Federation University Australia, Churchill, VIC, Australia

Altahir A. Altahir Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia

Vijanth S. Asirvadam Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia

Sakshi Bhargava Department of Physical Sciences and Engineering, Banasthali Vidyapith, Tonk, India

Debnath Bhattacharyya Computer Science and Engineering Department, Koneru Lakshmaiah Education Foundation, Guntur, India

Paras Bohara Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India

Tai-Hoon Kim Computer Science and Engineering Department, Global Campus of Konkuk University, Chungcheongbuk-do, Korea

Deepak Kumar Sharma Department of Information Technology, Netaji Subhas University of Technology (formerly known as Netaji Subhas Institute of Technology), New Delhi, India

Indhra Om Prabha M G H Raisoni College of Engineering and Management, Pune, India

Bijoy Kumar Mandal Computer Science and Engineering Department, NSHM Knowledge Campus, Durgapur, India

Pankaj Kumar Chauhan Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India

Manirujjaman Manirujjaman Institute of Health and Biomedical Innovation (IHBI), School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia

Parveen Chauhan Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India

Asif Ikbal Mondal Computer Science and Engineering Department, Dumkal Institute of Engineering & Technology, Murshidabad, India

Md. Arifur Rahman Chowdhury Department of Pharmacy, Jagannath University, Dhaka, Bangladesh; Department of Bioactive Materials Science, Jeonbuk National University, Jeoju, South Korea

Pratyusa Mukherjee School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India

Nhon V. Do Hong Bang International University, Ho Chi Minh City, Vietnam

Antoanela Naaji Faculty of Economics, Computer Science and Engineering, “Vasile Goldis” Western University of Arad, Arad, Romania

Kanika Dulta Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India

Hien D. Nguyen University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam

Shimaa E. Elshenawy Center of Stem Cell and Regenerative Medicine, Zewail City for Science, Zewail, Egypt

Vuong T. Pham Sai Gon University, Ho Chi Minh City, Vietnam

Poonam Gupta G H Raisoni College of Engineering and Management, Pune, India

Prajoy Podder Bangladesh University of Engineering and Technology, Institute of Information and Communication Technology, Dhaka, Bangladesh




T. Poongodi School of Computing Science and Engineering, Galgotias University, Greater Noida, India

Patrick Sebastian Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia

Marius Popescu Faculty of Economics, Computer Science and Engineering, “Vasile Goldis” Western University of Arad, Arad, Romania

D. Sumathi SCOPE, VIT-AP University, Amaravati, India

Chittaranjan Pradhan School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India

Md. Habibur Rahman Department of Pharmacy, Southeast University, Dhaka, Bangladesh


M. Rubaiyat Hossain Mondal Bangladesh University of Engineering and Technology, Institute of Information and Communication Technology, Dhaka, Bangladesh

P. Suresh School of Mechanical Engineering, Galgotias University, Greater Noida, India

Jayesh S Vasudeva Department of Instrumentation and Control Engineering, Netaji Subhas University of Technology (formerly known as Netaji Subhas Institute of Technology), New Delhi, India

Amanpreet Kaur Virk Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India

Chapter 1

Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern

Prajoy Podder1, M. Rubaiyat Hossain Mondal1 and Joarder Kamruzzaman2

1 Bangladesh University of Engineering and Technology, Institute of Information and Communication Technology, Dhaka, Bangladesh; 2 School of Engineering and Information Technology, Federation University Australia, Churchill, VIC, Australia

Abbreviations

î(x, y)  2D cepstrum, with (x, y) representing quefrency coordinates
I(u, v)  2D discrete-time Fourier transform
G(x, y)  2D Gabor function
F(U, V)  2D discrete cosine transform (DCT) coefficient matrix
W  Angular frequency
σx and σy  Standard deviations of x and y
xc  The x-axis coordinate of the iris circle
yc  The y-axis coordinate of the iris circle
r  Radius of the iris circle
gc  Gray level of the center pixel, c
gp  Gray level of the neighboring pixel, p
ψ  Binary iris code obtained as XOR output
LBPp  MLBP operator


1.1 Introduction

The concern for high security and surveillance in the present world has made the identification of people an increasingly important issue. Among the various identification modes, biometrics has been considered over the last few decades for its reliable and accurate identification [15]. Commonly used biometric features include the face, fingerprint, iris, retina, hand geometry, and DNA identification. Among them, iris recognition has nowadays attracted significant interest in research and commercialization [6–15]. Iris recognition has several applications in the security systems of banks, border control, restricted areas, etc. [13]. One key part of such a system is the extraction of prominent texture information, or features, in the iris. This feature extraction method generates feature vectors or feature codes. The feature vectors of unknown images are matched against those of the stored known ones. In an iris recognition system, the matching process compares the extracted feature code of a given image with the feature codes previously stored in the database. In this way, the identity of the given iris image can be established.

A generalized iris recognition scheme is presented in Fig. 1.1. There are two major parts in Fig. 1.1: one showing feature extraction and the other describing the identification of an iris. The system starts with image acquisition and ends with matching, that is, the decision to accept or reject the identity. In between, there are two main stages: iris image preprocessing and feature extraction [3,4]. Furthermore, iris image preprocessing includes the stages of iris segmentation, normalization, and enhancement [5,11]. In the acquisition stage, cameras are used to capture images of the iris. The acquired images are then segmented. In iris segmentation, the inner and the outer boundaries are




FIGURE 1.1 Example of a typical iris recognition system: (A) process of feature extraction from an iris image; (B) identification of an iris.

detected to separate the iris from the pupil and sclera. A circular edge detection method is used to segment the iris region by finding the pixels of the image that have sharp intensity differences with neighboring pixels [3]. Estimating the center and the radius of each of the inner and outer circles refers to iris localization. After iris segmentation, any image artifacts are suppressed. Next is the normalization step in which the images are transformed from Cartesian to pseudo polar scheme. This is shown in Fig. 1.1, where boundary points are aligned at an angle. Image enhancement is then performed. As a part of feature extraction, the important features are extracted and then used to generate an iris code or template. Finally, iris recognition is performed by calculating the difference between codes with the use of a matching algorithm. For this purpose, Hamming and Euclidian are well known and also considered in this chapter [15]. The matching score is compared with a threshold to determine whether the given iris is authentic or not. Despite significant research results so far [39,11,12,14], there are several challenges in iris recognition [13,1526]. One problem is the occlusion, that is, the hiding of the iris caused by eyelashes, eyelids, specular reflection, and shadows [21]. Occlusion can introduce irrelevant parts and hide useful iris texture [21]. The movement of the eye can also cause problems in iris region segmentation and thus accurate recognition. Another issue is the computation time of iris identification. For large population sizes, the matching time of the iris can sometimes become exceedingly high for real-time applications, and the identification delay increases with the increase in the population size and the length of feature codes. It has been reported in the recent literature [13,18,22] that the existing iris recognition methods still suffer from long run times apart from other factors. 
This is particularly true when the sample size is very large and the iris images are nonideal and captured from different types of cameras. Hence, devising a method that reduces the run time of iris recognition without compromising accuracy is still an important research problem. The identification delay can be reduced by shrinking the feature vector of the iris images. Thus, this chapter focuses on reducing the feature vector, which leads to a reduction in identification delay without lowering the identification accuracy. To shorten the feature vector, the Haar wavelet is used in this work together with the modified local binary pattern (MLBP). Note that in the context of face recognition [27-30] and fingerprint identification [31], the Haar wavelet transform demonstrates an excellent recognition rate at a low computation time. In Ref. [32], the Haar wavelet is also used, although without MLBP. The main contributions of this chapter can be summarized as follows.
1. A new iris feature extraction method is proposed. This method is based on repeated Haar wavelet transformation (HWT) and MLBP, where MLBP is the local binary pattern (LBP) operation followed by an Exclusive OR (XOR). The proposed method differs from the technique described in Ref. [30], which uses single-level HWT and LBP (without XOR) in the context of face recognition.

Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern Chapter | 1


2. The efficacy of the HWT-MLBP method is evaluated using three well-known benchmark datasets: CASIA-Iris-V4 [33], CASIA-Iris-V1 [34], and the MMU iris database [35].
3. The new technique is compared with existing feature extraction methods in terms of feature vector length, false acceptance rate (FAR), and false rejection rate (FRR). It is shown here that the proposed method outperforms the existing ones in terms of feature vector length.
The remainder of this chapter is organized as follows. Section 1.2 provides a survey of the relevant literature. Section 1.3 presents iris localization, where the inner and outer boundaries are detected. Section 1.4 describes iris normalization. Section 1.5 presents the proposed approach for encoding the iris features. Section 1.6 describes the iris recognition process based on matching scores. The effectiveness of the new method is evaluated in Section 1.7. Finally, Section 1.8 summarizes the work and outlines the remaining challenges and future work.


1.2 Related works

A number of research papers describe iris feature extraction techniques; the most relevant ones are discussed in the following. Ma et al. [3] applied a bank of spatial filters to acquire local details of the iris. These spatial filters generate discriminating texture features for an iris image based on the characteristics of the iris. Ma et al. [4] considered a bank of circular symmetric filters for iris feature extraction. These filters [4] are modulated by a circular symmetric sinusoidal function, which is different from the Gabor filter modulated by an oriented sinusoidal function. Monro et al. [5] used the discrete cosine transform (DCT) for iris recognition. Daugman [6] introduced the idea of using a 2D Gabor wavelet filter for extracting features of an iris image. Furthermore, Masek et al. [9] used 1D and 2D Log-Gabor filters for feature extraction. Li et al. [8] used a convolutional neural network (CNN) algorithm, a form of deep learning, to extract iris features. Umer et al. [12] used a novel texture code defined over a small region at each pixel. This texture code was developed with vector ordering based on the principal component of the texture vector space. Soliman et al. [11] considered feature extraction using the Gabor filter, where the original Gabor features were masked via a random projection scheme. The masking was performed to increase the level of security. In this scheme, the effects of eyelids and eyelashes were removed. An iris feature extraction method using wavelet-based 2D mel-cepstrum was proposed in Ref. [14], where the cepstrum of a signal is the inverse Fourier transform of the logarithm of the estimated signal spectrum. The 2D cepstrum of an image can be defined by the following expression:

î(x, y) = IDFT(log(|I(u, v)|²))

where î(x, y) is the 2D cepstrum with (x, y) representing quefrency coordinates, IDFT represents the inverse discrete Fourier transform, and I(u, v) is the 2D discrete-time Fourier transform of the image.
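As an illustration, the 2D cepstrum defined above can be computed with NumPy's FFT routines. This is a minimal sketch, not the implementation of Ref. [14]; the small epsilon guarding against log(0) is our addition.

```python
import numpy as np

def cepstrum_2d(img):
    """2D cepstrum: inverse DFT of the log power spectrum of the image."""
    spectrum = np.fft.fft2(img.astype(float))
    log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)  # epsilon avoids log(0)
    # The cepstrum of a real image is real; discard numerical imaginary residue.
    return np.real(np.fft.ifft2(log_power))
```

The result lives in quefrency coordinates, so peaks away from the origin indicate periodic structure in the log spectrum of the iris texture.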
This scheme applied the Cohen-Daubechies-Feauveau 9/7 filter bank for extracting features. In the wavelet cepstrum, nonuniform weights are assigned to the frequency bins. In this way, the high-frequency components of the iris image are emphasized, resulting in greater recognition reliability. Furthermore, this wavelet cepstrum method helps to reduce the feature set. Barpanda et al. [15] used a tunable filter bank to extract region-based iris features. These filters were used for recognizing noncooperative images instead of high-quality images collected in cooperative scenarios. The filters in this filter bank were based on a halfband polynomial of 14th order, where the filter coefficients were extracted from the polynomial domain. To apply the filter bank, the iris template was divided into six equispaced parts and the features were extracted from all the parts except the second one, which mainly contains artifacts. Betancourt et al. [13] proposed a robust key points-based feature extraction method. To identify distinctive key points, three detectors, namely the Harris-Laplace, Hessian-Laplace, and Fast-Hessian detectors, were used. This method is suitable for iris recognition under variable image quality conditions. For iris feature extraction, Sahua et al. [22] used the phase intensive local pattern (PILP), which consists of density-based spatial clustering and key-point reduction. This technique groups closely placed key points into a single key point, leading to high-speed matching. Jamaludin et al. [18] used a 1D Log-Gabor filter and considered the subiris region for feature extraction. This filter has a symmetrical frequency response on the log axis. In this case, only the lower iris regions, which are free from noise as well as occlusions, are considered. In Ref. [17], a combination of the discrete wavelet transform (DWT) and DCT was used for the extraction of iris features. Firstly, DWT was performed, with the output of this stage being in the spatial domain.
Next, DCT was performed to transform the spatial domain signal to the frequency domain and to obtain better discriminatory features. Another feature extraction method is the discrete dyadic wavelet transform reported in Ref. [16]. In dyadic wavelet transform, the


Applications of Computational Intelligence in Multi-Disciplinary Research

decomposition at each level is done such that the bandwidth of the output signal is half that of the input. In Ref. [26], a PILP technique is used for feature extraction, yielding a feature vector of size 1 × 128. This PILP method has four stages: key-point detection via phase-intensive patterns, removal of edge features, computation of an oriented histogram, and formation of the feature vector. Iris features have also been extracted using 1D DCT and the relational measure (RM), where RM encodes the difference in intensity levels of local regions of iris images [21]. The matching scores of these two approaches were fused using a weighted average; this score-level fusion compensates for images that are rejected by one method but accepted by the other [21]. Another way of extracting feature vectors from iris images is the use of linear predictive coding coefficients (LPCC) and linear discriminant analysis (LDA) [24]. Llano et al. [19] used a 2D Gabor filter for feature extraction. Before applying this filter, a fusion of three different algorithms was performed at the segmentation level (FSL) of the iris images to improve their textural information. Oktiana et al. [36] proposed an iris feature extraction system using Gradientface-based normalization (GRF), where GRF uses an image gradient to remove variations in the illumination level. Furthermore, this work concatenated the GRF with a Gabor filter, a difference of Gaussian (DoG) filter, binary statistical image features (BSIF), and LBP for iris feature extraction in a cross-spectral system. Shuai et al. [37] proposed an iris feature extraction method based on multiple-source feature fusion, performed with a Gaussian smoothing filter and texture histogram equalization.
Besides, there have been some recent studies in the field of iris recognition [38-49], with some focusing on iris feature extraction methods [38,40-42,45,49] and some on iris recognition tasks [39,44,46,48]. The 2D Gabor function can be described mathematically by the following expression:

G(x, y) = (1 / (2π σx σy)) exp[−(1/2)(x²/σx² + y²/σy²)] exp(j 2π W x)

and the 2D DCT can be defined as:

F(U, V) = (2 / √(MN)) Σ_{X=0}^{M−1} Σ_{Y=0}^{N−1} f(X, Y) cos[(2X + 1)Uπ / (2M)] cos[(2Y + 1)Vπ / (2N)]

where f(X, Y) is the image space matrix; (X, Y) is the position of the current image pixel; F(U, V) (U, V = 1, 2, …, M − 1) is the transform coefficient matrix; W is the angular frequency; and σx and σy are the standard deviations along x and y, respectively. The concepts of machine learning (ML)-driven methods, for example neural networks and genetic algorithms, have been reported [46], while the idea of deep CNNs has also been applied [40]. Moreover, researchers are now investigating the effectiveness of multimodal biometric recognition systems [43,47]. A comparative summary of some of the most relevant works on iris feature extraction is shown in Table 1.1. It can be seen that several algorithms exist, and they are applied to different datasets, achieving varying performance results.
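The two definitions above translate directly into code. The following is an illustrative NumPy sketch; the grid size and function names are our choices, and the DCT is computed naively from the cosine-sum definition rather than with a fast transform.

```python
import numpy as np

def gabor_2d(size, sigma_x, sigma_y, W):
    """Sample the 2D Gabor function on a size x size grid centred at 0."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    envelope = np.exp(-0.5 * (x**2 / sigma_x**2 + y**2 / sigma_y**2))
    carrier = np.exp(2j * np.pi * W * x)  # sinusoidal modulation along x
    return envelope * carrier / (2 * np.pi * sigma_x * sigma_y)

def dct2(f):
    """2D DCT computed directly from the cosine-sum definition above."""
    M, N = f.shape
    X = np.arange(M)[:, None]
    Y = np.arange(N)[None, :]
    F = np.empty((M, N))
    for U in range(M):
        for V in range(N):
            basis = (np.cos((2 * X + 1) * U * np.pi / (2 * M)) *
                     np.cos((2 * Y + 1) * V * np.pi / (2 * N)))
            F[U, V] = (2 / np.sqrt(M * N)) * np.sum(f * basis)
    return F
```

For real images the Gabor response is usually taken as the magnitude (or the signs of the real and imaginary parts, as in Daugman's phase coding) of the convolution with this complex kernel.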


1.3 Iris localization

This section discusses the iris localization step, which employs the circular Hough transform, a method capable of properly detecting circles in images. The Hough transform searches for a triplet of parameters (xc, yc, r) determined by the points (xi, yi), where xc, yc, and r represent the x-axis coordinate, y-axis coordinate, and the radius of the iris circle, respectively. In this case, (xi, yi) represents the coordinates of any of the i points on the circle. With this consideration, the Hough transform can be defined as follows:

H(xc, yc, r) = Σ_{i=1}^{n} h(xi, yi, xc, yc, r)

In this regard, edge points are detected first. For each edge point, a circle with the desired radius is drawn centered at that point. Next, an accumulator matrix is formed to track the intersection points of these circles in the Hough space, where the accumulator counts the circles passing through each cell. The largest number in the Hough space points to the center of the image circles. Several circular filters with different radius values are considered, and the best one is selected.
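The voting procedure just described can be sketched as follows. This is a toy implementation: the angular sampling density, the array layout, and the function name are our assumptions.

```python
import numpy as np

def hough_circles(edge_points, shape, radii):
    """Vote in (xc, yc, r) space: each edge point votes for every centre
    that would place it on a circle of radius r."""
    # Accumulator: first two axes are the centre coordinates, third is radius.
    H = np.zeros((shape[0], shape[1], len(radii)), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (x, y) in edge_points:
        for k, r in enumerate(radii):
            # Candidate centres lie on a circle of radius r around the point.
            xc = np.round(x - r * np.cos(thetas)).astype(int)
            yc = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < shape[0]) & (yc >= 0) & (yc < shape[1])
            np.add.at(H, (xc[ok], yc[ok], k), 1)  # accumulate votes
    return H

# The accumulator maximum gives the circle parameters:
# xc, yc, k = np.unravel_index(H.argmax(), H.shape)
```

In practice the pupil and limbic boundaries are found with two such searches over different radius ranges.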

TABLE 1.1 Summary of literature review.

Adopted technique | Remarks | Dataset (where reported)
Spatial filters constructed based on observations | Features are extracted only from the upper portion of the normalized iris region, as it provides useful texture information. The feature vector is large, of size 1 × 1536. |
Circular symmetric filters | About 75% of the top-most part of the unwrapped iris images is used for texture information. The local variation of the iris texture is not considered. |
Patch-coding technique for extracting features from the normalized iris, with features derived using the fast Fourier transform | Low complexity with high accuracy; the dimensionality of the feature vector is 1 × 2343. However, nonideal images are not considered. |
2D Gabor filter | The dimensionality of the feature vector is 1 × 2048. |
1D and 2D Log-Gabor filters | Cannot produce features of different frequencies; the size of the iris template is 1 × 4800. |
Deep learning | A CNN is used to extract iris features, which are then used for image encryption. |
Texture code co-occurrence matrix | Feature vector size of 1 × 400. Uses only the effective portion of the iris images to avoid occlusion caused by eyelashes and eyelids. | UPOL, CASIA-Iris-V3-Interval, MMU1, and IITD
1D Gabor filter with masked Gabor features | Masks the original Gabor features to improve security while excluding the effects of eyelids and eyelashes; only the upper half of the normalized iris portion is considered. |
2D kernel and hybrid MLPNN-PSO algorithm | Feature extraction is performed on a small sample of 140 images at an accuracy of 95.36%; 1000 iterations are performed, leading to high computational time. |
2D wavelet cepstrum technique for feature extraction | False acceptance rate of 10.45%; recognition accuracy of 89.93%. |
Tunable filter bank based on a halfband polynomial of 14th order | False acceptance rate of 8.45%; recognition accuracy of 91.65%. |
Key points-based feature extraction method | Considers only salient key points in the whole region; the feature extraction stage is time consuming. | CASIA-Iris-V4-Interval, MMU 2, and UBIRIS 1
Low-density parity check and SHA-512 | Comparatively high false rejection rate. |
Density-based spatial clustering and key-point reduction applied to PILP | Feature extraction and feature vector reduction require postprocessing, leading to additional time consumption. |
Subiris technique | Does not extract features of the unoccluded upper part of the iris region. |
Combined DWT and DCT for feature extraction | Provides good performance only for low-contrast images; a recognition rate of 88.5% was achieved on the Phoenix database. | Phoenix and IITD iris databases
Discrete dyadic wavelet transform | Iris images of only 10 people are used, and a feature vector of size 1 × 256 is achieved; results need to be validated with more subjects. |
Local feature based on phase-intensive patterns | Key-point detection via phase-intensive patterns; obtains a feature vector of size 1 × 128. |
DCT and RM | Matching is performed using the dissimilarity scores of DCT and RM with the Hamming distance metric, which compensates for images rejected by one method but accepted by the other. | CASIA-Iris-V4 Interval, Lamp, and self-collected IITK
LPCC and LDA | The method has high complexity and, in the case of LPCC, results in a feature vector of size 1 × 546. |
Textural information development and exploration | Three stages: quality evaluation, automatic segmentation, and fusion at the segmentation level; rejects low-quality images. The obtained feature vector is of size 1 × 2048. | MBGC-V2, CASIA-Iris-V3, CASIA-Iris-V4, and UBIRIS v1 (for iris image)
Gabor filter, DoG filter, BSIF, and LBP | Feature extraction via the fusion of GRF with a Gabor filter, a DoG filter, BSIF, and LBP; Hamming distance is used for matching. | Hong Kong Polytechnic University Cross-Spectral Iris Images Database
Convolutional neural network | Feature fusion achieved using a Gaussian filter and a texture histogram equalizer. | JLU iris library


1.4 Iris normalization

This section describes the iris normalization step. The sizes of acquired iris images vary because of variations in the distance from the camera, the angle of image capture, the illumination level, and so on. For the purpose of extracting image features, the iris image has to be segmented, and the resultant segments must not be sensitive to the orientation, size, and position of the patterns. For this reason, after segmentation, the resultant element is transformed from Cartesian to polar coordinates. In other words, the circular iris region is transformed into a region of fixed dimensions. Fig. 1.2 illustrates the normalization of iris images from the three datasets. For each dataset, one original input image is shown, followed by its inner and outer boundary detection, its segmented version, and finally its normalized version. Fig. 1.2A depicts Daugman's rubber sheet model for iris recognition. Three original images from the three datasets are shown in Fig. 1.2B, F, and J. First, Fig. 1.2B is an original image from the CASIA-Iris-V4 dataset [33]; Fig. 1.2C-E represent its inner and outer boundaries, segmented version, and normalized version, respectively. Second, Fig. 1.2F is an original image from the CASIA-Iris-V1 dataset [34]; Fig. 1.2G-I represent its inner and outer boundaries, segmented version, and normalized version, respectively. Third, Fig. 1.2J is an original image from the MMU iris database [35]; Fig. 1.2K-M represent its inner and outer boundaries, segmented version, and normalized version, respectively.
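Daugman's rubber sheet mapping described above can be sketched with nearest-neighbor sampling. The 64 × 512 output size matches the template size used later in this chapter; the concentric-circle parameterization and everything else in this snippet are our assumptions, not the authors' code.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, out_h=64, out_w=512):
    """Daugman-style unwrapping: sample along rays from the pupil boundary
    to the iris boundary, producing a fixed-size rectangular strip."""
    xp, yp, rp = pupil   # inner (pupil) circle: centre and radius
    xi, yi, ri = iris    # outer (iris) circle: centre and radius
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    rows = np.linspace(0.0, 1.0, out_h)  # 0 = pupil edge, 1 = iris edge
    out = np.zeros((out_h, out_w), dtype=img.dtype)
    for j, t in enumerate(theta):
        # Boundary points on the inner and outer circles along this ray.
        x0, y0 = xp + rp * np.cos(t), yp + rp * np.sin(t)
        x1, y1 = xi + ri * np.cos(t), yi + ri * np.sin(t)
        xs = np.clip(np.round(x0 + rows * (x1 - x0)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(y0 + rows * (y1 - y0)).astype(int), 0, img.shape[0] - 1)
        out[:, j] = img[ys, xs]
    return out
```

The output strip is invariant to iris size and pupil dilation, since every iris is resampled onto the same pseudo-polar grid.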



FIGURE 1.2 Illustrations of (A) Daugman’s rubber sheet model; (B, F, J) original input images; (C, G, K) images with inner and outer boundary detection; (D, H, L) segmented iris regions, and (E, I, M) iris images after normalization.


1.5 The proposed feature extraction scheme

This section describes the proposed iris feature extraction method. Fig. 1.3 shows the block diagram of the proposed three-level HWT and MLBP scheme. Decomposing the image three times with HWT reduces the feature size without significant loss in image quality or important attributes. The use of MLBP further reduces the feature vector size without loss of image attributes. Fig. 1.4 shows the three-level HWT. At each level of HWT, the input image is divided into four output images, denoted as the horizontal detail (HL), vertical detail (LH), diagonal detail (HH), and approximation (LL) images. The LL subimage, also known as the LL subband, contains the significant information of the original image. In other words, the LL subband is a coarse approximation of the image and does not contain high-frequency information. The three-level HWT algorithm is given next.

Algorithm 1: HWT
Input: Normalized iris image



FIGURE 1.3 Block diagram of the proposed approach for iris feature extraction.

FIGURE 1.4 Three-level HWT.

Output: Approximation part of level three
Main process:
Step 1: Apply first-level HWT to the normalized iris image to generate its wavelet coefficients.
Step 2: Apply second-level HWT to the approximation part obtained from Step 1 to generate its wavelet coefficients.
Step 3: Apply third-level HWT to the approximation part obtained from Step 2 to generate its wavelet coefficients.
Step 4: Take the level-three approximation part obtained from Step 3.

The main idea of using HWT is that wavelet decomposition transforms a detailed image into approximation images, and the approximation parts contain the major portion of the energy of the image. The HWT is executed repeatedly to shrink the information size: the three-level decomposition produces a reduced characteristic region with little loss, as shown in Fig. 1.5. Most of the information of the iris image is contained in the extracted LL (low-frequency) region of the multidivided iris image, while the other regions carry less information, as indicated by their low intensity (dark) levels. Fig. 1.6 illustrates the size of each level of the three-level HWT. The application of level-1 HWT to the normalized image of size 64 × 512 results in the wavelet



FIGURE 1.5 Three-level wavelet decomposition of normalized iris.

FIGURE 1.6 Three-level HWT with the size of each level.

coefficients of the subbands LL1, LH1, HL1, and HH1; the approximation part of level 1 (LL1) is of size 32 × 256. Next, level-2 HWT is applied to LL1, generating the wavelet coefficients LL2, LH2, HL2, and HH2; the approximation part of level 2 (LL2) is of size 16 × 128. After that, level-3 HWT is applied to LL2 to generate the wavelet coefficients LL3, LH3, HL3, and HH3; the approximation part of level 3 (LL3) is of size 8 × 64. Hence, a major distinctive region, LL3, is obtained by performing the wavelet transformation three times, and this LL3 region is then used for the MLBP step.

Now consider the MLBP operation [25], which generates robust binary features at low computational complexity. MLBP labels each pixel based on its neighboring pixels and a given threshold, and produces its output in binary format; this binary code describes the local texture pattern. Note that MLBP is an LBP followed by an XOR operation. For a center pixel c and neighboring pixels p within a neighborhood of P pixels, the LBP part of the MLBP operation can be expressed as follows:

LBP_P = Σ_{p=0}^{P−1} S(g_p − g_c) · 2^p    (1.4)

where g_c is the gray level of c and g_p is the gray level of the neighboring pixels p. Moreover, S(x) in (1.4) refers to the sign function defined as

S(x) = 1 if x ≥ 0; 0 otherwise    (1.5)

Applications of Computational Intelligence in Multi-Disciplinary Research

FIGURE 1.7 Center element of a 3 × 3 pixel image.

FIGURE 1.8 MLBP operation of a 3 × 3 subregion: (A) the neighborhood of a pixel within the image, (B) the threshold version of the neighborhood, and (C) the MLBP pattern where the middle pixel has been computed.

Next, the center pixel value is generated by applying the XOR operation to the thresholded values s_p. This results in the following expression:

ψ⊕(s_p) = s_0 ⊕ s_1 ⊕ … ⊕ s_{P−1}    (1.6)

where ⊕ denotes the XOR operator and ψ⊕(s_p) is the binary iris code obtained as the XOR output. Since XOR is commutative, this can be performed by circularly shifting over s_p in the clockwise or anticlockwise direction. The XOR operation is then used to reduce the template size from 8 × 64 to 1 × 64: XOR is computed along each column vector, so the eight-row iris signature is reduced to a single row. Figs. 1.7 and 1.8 describe the MLBP operation. Fig. 1.7 shows the center pixel in a 3 × 3 neighborhood, while Fig. 1.8 illustrates the computation of LBP_{8,1} with XOR for a single pixel.

Algorithm 2: Feature encoding using the proposed MLBP
Input: Level-three approximation part of the normalized image
Output: Binary sequence of the normalized iris image
Main process:
Step 1: Read the intensity values of the level-three approximation part of the normalized image.
Step 2: Convert the RGB image to grayscale form.
Step 3: Resize the image if required and store the size [M, N] of the image.
Step 4: Divide the image into eight segments.
Step 5: For each image segment, apply a 3 × 3 kernel.
Step 6: For i = 1 : P  // P = 8 for a 3 × 3 kernel
Step 7: Compute D(i) = g_p(i) − g_c(i)  // g_p is the gray level of the neighboring pixels and g_c is that of the center pixel
Step 8: If D(i) < 0, set S(i) = 0; else set S(i) = 1.
Step 9: Compute LBP_p = XOR(S(i))  // apply the XOR operation to get the binary mask
Step 10: Place the binary output of the XOR operation in the center pixel.
Step 11: Move the kernel to obtain a binary template.
Step 12: Apply the XOR operation across the columns.

Thus, in MLBP, the first LBP operation extracts distinctive features to generate a unique iris code, which is then reduced from 8 × 64 features to 1 × 64 by applying the XOR operation.
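Algorithms 1 and 2 together can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the averaging normalization of the Haar step, the edge padding that keeps the 8 × 64 size before the 3 × 3 window is applied, and the function names are our assumptions.

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform; returns only the LL
    (approximation) part, halving both dimensions. The detail subbands
    are not needed for the feature vector."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 4.0  # 2x2 averaging (an assumed normalization)

def haar_three_level(img):
    """Algorithm 1: apply HWT three times, keeping the LL part each time,
    e.g. 64x512 -> 32x256 -> 16x128 -> 8x64."""
    ll = img.astype(float)
    for _ in range(3):
        ll = haar_level(ll)
    return ll

def mlbp_bits(img):
    """Per-pixel MLBP: XOR (parity) of the eight thresholded neighbour
    bits S(g_p - g_c) in a 3x3 window (Steps 5-10 of Algorithm 2)."""
    g = np.pad(img, 1, mode='edge')   # edge padding keeps the input size
    c = g[1:-1, 1:-1]
    bits = np.zeros(c.shape, dtype=int)
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]:
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        bits ^= (n >= c).astype(int)  # running XOR of the sign bits S(.)
    return bits

def iris_code(normalized_iris):
    """Full pipeline: three-level HWT, MLBP, then column-wise XOR of the
    eight rows (Step 12), reducing 8x64 down to a 1x64 binary code."""
    ll3 = haar_three_level(normalized_iris)                 # 8 x 64
    return np.bitwise_xor.reduce(mlbp_bits(ll3), axis=0)    # 1 x 64
```

Applied to a 64 × 512 normalized iris strip, this yields the 64-bit binary code that is matched in the next section.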




1.6 Matching results

This section presents the matching process of iris codes used for recognition. To measure the closeness of an unknown iris code to a template iris code, the distance between the two is calculated; this distance defines the degree of matching between two iris codes. For this purpose, the Euclidean and Hamming distances are considered. The Euclidean distance is calculated as

ED = √((X2 − X1)² + (Y2 − Y1)²)    (1.7)

where ED is the Euclidean distance between the two coordinate points (X1, Y1) and (X2, Y2). For the Hamming distance between two iris codes, the number of unmatched bits is divided by the number of bits used for comparison. The main operation in the Hamming distance is an XOR, which computes the disagreement between two input bits. If P and Q are two bitwise templates of iris images and N is the number of bits in each iris code, then the Hamming distance can be expressed as

HD = (1/N) Σ_{j=1}^{N} P_j ⊕ Q_j

where HD denotes the Hamming distance. According to this definition, HD = 0 indicates complete similarity between two iris codes, while HD = 1 means total dissimilarity. In practice, two iris codes are assumed to be the same if their Hamming distance is lower than a threshold. Similar to the work in Ref. [6], this chapter considers iris templates with a Hamming distance below 0.32 to be identical.
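A minimal sketch of this matching step, using the normalized Hamming distance and the 0.32 threshold adopted in this chapter (the function names are ours):

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes."""
    a = np.asarray(code_a, dtype=int)
    b = np.asarray(code_b, dtype=int)
    return np.count_nonzero(a ^ b) / a.size  # XOR counts the mismatches

def is_match(code_a, code_b, threshold=0.32):
    """Accept the pair as the same iris when HD falls below the threshold."""
    return hamming_distance(code_a, code_b) < threshold
```

In a full system the probe code is usually also compared at several circular shifts to compensate for eye rotation, and the minimum distance is kept.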


1.7 Performance evaluation

This section discusses the experimental results of the proposed method. For the experimentation, images are obtained from three different datasets [33-35]. Figs. 1.9-1.11 correspond to images from the CASIA-IRIS-V4 [33], CASIA-IRIS-V1 [34], and MMU [35] datasets, respectively. The datasets are described in Table 1.2. The CASIA-IRIS-V4 dataset consists of 2639 images of 249 subjects. The CASIA-IRIS-V1 dataset has 756 iris images from 108 eyes of 54 subjects, while the MMU dataset consists of 450 images of 45 subjects. First, an original iris image from Ref. [33] is shown in Fig. 1.9A, and Fig. 1.9B is the corresponding template after the application of LBP to LL3 of size 8 × 64. For clarity, Fig. 1.9C shows a larger view of the final template in Fig. 1.9B. The template is further reduced to a size of 1 × 64 by applying the XOR operation across the column vectors. Second, Fig. 1.10A-C show another original iris image from Ref. [34], its corresponding template after applying LBP, and a larger view of the template, respectively. Third, Fig. 1.11A-C show the same for an original iris image from Ref. [35]. The performance of the proposed method is evaluated on the three datasets mentioned above. For each dataset, 90% (rounded up to the next integer) of the images are used for training, while the remaining are used for testing. Table 1.3 presents the successful recognition rate of the proposed method using the Hamming and Euclidean distances. The results are obtained only for the testing iris images. Table 1.3 shows that for the CASIA-IRIS-V1 dataset, the proposed algorithm obtains an average correct recognition rate of 98.30% with the Hamming distance and 97.60% with the Euclidean distance. The recognition rates for the other two datasets are slightly lower, as shown in Table 1.3.
Next, the proposed method is compared with the existing techniques reported in the literature [3,4,6,16,24]. Table 1.4 presents the comparative results of the proposed method against the previous ones. For the proposed method,

FIGURE 1.9 (A) An original iris image from the CASIA-IRIS-V4 dataset [33], (B) the final generated iris template, and (C) a larger view of the binarized template.



FIGURE 1.10 (A) An original iris image from the CASIA-IRISV1 dataset [34], (B) the final generated iris template, and (C) a larger view of the binarized template.

FIGURE 1.11 (A) An original iris image from the MMU dataset [35], (B) the final generated iris template, and (C) a larger view of the binarized template.

TABLE 1.2 Description of the datasets used in this work.

Dataset | Images | Subjects | Sensor
CASIA-IRIS-V4 (CASIA-Iris-Interval) [33] | 2639 | 249 | CASIA close-up iris camera
CASIA-IRIS-V1 [34] | 756 | 54 | CASIA close-up iris camera
MMU (MMU 1) [35] | 450 | 45 | LG EOU 2200
TABLE 1.3 Success rate of the proposed method.

Dataset | Hamming distance | Euclidean distance
CASIA-IRIS-V1 | 98.30% | 97.60%
the best results, obtained with the Hamming distance on the CASIA-IRIS-V1 dataset, are taken into consideration. In this case, a threshold value is set for computing the FAR, which is the rate at which a biometric security system incorrectly accepts an unauthorized user, and the FRR, which is the rate at which the system incorrectly rejects an authorized user. In other words, the FAR is the ratio of the number of false acceptances to the number of impostor verification attempts, whereas the FRR is the ratio of the number of false rejections to the number of enrollee verification attempts. The proposed method and the works in Refs. [3,24] use the same CASIA-IRIS-V1 dataset, which contains BMP images with a resolution of 320 × 280. From Table 1.4, it can be seen that the proposed method has a FAR of 0.003%, an FRR of 0.80%, and an average accuracy of 98.30%. The feature vector length, and hence the computation time, of the proposed scheme is significantly lower than those of the existing methods reported in Refs. [3,4,6,16,24]. Among the research works listed in Table 1.4, the proposed method has the second lowest FAR; the work in Ref. [24] reports a FAR of 0%, but its generated feature vector is over 8.5 times longer than ours.

TABLE 1.4 Comparison of results with the existing methods.

Reference | Feature vector length | FAR (%) | FRR (%) | Avg. accuracy (%) | Dataset and image resolution
[3] | 1 × 1536 | | | | BMP format with resolution 320 × 280
[4] | 1 × 384 | | | | CASIA and modified database; resolution not reported
[6] | 1 × 2048 | | | | Ophthalmology Associates of Connecticut; images in RS170, VHS (NTSC), and SVHS (NTSC) formats digitized at 480 × 640, monochrome, 8 bits/pixel
[16] | 1 × 256 | | | | Database of both eyes of 10 people with at least 10 photos of each eye; resolution not reported
[24] | 1 × 546 | 0 | | | BMP format with resolution 320 × 280
Proposed method (Hamming distance) | 1 × 64 | 0.003 | 0.80 | 98.30 | BMP format with resolution 320 × 280
The extremely low FAR (0.003%) attained by our method indicates its strong security capability, as impostor irises are denied access. Compared with the method proposed in Ref. [3], a highly cited work in this domain, our method performs better on all three performance metrics with a greatly reduced feature vector length (1/24th of the former), making it highly suitable for real-time person identification. The method in Ref. [16], which produces the feature vector length closest to ours (256 bits vs. 64 bits), shows degraded performance on all three metrics. The average accuracy of the proposed scheme is slightly lower than those reported in Refs. [4,6,24]. Although Ref. [6] reports the highest overall accuracy among the methods listed in the table, its feature vector (2048 bits) is by far the longest of all the methods, and its reported FAR is higher than ours. Similarly, the method in Ref. [4], though more accurate, suffers from the highest FAR and FRR of all the methods, including ours. Considering all the performance metrics and the very small feature vector, the proposed method is highly attractive for real-time applications.
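The FAR and FRR definitions used above can be made concrete with a small helper. The score lists here are hypothetical; in this chapter, the scores would be Hamming distances compared against the acceptance threshold.

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR: fraction of impostor attempts wrongly accepted (score < threshold).
    FRR: fraction of genuine attempts wrongly rejected (score >= threshold).
    Scores are distances, so smaller means a closer match."""
    far = sum(s < threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s >= threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Sweeping the threshold over the observed scores traces the FAR/FRR trade-off curve from which an operating point such as 0.32 is chosen.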



1.8 Conclusion

Iris feature extraction is an important aspect of many modern security systems; hence, an efficient and fast approach is important for iris recognition. This chapter proposes a new hybrid HWT- and MLBP-based technique that reduces the feature size so that iris images can be matched faster. HWT extracts the most prominent features of the iris, reducing the template size. In this work, a three-level HWT is applied to extract the region containing the major information of the iris image, and the resulting three-level approximation part is taken as the major characteristics region. For instance, the repeated HWT converts a 64 × 512 normalized iris image into an approximation image of size 8 × 64, which becomes a template of size 1 × 64 after the application of MLBP and XOR. The proposed hybrid HWT and MLBP algorithm is applied to three different iris datasets. Results show that the proposed method reduces the feature length several-fold compared to the existing methods reported in the literature, which in turn reduces computation time. This comes at the cost of only a 1% reduction in accuracy compared with some previously proposed methods, while still producing better FAR and FRR values than several of them. Hence, the proposed method is highly attractive for developing a fast and reliable iris recognition system. The results of the proposed hybrid method come with some caveats. This work considers only three datasets, so the results may vary when applied to very large datasets. The results may also vary when noisy, blurred, or distorted iris images are taken into consideration. In the future, iris datasets with more images have to be developed and


Applications of Computational Intelligence in Multi-Disciplinary Research

the HWT-MLBP method has to be evaluated on those datasets. The complexity of the proposed method should also be evaluated and compared with existing techniques. Different ML and deep learning algorithms can also be used effectively in the overall iris recognition process. Finally, research is needed to develop effective biometric systems by combining iris recognition with other biometric features.
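The template-size reduction described above can be sketched as repeated extraction of the Haar approximation (low-low) band, where each level averages non-overlapping 2 × 2 blocks and halves both dimensions. The snippet below is a minimal illustration with random data standing in for a real normalized iris image; the MLBP and XOR steps that would compress the 8 × 64 approximation to a 1 × 64 template are omitted:

```python
import numpy as np

def haar_approximation(img, levels=3):
    """Keep only the Haar approximation (LL) band at each level.

    Each level averages non-overlapping 2x2 blocks, which is the
    low-low sub-band of a single Haar decomposition step.
    """
    out = np.asarray(img, dtype=float)
    for _ in range(levels):
        h, w = out.shape
        out = out.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return out

norm_iris = np.random.rand(64, 512)      # stand-in for a normalized iris image
approx = haar_approximation(norm_iris)   # three levels: 64 x 512 -> 8 x 64
```

Three levels shrink a 64 × 512 image to 8 × 64, matching the sizes quoted in the chapter.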

References

[1] J. Daugman, C. Downing, Epigenetic randomness, complexity and singularity of human iris patterns, Proceedings of the Royal Society of London. Series B: Biological Sciences 268 (1477) (2001) 1737-1740.
[2] L. Flom, A. Safir, Iris recognition system, Google Patents, 1987.
[3] L. Ma, T. Tan, Y. Wang, D. Zhang, Personal identification based on iris texture analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (12) (2003) 1519-1533.
[4] L. Ma, Y. Wang, T. Tan, Iris recognition using circular symmetric filters, Object Recognition Supported by User Interaction for Service Robots 2 (2002) 414-417.
[5] D.M. Monro, S. Rakshit, D. Zhang, DCT-based iris recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (4) (2007) 586-595.
[6] J.G. Daugman, High confidence visual recognition of persons by a test of statistical independence, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (11) (1993) 1148-1161.
[7] N. Ahmadi, G. Akbarizadeh, Hybrid robust iris recognition approach using iris image pre-processing, two-dimensional Gabor features and multi-layer perceptron neural network/PSO, IET Biometrics 7 (2) (2017) 153-162.
[8] X. Li, Y. Jiang, M. Chen, F. Li, Research on iris image encryption based on deep learning, EURASIP Journal on Image and Video Processing 2018 (1) (2018) 1-10.
[9] L. Masek, Recognition of human iris patterns for biometric identification, Bachelor's Thesis, University of Western Australia, 2003.
[10] H. Proenca, L.A. Alexandre, Toward noncooperative iris recognition: a classification approach using multiple signatures, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (4) (2007) 607-612.
[11] R.F. Soliman, M. Amin, F.E. Abd El-Samie, A novel cancelable iris recognition approach, in: International Conference on Innovative Computing and Communications, Springer, Singapore, 2019, pp. 359-368.
[12] S. Umer, B.C. Dhara, B. Chanda, Texture code matrix-based multi-instance iris recognition, Pattern Analysis and Applications 19 (1) (2016) 283-295.
[13] Y. Alvarez-Betancourt, M. Garcia-Silvente, A keypoints-based feature extraction method for iris recognition under variable image quality conditions, Knowledge-Based Systems 92 (2016) 169-182.
[14] S.S. Barpanda, B. Majhi, P.K. Sa, A.K. Sangaiah, S. Bakshi, Iris feature extraction through wavelet mel-frequency cepstrum coefficients, Optics & Laser Technology 110 (2019) 13-23.
[15] S.S. Barpanda, P.K. Sa, O. Marques, B. Majhi, S. Bakshi, Iris recognition with tunable filter bank based feature, Multimedia Tools and Applications 77 (6) (2018) 7637-7674.
[16] D. de Martin-Roche, C. Sanchez-Avila, R. Sanchez-Reillo, Iris recognition for biometric identification using dyadic wavelet transform zero-crossing, in: Proceedings IEEE 35th Annual 2001 International Carnahan Conference on Security Technology (Cat. No. 01CH37186), 2001, pp. 272-277.
[17] S.S. Dhage, S.S. Hegde, K. Manikantan, S. Ramachandran, DWT-based feature extraction and radon transform based contrast enhancement for improved iris recognition, Procedia Computer Science 45 (2015) 256-265.
[18] S. Jamaludin, N. Zainal, W.M.D.W. Zaki, Sub-iris technique for non-ideal iris recognition, Arabian Journal for Science and Engineering 43 (12) (2018) 7219-7228.
[19] E.G. Llano, M.S. García-Vázquez, L.M. Zamudio-Fuentes, J.M.C. Vargas, A.A. Ramírez-Acosta, Analysis of the improvement on textural information in human iris recognition, in: VII Latin American Congress on Biomedical Engineering CLAIB 2016, Bucaramanga, Santander, Colombia, October 26th-28th, 2016, Springer, Singapore, 2017, pp. 373-376.
[20] K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, H. Nakajima, An effective approach for iris recognition using phase-based image matching, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (10) (2008) 1741-1756.
[21] A. Nigam, B. Kumar, J. Triyar, P. Gupta, Iris recognition using discrete cosine transform and relational measures, in: International Conference on Computer Analysis of Images and Patterns, Springer, Cham, 2015, pp. 506-517.
[22] B. Sahu, P.K. Sa, S. Bakshi, A.K. Sangaiah, Reducing dense local feature key-points for faster iris recognition, Computers & Electrical Engineering 70 (2018) 939-949.
[23] K. Seetharaman, R. Ragupathy, LDPC and SHA based iris recognition for image authentication, Egyptian Informatics Journal 13 (3) (2012) 217-224.
[24] C. Te Chu, C.-H. Chen, High performance iris recognition based on LDA and LPCC, IEEE, 2005, p. 5.
[25] B. Zahran, J. Al-Azzeh, Z. Alqadi, M.-A. Al Zoghoul, A modified LBP method to extract features from color images, Journal of Theoretical and Applied Information Technology 96 (10) (2018).
[26] S. Bakshi, P.K. Sa, B. Majhi, A novel phase-intensive local pattern for periocular recognition under visible spectrum, Biocybernetics and Biomedical Engineering 35 (1) (2015) 30-44.



[27] S.A. Khan, M. Ishtiaq, M. Nazir, M. Shaheen, Face recognition under varying expressions and illumination using particle swarm optimization, Journal of Computational Science 28 (2018) 94-100.
[28] S. Kumar, S. Singh, J. Kumar, Automatic live facial expression detection using genetic algorithm with Haar wavelet features and SVM, Wireless Personal Communications 103 (3) (2018) 2435-2453.
[29] E. Owusu, J.D. Abdulai, Y. Zhan, Face detection based on multilayer feed-forward neural network and Haar features, Software: Practice and Experience 49 (1) (2019) 120-129.
[30] D. Bhattacharjee, A. Seal, S. Ganguly, M. Nasipuri, D.K. Basu, A comparative study of human thermal face recognition based on Haar wavelet transform and local binary pattern, Computational Intelligence and Neuroscience 2012 (2012).
[31] S. Shaju, D. Davis, Haar wavelet transform based histogram concatenation model for finger print spoofing detection, in: 2017 International Conference on Communication and Signal Processing (ICCSP), IEEE, 2017, pp. 1352-1356.
[32] N. Ahmadi, M. Nilashi, Iris texture recognition based on multilevel 2-D Haar wavelet decomposition and Hamming distance approach, Journal of Soft Computing and Decision Support Systems 5 (3) (2018) 16-20.
[33] CASIA-Iris-V4, 2020.
[34] CASIA-Iris-V1, 2020.
[35] M.U. Database, MMU1 and MMU2 iris image databases, 2008.
[36] M. Oktiana, K. Saddami, F. Arnia, Y. Away, K. Hirai, T. Horiuchi, et al., Advances in cross-spectral iris recognition using integrated gradientface-based normalization, IEEE Access 7 (2019) 130484-130494.
[37] L. Shuai, L. Yuanning, Z. Xiaodong, H. Guang, C. Jingwei, Z. Qixian, et al., Multi-source feature fusion and entropy feature lightweight neural network for constrained multi-state heterogeneous iris recognition, IEEE Access 8 (2020) 53321-53345.
[38] N. Ahmadi, Morphological-edge detection approach for the human iris segmentation, Journal of Soft Computing and Decision Support Systems 6 (4) (2019) 15-19.
[39] X.-h Chen, J.-s Wang, Y.-l Ruan, S.-z Gao, An improved iris recognition method based on discrete cosine transform and Gabor wavelet transform algorithm, Engineering Letters 27 (4) (2019).
[40] Y. Chen, C. Wu, Y. Wang, T-Center: a novel feature extraction approach towards large-scale iris recognition, IEEE Access 8 (2020) 32365-32375.
[41] M. Danlami, S. Jamel, S.N. Ramli, S.R.M. Azahari, Comparing the Legendre wavelet filter and the Gabor wavelet filter for feature extraction based on iris recognition system, IEEE, 2020, pp. 1-6.
[42] G. Huo, H. Guo, Y. Zhang, Q. Zhang, W. Li, B. Li, An effective feature descriptor with Gabor filter and uniform local binary pattern transcoding for iris recognition, Pattern Recognition and Image Analysis 29 (4) (2019) 688-694.
[43] R. Jayavadivel, P. Prabaharan, Investigation on automated surveillance monitoring for human identification and recognition using face and iris biometric, Journal of Ambient Intelligence and Humanized Computing (2021) 1-12.
[44] A. Noruzi, M. Mahlouji, A. Shahidinejad, Robust iris recognition in unconstrained environments, Journal of AI and Data Mining 7 (4) (2019) 495-506.
[45] H.K. Rana, M.S. Azam, M.R. Akhtar, J.M.W. Quinn, M.A. Moni, A fast iris recognition system through optimum feature extraction, PeerJ Computer Science 5 (2019) e184.
[46] N. Ahmadi, M. Nilashi, S. Samad, T.A. Rashid, H. Ahmadi, An intelligent method for iris recognition using supervised machine learning techniques, Optics & Laser Technology 120 (2019) 105701.
[47] E. Sujatha, J.S.J. Sundar, P. Deivendran, G. Indumathi, Multimodal biometric algorithm using iris, finger vein, finger print with hybrid GA, PSO for authentication, in: Data Analytics and Management, Springer, 2021, pp. 267-283.
[48] R. Vyas, T. Kanumuri, G. Sheoran, Cross spectral iris recognition for surveillance based applications, Multimedia Tools and Applications 78 (5) (2019) 5681-5699.
[49] J.J. Winston, D.J. Hemanth, Performance comparison of feature extraction methods for iris recognition, Information Technology and Intelligent Transportation Systems 323 (2020) 62.


Chapter 2

A novel crypt-intelligent cryptosystem Pratyusa Mukherjee and Chittaranjan Pradhan School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India



2.1 Introduction

Computational intelligence (CI) [1] is a paradigm of computational methodologies inspired by the natural environment, which provides solutions to strenuous real-life problems where mathematical and traditional modeling are rendered useless. It may be impossible to translate several real-life problems into the binary (0 or 1) format due to their stochastic nature or the uncertainties associated with them [2]. It is in these scenarios that CI comes to the rescue of researchers. CI is the extensive umbrella under which three core technologies are present, namely machine learning (ML), genetic algorithms (GAs), and neural networks (NNs), among several others [3]. These technologies, either individually or in combination, can be effectively utilized to solve crucial real-life problems. One essential dilemma in our daily lives is protecting and ensuring the confidentiality, integrity, and authenticity of information, both in transit and in storage, against intrusions by unauthorized adversaries. CI therefore has the potential to contribute extensively to the field of cryptography. These contributions of CI to the field of cryptography are studied elaborately in this chapter. In the related work survey, the first CI component considered is ML [4,5], a subcategory of artificial intelligence that uses a system that learns from its past experiences and modifies its behavior by adapting to changes. As a result, it greatly reduces the effort and time spent on a particular task that has not been explicitly programmed. ML can sort millions of files rapidly, diagnosing the potentially hazardous ones and automatically extinguishing threats before they wreak havoc. Several analogies between the tasks of ML and their corresponding counterparts in cryptography are highlighted in this chapter. Next, the literature review delves into the application of GAs [6] to cryptography.
Cryptographic keys play a pivotal role in protecting the underlying cryptosystem in both symmetric and asymmetric key cryptography. It was observed that GAs contribute massively to strengthening cryptographic keys through selection, crossover, and mutation so that the keys are immune to detection and threats. Another vital CI component is neural networks. An artificial neural network (ANN) [7] is a data-transforming model inspired by biological nervous systems. Several applications of ANNs to the design of neural cryptosystems are demonstrated in this chapter. It was noted that CI is used avidly in the cryptanalysis domain but only minimally in designing entire cryptosystems. More importance is placed on CI methods for detecting fraud and intrusion, mounting fast brute force attacks, and effectively decrypting ciphertexts without other valid information. The use of CI techniques for designing encryption schemes has not received as much attention. Thus, toward the end of the chapter, a novel ML- and DNA-cryptography-based message encryption scheme is proposed. DNA cryptography [8] is another promising technology that hides the original data using combinations of the four nitrogenous bases that constitute human DNA. The utilization of the four nitrogenous bases, namely adenine (A), thymine (T), cytosine (C), and guanine (G), instead of the binary counterparts 0 and 1, makes DNA cryptosystems stronger and more resistant than traditional ones. The proffered technique caters to the encryption of any type of message, text or image, in terms of DNA sequences, rather than being restricted to a particular type of input. In traditional cryptosystems, the decryption machine is necessarily the exact reverse model of the encryption scheme. If intruders can get hold of the encryption machinery through masquerading attacks, they can assuredly design the corresponding decryption machinery.
This chapter suggests a technique that decrypts the ciphertext using NNs, hence eliminating this age-old practice of cryptosystems.

Applications of Computational Intelligence in Multi-Disciplinary Research. DOI: © 2022 Elsevier Inc. All rights reserved.




The flow of the chapter is thus designed with a brief introduction to the topic in Section 2.1. It is followed by the related work in Section 2.2, emphasizing each domain of CI individually: Section 2.2.1 discusses the ML contributions in cryptology; the contributions of GAs are highlighted in Section 2.2.2; and Section 2.2.3 portrays the contributions of NNs in cryptology. Each of these subsections is further dissected into contributions in cryptography and cryptanalysis to provide an organized literature review. Section 2.2.4 gives a background on the basics of DNA cryptography. Section 2.3 critically identifies the research gap that motivates this work and presents a novel ML- and DNA-cryptography-based message encryption scheme, followed by an NN-based decryption counterpart. Since the model is currently in the implementation stage, Section 2.4 provides a discussion on the same. Finally, Section 2.5 presents the inferences and conclusions drawn, followed by a brief peek into the future endeavors of this work.


2.2 Related work

The related work section surveys several contributions of different CI components to the domain of cryptology.

2.2.1 Machine learning contributions in cryptology

ML is a subdivision of artificial intelligence in which a machine, typically a computer, attains the capability of problem-solving based on previously fed data and statistical analysis of its behavior on this data. ML thus enables a computer to perform functions for which it was not explicitly programmed. In 1991, Ronald Rivest, in his talk at ASIACRYPT 1991 [9], discussed the analogy, diversity, and impact of the combination of ML and cryptography. ML strengthens security frameworks through comprehensive pattern detection, continuous real-time cybercrime mapping, effective mapping of users' online behavior, enhanced intrusion detection, robust decision-making, etc. This section is further subdivided into first understanding the similarities between ML and cryptography, and then studying the contributions of ML to cryptography and cryptanalysis separately.

Analogy between machine learning and cryptography

Classification [10,11] is an application of ML where data is sorted into different classes. Consider a scenario where the user wants the computer to detect spam files. In such situations, ML can provide a close-to-optimum solution: the user feeds in voluminous amounts of spam and nonspam examples, and the computer gradually learns to discriminate, classify, and detect spam emails accurately. ML eases this sorting through millions of files and identifies the potentially hazardous ones so that such threats can be squashed automatically. Network vulnerabilities can also be scanned effortlessly using ML, which preemptively eradicates cyberthreats through continuous pattern detection, real-time mapping, and penetration testing. Regression is another application of ML that predicts the next values based on previous values. It can thus impart an idea of new data using knowledge of existing data and assist in fraud and intrusion detection.
Decision trees can be widely used for this purpose. Chris Sinclair et al. [12] illustrated how decision trees can be used to categorize nodes as normal or intrusive depending on the IP port address and the system name. Consider a scenario with IP port addresses, system names, and categories as given in Table 2.1. Fig. 2.1 gives the corresponding decision tree to detect intrusion and normal nodes.

TABLE 2.1 Example intrusion data (columns: IP address, system name, category).
A novel crypt-intelligent cryptosystem Chapter | 2


FIGURE 2.1 Decision tree based on Table 2.1 for intrusion detection.
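Since the entries of Table 2.1 are not reproduced here, the following toy decision tree uses invented split values (the 10.x.x.x address range and the host inventory are hypothetical) purely to illustrate how such a tree classifies a node as normal or intrusive:

```python
def classify_node(ip_address, system_name):
    """Toy decision tree in the spirit of Table 2.1 and Fig. 2.1.

    The split values are invented for illustration: internal hosts
    (10.x.x.x) with a known system name are treated as normal;
    everything else is flagged as a possible intrusion.
    """
    known_systems = {"web01", "db01", "mail01"}   # hypothetical inventory
    if ip_address.startswith("10."):              # root split: address range
        if system_name in known_systems:          # second split: system name
            return "normal"
        return "intrusion"
    return "intrusion"

label = classify_node("10.0.0.5", "web01")
```

Each `if` corresponds to an internal node of the tree, and the returned strings are its leaves; a learned tree would pick such splits automatically from labeled data.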

Another application of ML is association rule learning, or recommendation, which suggests something based on previous experiences. This methodology can be highly useful for key generation. Based on past activities and behavioral analysis, recommendation can also be used to classify an entity as either a safe user or a fraudulent user. Clustering is another application of ML, where the classes are unspecified and groupings are made based on similarity. Clustering can thus be used in cryptography for forensic analysis and anomaly detection, malware protection, and user behavior analysis. Fig. 2.2 gives a block diagram of the ML algorithms and their corresponding cryptographic applications.

Application of machine learning in cryptography

ML can be used extensively in cryptography, either to design better cryptosystems or to classify the traffic and participants. Mutual learning [13] is a phenomenon in which both parties of a communication can develop a synergetic confidential key over an insecure communication channel that remains immune to an intruder capable of only one-way learning. Rosen-Zvi et al. [14] and Klein et al. [15] have utilized this technique of mutual learning for generating a public key for secret communications. Gaikwad and Thool [16] proffered an intrusion diagnosis system based on the bagging ensemble procedure of ML. Buczak and Guven [17] provided an extensive survey of the application of data mining and ML-based methods in cybersecurity. Apart from describing how ML methods such as clustering, decision trees, and Bayesian networks can be utilized for intrusion detection, they also discussed in detail some well-acclaimed cyber datasets that are effective for such detection.

FIGURE 2.2 Machine learning algorithms and corresponding cryptographic applications.



Extensive research shows that supervised techniques outshine unsupervised techniques; among supervised methods, support vector machines and multilayer perceptrons performed better. Such performance comparisons of supervised learning mechanisms have also been illustrated by Manjula C. Belavagi and Balachandra Muniyal [18]. Ozlem Yavanoglu and Murat Aydos [19] conducted a review of various cybersecurity datasets, discussing the advantages and disadvantages of each. Olga Taran et al. [20] highlighted the utility of ML to detect and filter adversarial attacks based on Kerckhoffs's principle, which states that as long as the key is secret, the cryptosystem is safe even if every other detail is made public.

Application of machine learning in cryptanalysis

Mohammed M. Alani [21] argued that ML has wider applications in cryptanalysis than in cryptography, because the two share the ambition of searching extensive search spaces: ML's target is to find an optimal solution among numerous possible solutions, whereas a cryptanalyst's goal is to find the appropriate key for decipherment. Side-channel attacks [22] are a category of cryptographic attacks that, instead of exploiting a weakness of the cryptographic algorithm itself, utilize information leaked by its implementation. Timing information, power consumption, electromagnetic leaks, or even sound can prove to be an added source of information that is vulnerable to exploitation. Lerman et al. [23] also supported the usage of ML approaches to boost the precision of side-channel attacks. M. Conti et al. [24] proposed a cryptanalysis scheme in which the adversary eavesdrops on encrypted traffic; the information obtained is then scrutinized using advanced ML techniques to anticipate the activities of the sender with an accuracy of approximately 95%. Maghrebi et al.
[25] have also extensively studied the utility of deep learning in side-channel attacks. Nhan Duy Truong et al. [26] exploited the presence of noise, which compromises the integrity of quantum random number generators, using a predictive ML-based approach. Jaewoo So [27] proposed a deep learning-based cryptanalysis of lightweight block ciphers, successfully recovering the key as long as it is restricted to 64 ASCII characters.

Analysis of existing contributions of machine learning in cryptology

From the previous sections, it can be noted that ML has several applications in both cryptography and cryptanalysis. The advantages of these contributions can be summarized as follows:

- ML has tremendous application in separating spam files from useful files;
- It has wide utility for anomaly or intrusion detection to differentiate fraudulent users from genuine users, so entity detection is also facilitated by ML; and
- Malware identification is another major use of ML.

Amid these several utilities, certain shortcomings of the existing research can also be noted, which are listed below:

- ML has no major contribution to designing encipherment algorithms.
- In cryptanalysis, ML is primarily used to scrutinize side-channel information in order to mount side-channel attacks.
- The quickness of ML is mainly exploited to perform brute force attacks to guess the secret key.

2.2.2 Genetic algorithm contributions in cryptology

Cryptographic keys play a pivotal role in information security. It is essential for these keys to be unique and nonrepeating to maintain the integrity and confidentiality of the information. There are two possible approaches to generating such effective keys: one uses rigorous mathematics, and the other mimics nature using GA concepts. Table 2.2 illustrates the operators of GAs. Like ML, GAs have contributed to both cryptography and cryptanalysis.

Applications of genetic algorithm in cryptography

Bethany Delman [28] conducted a thorough study of the applications of GAs to cryptography. The work took an initial population of probable binary keys, then improved their quality using standard genetic operators [29]



TABLE 2.2 Operators of genetic algorithm.

| Operator | Description |
| --- | --- |
| Selection | The solution with the highest fitness level is given preference. |
| Crossover | The two fittest solutions are selected and crossover sites are randomly chosen; the portions at the crossover sites are exchanged, resulting in new solutions. |
| Mutation | Random values are inserted or altered to maintain diversity. |
such as mutation and crossover. The resulting best key was utilized to replace one-time pads (OTPs) in Vernam ciphers. Ankit Kumar and Kakali Chatterjee [30] demonstrated the use of GAs in the key generation process; in their proposal, the key with the highest fitness was used to perform the encryption. Muhammad Irshad Nazeer et al. [31] portrayed the importance of ensuring the security of cryptographic keys in order to protect the entire cryptosystem, and thus utilized GA concepts to strengthen the key. Ragavan M. and K. Prabu [32] proposed a dynamic key generation scheme where population size, chromosome size, fitness function, etc. are all decided by random procedures, thus strengthening the cryptosystem.

Applications of genetic algorithm in cryptanalysis

Using the same genetic operators and a suitable fitness function, the most probable key or message can also be guessed, thus breaking the entire cryptosystem. Zurab Kochladze and Lali Beselia [33] mounted an attack on the Merkle-Hellman cryptosystem by utilizing the concepts of GAs. A break of the affine cipher, where every letter is mapped to its numerical equivalent, was presented by Yaqeen S. Mezaal and Seevan F. Abdulkareem [34] using GAs. Similarly, Emilia Rutkowski and Sheridan Houghten [35] proposed the cryptanalysis of the RSA algorithm by using GAs for integer factorization.

Analysis of existing contributions of genetic algorithms in cryptology

The positive inference drawn from a detailed analysis of the literature on the application of GAs in cryptography and cryptanalysis is provided below.

- GAs are mainly used to strengthen randomly generated cryptographic keys, as the security of cryptographic keys is vital to ensuring the confidentiality and integrity of the entire cryptosystem.

The loophole in the current research is discussed below.


- In the cryptanalysis literature, GAs are mainly showcased breaking traditional substitution or transposition ciphers, with no major contribution to breaking dynamic and modern-day ciphers.
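The selection, crossover, and mutation loop this section describes for strengthening keys can be sketched as follows. The fitness function here (rewarding balanced, non-repetitive bit strings) is a stand-in of our own; real schemes would use proper statistical randomness measures:

```python
import random

def fitness(key):
    """Toy fitness: reward balanced, non-repetitive bit strings.

    This is only an illustrative stand-in for the statistical
    randomness tests a real key-strengthening scheme would use.
    """
    balance = 1 - abs(key.count(1) - key.count(0)) / len(key)
    transitions = sum(a != b for a, b in zip(key, key[1:])) / (len(key) - 1)
    return balance + transitions

def evolve_keys(pop_size=20, key_len=32, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(key_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # selection: fittest first
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, key_len)      # single-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(key_len)           # mutation: flip one random bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Keeping the fittest half as parents and refilling the population with mutated crossover children is the basic generational scheme the table's three operators describe.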

2.2.3 Neural network contributions in cryptology

Neural cryptography is a domain of security dedicated to applying the fundamental concepts of NNs to cryptography and cryptanalysis. The selective self-learning ability and stochastic behavior of ANNs have carved a niche for their immense application in both cryptography and cryptanalysis.

Applications of neural networks in cryptography

Traditional cryptography based on rigorous mathematics has prominent shortcomings such as excessive time consumption, computational power requirements, and complexity. ANN-based cryptography offers several advantages such as self-evaluation, quick computation based on generalization, easy implementation, and less complex hardware and software requirements. Adel A. El-Zoghabi et al. [36] provided a brief insight into the different classes of ANNs and their application to three broad cryptographic categories, namely synchronization NNs, chaotic NNs, and multilayer NNs.



The most primitive method, the tree parity machine [37], has tremendous application in synchronization-based neural cryptography. It is a multilayer feedforward NN comprising K × N input neurons, K hidden neurons, and one output neuron; a tree parity machine is illustrated in Fig. 2.3. The inputs Xij can take three possible values, namely −1, 0, and 1. The weights Wij range from −M to +M. The K × N input neurons, K hidden neurons, and weight range M give a total of (2M + 1)^(KN) possible weight settings, which is infeasible to break by brute force. The output σi of each hidden neuron is calculated as the signum function of the sum of the products of its input neurons and their corresponding weights; the signum function returns −1 for negative values, 0 for 0, and 1 for positive values. The final output τ is computed as the product of all values produced by the hidden neurons. Once full synchronization is achieved, beginning from random weights, the updated weights can be used as cryptographic keys. M. Rosen-Zvi et al. [38] explored this utilization of tree parity machines in the field of cryptography. Pranita P. Hadke and Swati G. Kale [39] reviewed how NNs and cryptography can be utilized together for information security. Sayantica Pattanayak and Simone A. Ludwig [40] elaborated an NN cryptography-based model to perform encryption and decryption; the privacy and secrecy of this model rest on the fact that an attacker with an identical NN model is unlikely to pose a threat to the system. They also analyzed the effects of several parameters such as different learning rates, optimizers, and step values.

Applications of neural networks in cryptanalysis

The ability of NNs to critically explore the possible solutions to a problem makes them an important tool for cryptanalysis.
The assurance that any function can be replicated by an NN is another precious computational property that can be exploited to estimate the inverse function of any cryptographic algorithm. Mohammad M. Alani [41] proposed a successful break of DES and Triple DES using a known-plaintext attack based on NNs: he used a multilayer feedforward network to retrieve the plaintext corresponding to an input ciphertext by suitably training the neurons on adequate plaintext-ciphertext pairs. Riccardo Focardi and Flaminia L. Luccio [42] highlighted the application of NNs to identify a cipher's weakness and then exploit it to guess the probable key; they demonstrated their proposal on the Caesar, Vigenere, and substitution ciphers and leveraged NNs to speed up the traditional chosen-ciphertext and chosen-plaintext attacks. Ya Xiao et al. [43] critically analyzed the metrics and methodology of applying neural cryptanalysis to cyber-physical systems.

Analysis of contribution of neural network in cryptology

The critical analysis of the contributions of NNs to cryptology is given as follows.

The contribution of NNs is mainly observed in neural cryptography, where synchronizing NNs and tree parity machines are exploited to ephemerally use the self-learned and updated synaptic weights as symmetric or asymmetric keys, depending on the number of NNs used. The demerits in the existing literature are highlighted below.

FIGURE 2.3 Tree parity machine.
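The tree-parity-machine construction described above can be sketched in a few lines of Python. This is an illustrative toy, not the chapter's implementation: inputs are restricted to ±1, the parameters K = 3, N = 4, M = 3 are arbitrary choices, and sgn(0) is mapped to −1 by a common convention.

```python
import random

K, N, M = 3, 4, 3  # hidden units, inputs per hidden unit, weight bound (arbitrary)

def sgn(x):
    # signum with sgn(0) mapped to -1, a common convention for tree parity machines
    return 1 if x > 0 else -1

class TreeParityMachine:
    def __init__(self):
        self.w = [[random.randint(-M, M) for _ in range(N)] for _ in range(K)]

    def output(self, x):
        # sigma_i = sgn(sum_j W_ij * X_ij); tau = product of all sigma_i
        self.sigma = [sgn(sum(wij * xij for wij, xij in zip(self.w[i], x[i])))
                      for i in range(K)]
        tau = 1
        for s in self.sigma:
            tau *= s
        return tau

    def hebbian(self, x, tau):
        # update only the hidden units that agree with the final output,
        # keeping every weight inside the range [-M, +M]
        for i in range(K):
            if self.sigma[i] == tau:
                for j in range(N):
                    self.w[i][j] = max(-M, min(M, self.w[i][j] + x[i][j] * tau))

a, b = TreeParityMachine(), TreeParityMachine()
for _ in range(20000):                      # mutual learning on shared random inputs
    if a.w == b.w:                          # fully synchronized
        break
    x = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    ta, tb = a.output(x), b.output(x)
    if ta == tb:                            # update only when the outputs agree
        a.hebbian(x, ta)
        b.hebbian(x, tb)
shared_key = a.w                            # equals b.w once synchronization completes
```

With these small parameters the two machines typically synchronize within a few thousand mutual-learning steps; after synchronization the identical weight matrices can serve as the shared secret key, as described in the text.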

A novel crypt-intelligent cryptosystem Chapter | 2



There are very few contributions of NNs in cryptanalysis that are mainly restricted to breaking the well-acclaimed DES algorithm.

2.2.4 Background of DNA cryptography

DNA cryptography [44] is a category of cryptographic schemes where every alphabet, number, or special character of the input is translated to one of the different combinations of the four nitrogenous bases: A, T, C, and G. Traditional cryptosystems use binary keys that comprise only 0s and 1s and hence have an exponential power of two. On the contrary, DNA codes have an exponential power of four due to the four bases. This makes a single-bit key using DNA encryption eight times stronger. Also, the complexity and randomness of DNA nucleotides, high storage capacity, and parallel processing provide additional security. Thus the concept of incorporating DNA into cryptography has been recognized as a conceivable technology that brings a new aim to enhance the robustness of algorithms. K.S. Sajisha and Sheena Mathew [45] proposed a new technique to provide triple-layer security to secret messages: the original message is first DNA encoded, then a DNA-based AES algorithm is applied to it, and finally the intermediate ciphertext is concealed in another DNA sequence to obtain the final ciphertext. Mazhar Karimi and Waleej Haider [46] emphasized the usage of biological procedures and the fundamental concepts of DNA for secure key generation. Mandrita Mondal and Kumar S. Ray [47] highlighted that DNA molecules have the capacity to store, process, and transmit information; they also reviewed the current trends in DNA cryptography in detail. Md. Rafiul Biswas et al. [48] proffered a novel technique of DNA cryptography based on a dynamic sequence table and dynamic DNA encoding. To attain dynamism, the initially randomly assigned DNA base sequences are rearranged iteratively following a mathematical series.

Analysis of existing work in DNA cryptography

The analysis of the existing literature on DNA cryptography is summarized below.




The basic DNA cryptosystems use simple substitution methods based on a predecided lookup table or DNA dictionary. They are prone to chosen-ciphertext and known-plaintext attacks: if the adversary gets hold of some plaintext–ciphertext pairs, they can easily reconstruct the substitution table at their end, thus breaking the entire system.

The DNA encryption schemes utilizing biological operations have a tendency to work beyond human control. Also, biological operations are quite expensive and complex, which unnecessarily increases the complexity of the system.

The combination of mathematical and biological operations is most popular, but most of the existing schemes use DNA cryptography merely as an additional layer that DNA encodes the intermediate ciphertext obtained. Also, the major contributions of DNA cryptography till now have been more prevalent for text inputs; very little work has been done on image inputs or other forms of input.
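As a concrete illustration of the digital, substitution-style DNA encoding these schemes rely on, the sketch below maps every two bits of input to one nitrogenous base. The particular rule 00→A, 01→T, 10→C, 11→G is only one of the 4! = 24 possible assignments and is an assumption for illustration:

```python
BITS_TO_BASE = {"00": "A", "01": "T", "10": "C", "11": "G"}  # illustrative rule
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def dna_encode(data: bytes) -> str:
    # each byte becomes four bases (2 bits per base)
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_decode(seq: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

This makes the keyspace remark above concrete (an n-base DNA string ranges over 4^n values, versus 2^n for an n-bit binary string), and it also shows why a fixed public lookup table alone gives no security, which is exactly the weakness of the basic substitution schemes.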


2.3 Proposed methodology

The prime emphasis of this work is to offer a DNA cryptography-based encryption scheme capable of enciphering both text and image inputs. One proposed objective is to explore the candidature of ML in designing the encryption algorithm, making the cryptosystem dynamic enough to tackle different kinds of input, classify them, and, depending on their type, follow the necessary encipherment techniques to generate the corresponding ciphertexts. Another objective is to first generate random keys and then perform the frequency test and the gap test on them; if the keys are found to be weak, they are strengthened by performing suitable genetic operations. A further objective is to eliminate the belief that the decryption machinery is the exact reverse of the encryption machinery: an NN-based decipherment procedure is proposed to learn and generate the plaintext from the input ciphertext. After analyzing the research gap in the existing contributions of CI components in cryptology and the present literature on DNA cryptography schemes, the proposed objectives of this work are as follows:

To design a DNA cryptography- and ML-based encryption methodology to encipher both text and image messages;
To strengthen weak randomly generated keys using GAs to make them immune to untoward intrusions;



Applications of Computational Intelligence in Multi-Disciplinary Research

To design an appropriate decryption scheme using NNs that is not the obvious reverse of the encryption scheme, to enhance the security of the cryptosystem.

The proposed methodology is segregated into two sections: encryption and decryption.

2.3.1 Proposed encryption scheme

The encryption machinery has two layers. In the first, an input is classified as either text or image using Classification and Regression Trees (CARTs), after which a mathematical–biological combination-based DNA encryption is performed; the resultant ciphertext is considered the intermediate cipher. Next, a random key is generated, and the frequency test and gap test are performed on it. If the randomly generated key fails these tests, it is strengthened using genetic operators. Finally, the strong key is DNA translated. The final step at the encryption site is to XOR the intermediate ciphertext and the DNA-encoded key to generate the final ciphertext. Fig. 2.4 gives the overall block diagram of the encryption machinery. The elaborate classification of the input message using CART decision trees is demonstrated in Fig. 2.5. After classification, the steps illustrated in Fig. 2.6 are followed to obtain the intermediate ciphertext. Using concepts of ML, the encryption machinery is taught to automatically follow the prescribed steps depending upon the input.
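The chapter delegates the text-versus-image decision to CART decision trees. As a minimal illustration of the core CART operation, a single Gini-impurity split on one feature, the sketch below uses the fraction of printable bytes as a toy feature; the real scheme's feature set is not specified here and this stand-in is only an assumption:

```python
def gini(labels):
    # Gini impurity of a set of 0/1 labels
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1.0 - p) * (1.0 - p)

def best_split(xs, ys):
    # pick the threshold on a single feature minimizing weighted Gini impurity,
    # the elementary step CART repeats recursively to grow a tree
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# toy training data: fraction of printable ASCII bytes per input (assumed feature)
printable_fraction = [0.99, 0.97, 0.95, 0.30, 0.42, 0.25]
is_text = [1, 1, 1, 0, 0, 0]
threshold = best_split(printable_fraction, is_text)  # inputs above it classify as text
```

A full CART grows such splits recursively; libraries such as scikit-learn provide production implementations of the classifier.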

FIGURE 2.4 Block diagram of the proposed encryption scheme. FIGURE 2.5 Classification of input message using CART decision trees.

FIGURE 2.6 DNA encryption mechanism.



First, the input message is binary encoded. If it is an image input, the pixels are arranged in a binary-encoded matrix form. After this, in the case of text messages, element manipulation is performed using a random substitution mechanism, and in the case of image inputs, matrix manipulation is performed using row shifting and column alteration. Next, DNA encoding is performed on this manipulated input. Start and stop primers are selected to perform biological operations such as the polymerase chain reaction and DNA hybridization. The output of this step is translated into amino acid form, which becomes the intermediate ciphertext. To make the encryption more secure and immune to attacks, a cryptographic key is used. As per Kerckhoffs's principle, a cryptosystem should remain secure even if everything about it apart from the key is made public. After the binary cryptographic key is randomly generated, its fitness is evaluated through two tests. First is the frequency test, where the occurrence of each element must be uniformly distributed: binary keys with all 0s, all 1s, or substantially more 0s than 1s (or vice versa) are treated as weak. Second is the gap test, which tracks the number of other elements between two successive occurrences of a particular element; it mandates that the maximum gap between two successive occurrences of an element be three, to achieve minimal correlation between adjacent elements. If the randomly generated key fails the fitness tests, genetic operators such as selection, crossover, and mutation are used to produce stronger keys that fulfill them. After strengthening, the binary key is DNA encoded and ultimately translated into amino acid form. The intermediate ciphertext and the final key are then XORed to generate the final ciphertext.
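The two key-fitness tests and the GA-style strengthening step described above can be sketched as follows. The frequency-test tolerance and the mutation-only repair loop are illustrative assumptions; the chapter also mentions selection and crossover, which are omitted here:

```python
import random

MAX_GAP = 3  # at most 3 other elements between successive occurrences of an element

def frequency_test(key):
    # reject all-0, all-1, or heavily skewed keys; the 10% tolerance is an assumption
    ones = sum(key)
    return abs(ones - len(key) / 2) <= len(key) * 0.1

def gap_test(key):
    last_seen = {0: None, 1: None}
    for i, bit in enumerate(key):
        prev = last_seen[bit]
        if prev is not None and i - prev - 1 > MAX_GAP:
            return False
        last_seen[bit] = i
    return True

def strengthen(key):
    # repair a weak key by random bit mutations until both fitness tests pass
    key = list(key)
    while not (frequency_test(key) and gap_test(key)):
        key[random.randrange(len(key))] ^= 1
    return key

random.seed(7)                 # fixed seed so the sketch is reproducible
strong = strengthen([0] * 16)  # an all-zero key fails the frequency test
```

By construction, the returned key passes both tests and can then be DNA encoded and XORed with the intermediate ciphertext as the text describes.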
Thus in the entire encryption methodology, the parameters of manipulation operation, primer information, DNA encoding rules, and cryptographic keys all play a pivotal role in ascertaining the confidentiality and integrity of the system.

2.3.2 Proposed decryption scheme

As per Shannon's maxim, it is assumed that the intruder has good knowledge of the cryptosystem. In traditional cryptography, there is a belief that the decryption machinery is the exact reverse of the encryption side. Using NNs, the proposed decryption mechanism tries to eliminate this obvious assumption. To train the NN, several plaintext–ciphertext pairs encrypted using the same mechanism are fed in, and the NN learns to generate the correct corresponding plaintext from an input ciphertext on its own. Fig. 2.7 gives the overall block diagram of the decryption machinery.
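As a minimal illustration of the idea that a network can learn decryption from plaintext–ciphertext pairs rather than by inverting the algorithm, the sketch below trains a single logistic unit per bit position to undo a one-bit XOR key. Both the cipher and the architecture are toy assumptions, far simpler than the proposed scheme:

```python
import math

def train_bit_unit(pairs, lr=1.0, epochs=500):
    # pairs: (ciphertext_bit, plaintext_bit); fit p ~ sigmoid(w*c + b) by gradient descent
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for c, p in pairs:
            y = 1.0 / (1.0 + math.exp(-(w * c + b)))
            grad = y - p                  # gradient of the logistic loss
            w -= lr * grad * c
            b -= lr * grad
    return w, b

def predict(w, b, c):
    return 1 if 1.0 / (1.0 + math.exp(-(w * c + b))) >= 0.5 else 0

# key bit = 1: the unit must learn to invert the ciphertext bit
w1, b1 = train_bit_unit([(0, 1), (1, 0)])
# key bit = 0: the unit must learn the identity
w0, b0 = train_bit_unit([(0, 0), (1, 1)])
```

Nothing about the key is hard-coded: the mapping is recovered purely from example pairs, which is the property the proposed NN-based decipherment relies on.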



2.4 Discussion

FIGURE 2.7 Block diagram of the proposed decryption scheme.

The proposed encryption and decryption mechanisms incorporate the basic concepts of CI with DNA cryptography to enhance the security of the system. The proposed encipherment scheme can tackle both text and image inputs without being restricted to any one type, which adds to the dynamism of the cryptosystem. Several parameters at the encryption site are kept secret, such as the manipulating factors that add confusion to the input, the specifications pertaining to the biological operations, and, most importantly, the cryptographic key. The cryptosystem on its own first classifies the input using CART decision trees and then follows the encryption steps. The proposed scheme utilizes the security of mathematical- and biological-operations-based DNA encryption. To add immunity from adversarial attacks, two-layer encryption has been proposed. The DNA digital encoding rules followed by the sender are also kept secret. The intermediate ciphertext is XORed with the cryptographic key to generate the final ciphertext, which is in amino acid form, thus ensuring complete confidentiality and integrity. No information regarding the secret parameters is shared by the sender with the receiver over the communication channel; hence there is no harm from eavesdropping by evildoers. An NN-based approach is employed where several
previously encrypted plaintext–ciphertext pairs are used to train the model. The decipherment mechanism is thus capable of generating the actual plaintext from the input ciphertext without the need for any other vital information. Hence, from the proposed methodology, it is evident that combining CI components with DNA cryptography can ensure the secrecy of the communication between the sender and the receiver.


2.5 Conclusion and future work

CI and its principal components can make myriad contributions to cryptology. This chapter thoroughly studies the application of ML, GAs, and NNs in both cryptography and cryptanalysis. DNA cryptography is another innovative domain in the field of security that translates input messages into DNA-encoded form using an array of mathematical and/or biological operations. Though it is still in its infancy, its application has a huge impact on the field of security. After carefully analyzing the existing literature, the loopholes have been identified to frame the prime objectives of this work. An ML- and DNA cryptography-based encryption mechanism has been proposed that is dynamic enough to tackle any kind of input, whether text or image, and convert it into amino acid form after a series of steps. To add to the confidentiality and integrity, randomly generated cryptographic keys are considered, their fitness is tested, and, if found weak, they are strengthened using GAs and then XORed with the intermediate ciphertext to generate the final ciphertext. This double-layer encipherment scheme enhances the secrecy and privacy of the cryptosystem and renders it less prone to intrusion. Also, the entire cryptosystem relies on a combination of biological and mathematical keys for its robustness. Employing an NN-based decryption mechanism eliminates the need for any information exchange between the sender and receiver regarding the secret parameters, rendering the scheme safe from eavesdropping. Utilizing several plaintext–ciphertext pairs as training data, the decipherment automatically recovers the original message from the input ciphertext. Thus the proposed scheme has the capability to ensure secure information transmission using a combination of CI and DNA cryptography.
The implementation of the proposed scheme is currently in progress, and the exact details of the DNA-encoding schemes, choice of primers, list of biological operations performed, the nitty-gritty of the genetic operators, and the training of the NNs have been left for future endeavors of this work.

References
[1] A.P. Engelbrecht, Computational Intelligence: An Introduction, John Wiley & Sons, 2007.
[2] N. Siddique, H. Adeli, Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing, John Wiley & Sons, 2013.
[3] R. Kruse, C. Borgelt, C. Braune, S. Mostaghim, M. Steinbrecher, Computational Intelligence: A Methodological Introduction, Springer, 2016.
[4] E. Alpaydin, Introduction to Machine Learning, MIT Press, 2020.
[5] M. Mohri, A. Rostamizadeh, A. Talwalkar, Foundations of Machine Learning, MIT Press, 2018.
[6] S. Kumar, S. Jain, H. Sharma, Genetic algorithms, in: Advances in Swarm Intelligence for Optimizing Problems in Computer Science, 2018, pp. 27–52.
[7] S. Shanmuganathan, Artificial neural network modelling: an introduction, Artificial Neural Network Modelling, Springer, Cham, 2016, pp. 1–14.
[8] C. Popovici, Aspects of DNA cryptography, Annals of the University of Craiova–Mathematics and Computer Science Series 37 (3) (2010) 147–151.
[9] R.L. Rivest, Cryptography and machine learning, International Conference on the Theory and Application of Cryptology, Springer, Berlin, Heidelberg, 1991, pp. 427–439.
[10] A.A. Soofi, A. Awan, Classification techniques in machine learning: applications and issues, Journal of Basic and Applied Sciences 13 (2017) 459–465.
[11] N. Siddique, H. Adeli, Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing, John Wiley & Sons, 2013.
[12] C. Sinclair, L. Pierce, S. Matzner, An application of machine learning to network intrusion detection, in: Proceedings 15th Annual Computer Security Applications Conference (ACSAC'99), IEEE, December 1999, pp. 371–377.
[13] Y. Zhang, T. Xiang, T.M. Hospedales, H. Lu, Deep mutual learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4320–4328.
[14] M. Rosen-Zvi, E. Klein, I. Kanter, W. Kinzel, Mutual learning in a tree parity machine and its application to cryptography, Physical Review E 66 (6) (2002) 066135.
[15] E. Klein, R. Mislovaty, I. Kanter, A. Ruttor, W. Kinzel, Synchronization of neural networks by mutual learning and its application to cryptography, Advances in Neural Information Processing Systems (2005) 689–696.
[16] D.P. Gaikwad, R.C. Thool, Intrusion detection system using bagging ensemble method of machine learning, in: 2015 International Conference on Computing Communication Control and Automation, IEEE, February 2015, pp. 291–295.
[17] A.L. Buczak, E. Guven, A survey of data mining and machine learning methods for cyber security intrusion detection, IEEE Communications Surveys & Tutorials 18 (2) (2015) 1153–1176.
[18] M.C. Belavagi, B. Muniyal, Performance evaluation of supervised machine learning algorithms for intrusion detection, Procedia Computer Science 89 (2016) 117–123.
[19] O. Yavanoglu, M. Aydos, A review on cyber security datasets for machine learning algorithms, in: 2017 IEEE International Conference on Big Data (Big Data), IEEE, December 2017, pp. 2186–2193.
[20] O. Taran, S. Rezaeifar, S. Voloshynovskiy, Bridging machine learning and cryptography in defence against adversarial attacks, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[21] M.M. Alani, Applications of machine learning in cryptography: a survey, in: Proceedings of the 3rd International Conference on Cryptography, Security and Privacy, January 2019, pp. 23–27.
[22]
[23] L. Lerman, G. Bontempi, O. Markowitch, Side Channel Attack: An Approach Based on Machine Learning, Center for Advanced Security Research Darmstadt, 2011, pp. 29–41.
[24] M. Conti, L.V. Mancini, R. Spolaor, N.V. Verde, Analyzing android encrypted network traffic to identify user actions, IEEE Transactions on Information Forensics and Security 11 (1) (2015) 114–125.
[25] H. Maghrebi, T. Portigliatti, E. Prouff, Breaking cryptographic implementations using deep learning techniques, in: International Conference on Security, Privacy, and Applied Cryptography Engineering, Springer, Cham, December 2016, pp. 3–26.
[26] N.D. Truong, J.Y. Haw, S.M. Assad, P.K. Lam, O. Kavehei, Machine learning cryptanalysis of a quantum random number generator, IEEE Transactions on Information Forensics and Security 14 (2) (2018) 403–414.
[27] J. So, Deep learning-based cryptanalysis of lightweight block ciphers, Security and Communication Networks 2020 (2020).
[28] B. Delman, Genetic Algorithms in Cryptography, published on the web, July 2004.
[29] X. Yao, An empirical study of genetic operators in genetic algorithms, Microprocessing and Microprogramming 38 (1–5) (1993) 707–714.
[30] A. Kumar, K. Chatterjee, An efficient stream cipher using genetic algorithm, in: 2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), IEEE, March 2016, pp. 2322–2326.
[31] M.I. Nazeer, G.A. Mallah, R. Bhatra, R.A. Memon, Implication of genetic algorithm in cryptography to enhance security, International Journal of Advanced Computer Science and Applications 9 (6) (2018) 375–379.
[32] M. Ragavan, K. Prabu, Dynamic key generation for cryptographic process using genetic algorithm, International Journal of Computer Science and Information Security (IJCSIS) 17 (4) (2019).
[33] L. Beselia, Using genetic algorithm for cryptanalysis cryptoalgorithm Merkle-Hellman, Computer Science and Telecommunications 48 (2) (2016) 49–53.
[34] Y.S. Mezaal, S.F. Abdulkareem, Affine cipher cryptanalysis using genetic algorithms, JP Journal of Algebra, Number Theory and Applications 39 (5) (2017) 785–802.
[35] E. Rutkowski, S. Houghten, Cryptanalysis of RSA: integer prime factorization using genetic algorithms, in: 2020 IEEE Congress on Evolutionary Computation (CEC), IEEE, July 2020, pp. 1–8.
[36] A. El-Zoghabi, A.H. Yassin, H.H. Hussien, Survey report on cryptography based on neural network, International Journal of Emerging Technology and Advanced Engineering 3 (12) (2013) 456–462.
[37]
[38] M. Rosen-Zvi, E. Klein, I. Kanter, W. Kinzel, Mutual learning in a tree parity machine and its application to cryptography, Physical Review E 66 (6) (2002) 066135.
[39] P.P. Hadke, S.G. Kale, Use of neural networks in cryptography: a review, in: 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), IEEE, February 2016, pp. 1–4.
[40] S. Pattanayak, S.A. Ludwig, Encryption based on neural cryptography, in: International Conference on Health Information Science, Springer, Cham, December 2017, pp. 321–330.
[41] M.M. Alani, Neuro-cryptanalysis of DES and Triple-DES, in: International Conference on Neural Information Processing, Springer, Berlin, Heidelberg, November 2012, pp. 637–646.
[42] R. Focardi, F.L. Luccio, Neural cryptanalysis of classical ciphers, in: ICTCS, 2018, pp. 104–115.
[43] Y. Xiao, Q. Hao, D.D. Yao, Neural cryptanalysis: metrics, methodology, and applications in CPS ciphers, in: 2019 IEEE Conference on Dependable and Secure Computing (DSC), IEEE, November 2019, pp. 1–8.
[44] A. Gehani, T. LaBean, J. Reif, DNA-based cryptography, Aspects of Molecular Computing, Springer, Berlin, Heidelberg, 2003, pp. 167–188.
[45] K.S. Sajisha, S. Mathew, An encryption based on DNA cryptography and steganography, in: 2017 International Conference of Electronics, Communication and Aerospace Technology (ICECA), vol. 2, IEEE, April 2017, pp. 162–167.
[46] M. Karimi, W. Haider, Cryptography using DNA nucleotides, International Journal of Computer Applications 168 (7) (2017) 16–18.
[47] M. Mondal, K.S. Ray, Review on DNA cryptography, arXiv preprint arXiv:1904.05528, 2019.
[48] M.R. Biswas, K.M.R. Alam, S. Tamura, Y. Morimoto, A technique for DNA cryptography based on dynamic mechanisms, Journal of Information Security and Applications 48 (2019) 102363.


Chapter 3

Behavioral malware detection and classification using deep learning approaches

T. Poongodi1, T. Lucia Agnes Beena2, D. Sumathi3 and P. Suresh4

1 School of Computing Science and Engineering, Galgotias University, Greater Noida, India; 2 Department of Information Technology, St. Joseph's College, Tiruchirappalli, India; 3 SCOPE, VIT-AP University, Amaravati, India; 4 School of Mechanical Engineering, Galgotias University, Greater Noida, India



3.1 Introduction

The Internet of Things (IoT) is a ubiquitous technology that facilitates communication among connected IoT devices or smart objects and supports people in their everyday activities. The IoT paradigm enables interaction between the physical and digital environments using sensors, actuators, transceivers, and processors. The devices are assigned a unique IP address through which they can interact with other external entities. IoT devices are embedded with sensors and use application programming interfaces (APIs) to establish connections and transfer data to other connected devices. Such a device is essentially a microcomputer that is commonly domain specific. Gartner's report says that the number of connected IoT devices in diverse technical domains will reach 1.0 trillion by the year 2025. Security issues are also increasing due to the enormous growth of IoT systems. With the remarkable benefits of IoT, smart devices have become deeply embedded in human life. Meanwhile, a huge number of malware varieties are automatically generated every day. There is an exponential growth of malware attacks in the IoT environment, which exploit infected systems to harm other connected devices. For instance, Trojan.Mirai.1 can attack Windows hosts directly and use the infected hosts to harm other devices. The confidential information available in the infected host can be easily stolen, and the host is converted into a botnet node for easily launching Distributed Denial of Service (DDoS) attacks. Malware attacks already present in traditional systems may also propagate to other connected IoT devices. Recent advancements in IoT technologies require a secured environment to enable easy accessibility among heterogeneous smart devices.
The number of users is rapidly increasing, and the systems are exposed to different threats, including data tampering by the injection of malicious code and attacks on IoT infrastructure and devices. Malware detection approaches are significant in the IoT environment for the continuous learning and monitoring of smart devices. IoT systems are susceptible to various attacks such as jamming, spoofing, eavesdropping, and Denial of Service (DoS) attacks. Privacy protection is essential to prevent vulnerabilities, since accessibility features permit easy and rapid access to miniaturized devices. To minimize the impact of security threats, learning models and data interpretation are required for persistent learning. Initially, the metadata is normalized based on assigned values such as device ID, location, time, state, and behavior. There is a never-ending battle between security analysts and malware developers, since the complexity of malware evolves rapidly. This chapter focuses on recent state-of-the-art research on deep learning techniques for malware detection and provides a systematic overview of these techniques for malware detection and classification. The significant contributions provide a complete picture of the characteristics and methodologies of the conventional machine learning (ML) workflow, with its challenges and limitations for malware detection. Recent developments are analyzed comprehensively by emphasizing deep learning approaches. Moreover, the chapter helps the reader understand malware detection along with recent research innovations and future directions of
research. Various feature types and selection techniques of ML algorithms are exploited for malware detection. Classification algorithms such as artificial neural networks (ANNs), Bayesian networks, support vector machines (SVMs), and decision tree algorithms are covered to provide a conceptual view of the malware detection process and to arrive at an effective decision. Deep learning solutions have substituted the conventional feature-engineering process with an end-to-end training system for malware detection: the process is initiated by providing raw input and proceeds until the final output is obtained. Hence, deep learning is considered an enhanced model of neural networks that has outpaced conventional ML algorithms. Moreover, due to the presence of deep hidden layers, the model is able to apprehend complex-level representations of several features. When the data size is huge, ML algorithms tend to produce ambiguous outputs, while deep learning approaches discover new patterns and build relationships among the identified patterns to enrich task performance. This chapter explores complex deep learning architecture frameworks to achieve better performance. Section 3.2 describes deep learning strategies for malware detection, which include static, dynamic, and hybrid analysis. In Section 3.3, the architecture of convolutional neural networks (CNNs) for malware detection is discussed. Section 3.4 covers the comparative analysis of various CNN approaches, and the chapter concludes with research challenges and future directions.

3.1.1 Digital forensics—malware detection

Digital forensic science is an evolving discipline that assists in acquiring and analyzing digital evidence when system security is broken. Unprocessed, unanalyzed, and unimaged digital content is maintained in evidence lockers, which increases the forensic backlog exponentially. Digitally committed crimes such as sharing illegitimate content, credit card fraud, and phishing extend far beyond traditional criminal activities such as financial fraud, stalking, and murder. The data volume of each case is constantly growing, but the trained personnel able to produce the court-admissible, expert, reproducible analysis required for forensic processing are limited. Police forces are implementing a triage model to secure the integrity of collected forensic evidence and proceed with evidence seizure. This paradigm trains officers to become experts or professionals in handling several crime scenarios when processing cases. Initially, the responders are not required to be experts in the investigation or analysis phase; rather, court-admissibility and integrity can be ensured with respect to the evidence. Due to the increasing demands on the physical resources required to train responders, the hardware, and the capacity of skilled personnel, the research community has recognized the necessity for intelligent and automated processing in digital forensics. Malware analysis requires the highly skilled, labor-intensive tasks of a forensic investigation. The steps carried out in digital forensics are depicted in Fig. 3.1. The most familiar technique for malware analysis is running the malware in a virtual machine or sandbox to obtain deep insight into the payload installation, network communications, attack vector, and behavior, exploiting multiple snapshots taken during the analysis across the complete malware lifecycle.
A survey found that users are overwhelmed by the technical background required for using forensic tools; it is considered a high barrier for digital investigators to enrich their skill set to become experts in additional topics, including malware analysis. Furthermore, artificial intelligence (AI) techniques are being converged with evidence processing in digital forensics, which has a significant advantage in automating processes that certainly support digital investigators. It assists in reducing the backlog and expediting the investigation process by avoiding prejudice and bias. Numerous approaches are available to promote the forensic investigation process using AI techniques, big data processing, and automation. The IoT paradigm comprises heterogeneous devices that face numerous cybercrime issues, as there are no standard digital forensics techniques available. Some potential solutions exist, but they may not be appropriate for all kinds of frameworks in the forensic community. The digital forensic processes may interrupt services during analysis due to the gathering of digital evidence. The victim's infected machine will be investigated by the forensic team in the following ways:

Gathering digital evidence;
Analysis of the digital evidence by the forensic team.

Initially, the team will examine the Random Access Memory (RAM), and images in .dd format will be written to sterile media. The team gathers log files, internet history, registry files, and volatile data, including DLL files, network connections, and the list of executing processes, from the infected machine. The integrity of the digital evidence is maintained by creating a hash value as its digital fingerprint throughout the complete investigation process. Later, the analysis



FIGURE 3.1 Steps in digital forensics.

can be carried out by exploiting three different evidence sources, namely event logs, memory dumps, and the registry. The storage capabilities related to cybercrime cases are rising exponentially. The probability of prosecution is low because of the uncertainty of the digital image that is portrayed in determining the victim's age [1]. A backlog is evident; relevant experts are required for analyzing an offense, and the digital forensic processes are too laborious. These factors persistently influence the performance of forensics laboratories [2]. There is a huge demand for nonexpert tools for evidence identification and analysis. To avoid delay in evidence processing in law enforcement actions across the world, nonexpert detective tools are required to accomplish a preliminary analysis of the collected evidence. In a malware detection system, adequate data is required for the easy identification of malware. With the knowledge acquired from analyzing malware domains, different malware patterns are recognized, which supports investigators in dealing with a case by constructing a robust profile. Common techniques include entropy analysis, time stamping, and pattern traceability, the last of which is accomplished with the help of identifiers and keywords.

Entropy analysis: In most cybercrime cases, the infected files will be in the form of executable files. In a few cases, it is highly complex to identify the infected files, since they may be hidden in system folders, recycle bins, or indices.

Time stamping: This refers to the duration from the time the attack started until the starting time of the investigation phase of the malware infection.

Pattern traceability: Using identifiers and keywords, the malware code can be identified. E-mail addresses, IP addresses, and other information can be acquired with the identifier scanning process in a communication pattern.
An example is the string "key logger," which can be exploited as a hint to detect malware present in the system. At the end of the investigation phase, the outcome takes one of three forms: hit, false positive, or false negative.
Hit ratio: The system identifies the malware accurately; this is possible when the signature of a specific malicious code matches a sample available in the database.
False positive: A scanner reports malware in a file that is not infected; characteristics resembling malicious code are visible in a noninfected file.
False negative: The system fails to identify the malicious code in an infected file. This commonly occurs when a scanner lacks the signature sample of a specific malware and hence fails to detect it in the infected file.
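The entropy analysis described above can be made concrete with a short sketch: Shannon entropy over a file's byte-frequency distribution approaches 8 bits per byte for packed or encrypted content. This is an illustrative sketch; the function names and the 7.2-bit threshold are assumptions, not part of the chapter's method.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Flag content whose byte distribution is near-random, as packed or
    encrypted sections tend to be (the 7.2-bit threshold is illustrative)."""
    return shannon_entropy(data) >= threshold
```

A plain-text file scores well below 8 bits per byte, while compressed or encrypted payloads score close to it, which is why entropy is a cheap first-pass filter for hidden infected files.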


Applications of Computational Intelligence in Multi-Disciplinary Research

3.1.2 Malware evolution and its taxonomy
The availability, diversity, and complexity of malicious programs pose endless challenges in securing computer systems and networks against attacks. Furthermore, malware is incessantly evolving, forcing researchers and security analysts to improve cyber defense mechanisms to combat it. Malware is rapidly disguised through metamorphic and polymorphic techniques that help attackers elude detection. In the metamorphic malware category, the code is rewritten as it propagates, but the rewritten code is functionally equivalent to the original; attackers exploit several transformation techniques, including code expansion, code permutation, code shrinking, garbage code insertion, and register renaming. In polymorphic malware, a polymorphic engine modifies the content without modifying the original functionality. Encryption and packing are the two ways used to hide the original content: the real code is hidden using compression algorithms, and at runtime the original code is restored in memory for execution. Attackers can encrypt and manipulate the code, so researchers find it difficult to analyze the program; a stub is typically used by the attacker for encrypting and decrypting the malicious code. Both techniques produce malware in massive volume, which makes forensic investigation of malware cases more difficult, time consuming, and expensive. Malware is malicious software specifically introduced to damage, disrupt, or obtain unauthorized access to a system. Malware can be classified by its proliferation mechanism as follows:
- Adware automatically displays online advertisements on the screen or user interface so that revenue is generated for its developer.
- A backdoor bypasses the security mechanisms built into the system and is self-configured to permit an attacker to launch an attack.
- A bot automatically performs certain operations, such as distributing other malware or launching DDoS attacks.
- A downloader is a program that allows attackers to download, install, and execute malicious programs.
- A launcher is designed so that the attacker can quietly launch malicious programs.
- Ransomware denies the user access to the content of their computer system and demands a ransom to release the key for unlocking the encrypted files.
- A rootkit helps attackers camouflage the existence of malicious content.
- Spyware gathers sensitive information from the victim's system in an unauthorized way; examples include sniffers, password grabbers, and keyloggers.
- A trojan is malicious software that disguises itself as genuine software to deceive users into downloading and installing it.
- A virus can propagate from one computer system to another; however, viruses multiply only through human activity.
- A worm exploits vulnerabilities in the system to spread; worms self-replicate independently to proliferate.

3.1.3 Machine learning techniques for malware analysis
ML significantly reduces the labor-intensive effort and increases the accuracy of malware analysis compared to conventional approaches. A learning model is applied to a dataset that contains malware samples labeled either as benign or malicious (binary classification) or as different malware groups (multiclass classification). The model learns the features that distinguish the classes with a certain level of accuracy. Skilled domain expertise is required for feature analysis and extraction, which are otherwise time-consuming and complex. When ML approaches are used for malware analysis, the raw bytecode can be interpreted as a grayscale image: each byte becomes a grayscale pixel, and the byte sequence is wrapped into a 2D array. Malware analysis then becomes an image classification problem, with feature extraction and engineering applied to the images. Domain expertise and robust procedures are required for feature extraction in the malware domain. The ML classifier is executed on a gateway connected to IoT devices at the enterprise or customer premises. Incoming traffic is sampled, and features are extracted and classified using the model constructor. Gateway traffic is trained using class labels, and feature vectors obtained from the database are provided as input to the supervised algorithms. When new malware is identified, the model is retrained and its performance compared with that of the existing model: if the existing model still performs better, it is retained; otherwise, the retrained model is installed in the module. The feature vectors of known malware are stored in the database, which is updated whenever new malware is discovered. A set of policies is determined by the



administrator to trigger a responsive action if any particular traffic is identified as malicious. The sampling module gathers the packet traffic generated by IoT devices and feeds it as raw input to the classifier module, significantly reducing the computational overhead. Malicious traffic detection using ML is depicted in Fig. 3.2.
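The gateway workflow just described, classifying feature vectors with a supervised model and adopting a retrained model only if it outperforms the current one on held-out data, can be sketched with a deliberately simple stand-in classifier. The nearest-centroid model, the helper names, and the data are all illustrative assumptions, not the chapter's actual system:

```python
def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

class NearestCentroid:
    """Toy stand-in for the gateway's supervised traffic classifier."""
    def fit(self, X, y):
        self.centroids = {label: centroid([x for x, l in zip(X, y) if l == label])
                          for label in set(y)}
        return self

    def predict(self, x):
        dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
        return min(self.centroids, key=lambda lbl: dist(self.centroids[lbl], x))

def accuracy(model, X, y):
    """Fraction of samples the model labels correctly."""
    return sum(model.predict(x) == lbl for x, lbl in zip(X, y)) / len(y)

def maybe_replace(current, retrained, X_val, y_val):
    """Keep the existing model unless the retrained one beats it on held-out data."""
    if accuracy(retrained, X_val, y_val) > accuracy(current, X_val, y_val):
        return retrained
    return current
```

The `maybe_replace` step mirrors the update policy in the text: the database of known-malware feature vectors grows, a retrained model is evaluated, and only a demonstrable improvement is promoted into the module.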

3.1.4 Behavioral analysis of malware detection
Behavioral malware detection with deep learning gathers metadata about malicious activities and identifies malicious code using deep learning techniques. IoT security can be provided by segregating devices that are carrying out malicious activities. Behavioral data is collected from heterogeneous devices, and a deep learning model is applied to identify persistent malware. Malicious activities are identified from the behavioral data, and devices exhibiting attacking behavior are proactively blocked. In particular, the zero-day attack problem is mitigated through continuous learning and updating of behavior data. With a long short-term memory model (LSTM-2), a malware detection approach has been proposed that considers the opcodes of diverse applications; LSTM-based deep learning achieves a higher malware detection rate than decision trees and SVMs [3]. However, historical data of malware samples is required to defend against constantly evolving malware. To combat injection attacks, an architecture has been proposed for cloud users, who are commonly exposed to malicious attacks; the architecture is robust in safeguarding against them [4]. Android-based malware classification methodologies categorize malware by analyzing the source code and access rights available on smart devices [5]. This static analysis methodology raises the detection rate when the malware originates from a single device, but struggles when malware is generated by several devices simultaneously. A signaling game-based intrusion detection system has been proposed to combat the proliferation of malware [6]. In deep learning, the feature type and its extraction are of key relevance for malware classification. Recent advancements in ANNs significantly reduce the required computational power and improve performance over conventional ML approaches.
Multilayer neural networks can be employed on larger training datasets for malware classification, which attracts deep learning researchers' attention. Specialized tools are usually required to parse or disassemble the malware binary, execute the malware in a virtual machine or sandbox environment, analyze the event log, and perform memory analysis. The behavioral malware detection scheme focuses on behavioral data collected from smart devices; a learning model is applied to detect the malware. Once malware is detected, the particular smart device is blocked from further processing. The infrastructure consists of nodes and servers: every smart device is considered a node, and the server is the controller for the complete infrastructure. The overview of behavioral malware detection is shown in Fig. 3.3. The nodes in the IoT environment acquire behavioral data related to the network, processes, memory, file system, and system calls. The server is updated periodically, at a rate determined by the volume of behavioral data and the available transmission rate. A dataset is built for training the deep learning model.
FIGURE 3.2 Malicious traffic detection using ML.
The learning data comprises



FIGURE 3.3 Overview of behavioral malware detection.

TABLE 3.1 Data required for malware detection.

Metadata
- Time: the time when the behavioral data is generated by IoT devices
- Behavior: behavioral data type and its corresponding value
- State: operational status of IoT devices
- Location: the value assigned for connecting IoT devices
- Device ID: unique ID of IoT devices

Behavioral data
- Memory: read/write data as bytes in memory
- Process: PID used for executing the process (fork, delete, list, etc.)
- Network: network type, buffer address, server IP, node IP, port
- System call: name, type, variable name, and type
- File system: access rights, file path, file size

PID, Process Identifier.

time, location, behavior, state transition, and device ID. The server sends the trained model to each node to detect malware, and the node updates itself with the most recently received trained model. If any malicious activities or codes are suspected, they are immediately identified by the node, and the server identifies in which node the malicious code occurs. The IoT infrastructure is thus safeguarded by isolating the infected node. Metadata is gathered continuously by the server to identify the malicious codes in the dataset. Time, behavior, state, location, and device ID are the metadata that help in detecting the malware. The behavioral data gathered to sense malware covers memory, process, network, system call, and file system activity. The metadata and behavioral data required for malware detection using deep learning are listed in Table 3.1. A behavioral malware detection deep learning framework consists mainly of behavior analysis and malware detection; the cloud platform is a significant module for both, alongside IoT environment modules comprising smart devices. Every program is represented as a behavior graph, which includes multiple API calls and is integrated with system resources. The cloud platform converts the behavioral features to binary vectors, which are given as input to a stacked autoencoder. The hidden-layer representation is then used as input to classifiers such as k-Nearest Neighbor (kNN), Decision Trees (DT), Support Vector Machine (SVM), and Naive Bayes



FIGURE 3.4 Behavioral malware detection—deep learning framework.

(NB). The semantics of malicious activities are learned and the malware is detected effectively. The complete process of behavioral malware detection using deep learning is depicted in Fig. 3.4.
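As a rough sketch of this pipeline, a single-hidden-layer autoencoder can be trained on binary behavior vectors and its hidden codes handed to a nearest-neighbor classifier; stacking several such layers would yield the stacked autoencoder described above. All dimensions, learning rates, and names here are illustrative assumptions, not the framework's actual configuration (biases are omitted for brevity):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class Autoencoder:
    """One hidden layer trained to reconstruct its input."""
    def __init__(self, n_in, n_hidden, seed=0):
        rnd = random.Random(seed)
        self.W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        self.W2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_in)]

    def encode(self, x):
        return [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.W1]

    def decode(self, h):
        return [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in self.W2]

    def train(self, X, epochs=300, lr=0.5):
        for _ in range(epochs):
            for x in X:
                h = self.encode(x)
                y = self.decode(h)
                # output-layer error (squared loss, sigmoid derivative)
                d_out = [(yi - xi) * yi * (1 - yi) for yi, xi in zip(y, x)]
                # hidden-layer error, backpropagated through W2
                d_hid = [hj * (1 - hj) * sum(self.W2[i][j] * d_out[i]
                                             for i in range(len(d_out)))
                         for j, hj in enumerate(h)]
                for i in range(len(self.W2)):
                    for j in range(len(h)):
                        self.W2[i][j] -= lr * d_out[i] * h[j]
                for j in range(len(self.W1)):
                    for k in range(len(x)):
                        self.W1[j][k] -= lr * d_hid[j] * x[k]

    def loss(self, X):
        return sum(sum((yi - xi) ** 2 for yi, xi in zip(self.decode(self.encode(x)), x))
                   for x in X) / len(X)

def knn_predict(codes, labels, query, k=1):
    """Classify a hidden-layer code by its nearest labeled neighbors."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = sorted(range(len(codes)), key=lambda i: dist(codes[i], query))[:k]
    votes = {}
    for i in nearest:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)
```

The point of the design is that the classifier never sees the raw binary vectors: it operates on the compressed hidden representation, which is where the autoencoder has learned the semantics of the behavior graph.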


3.2 Deep learning strategies for malware detection

Signatures for antivirus toolkits are constructed manually. Malware analysis techniques ease the task of determining the risk posed by a given sample and its objectives. Analysts must be able to identify the behavior of a sample while malware creators work to masquerade it, an ongoing trade-off between the two. Hence the need arises to develop innovative analysis tools and techniques capable of determining the presence of such countermeasures in the environment, which would otherwise hide the malevolent characteristics of the malware. Malware analysis can be divided into four categories, namely static, dynamic, hybrid, and memory analysis.

3.2.1 Feature extraction and data representation
The first and most important step in malware detection is identifying the characteristics of hostile software files; a variety of such characteristics have been referred to in the literature. Converting a large collection of inputs into a set of attributes is referred to as feature extraction. Feature extraction is applied when there is an enormous amount of data, which may be redundant and irrelevant: the specific features needed to determine whether a given input is malicious or benign are pulled out. After the feature extraction phase, the file is reduced to a meaningful set of attributes needed for further processing, and the computing complexity is also reduced. The output of the feature extraction phase is an array comprising the occurrence counts of the extracted attributes. System performance metrics such as accuracy and efficiency are also influenced by the choice of feature extraction method. Based on the literature [7], the feature extraction methods are classified as in Fig. 3.5. There are different ways in which a malware sample is rendered; the two familiar ways are hex representation and assembly representation. A hex representation is a chain of hexadecimal numbers indicating the machine code, for example 0D000401 8D 15 A8 80 63 00 BF 55 70 00 00 52 FF 72 7C 53. The first value indicates the starting address of the chain in memory; the following bytes represent the instruction code/data. Converting



FIGURE 3.5 Classification of feature extraction methods.


these hexadecimal numbers into a set of assembly instructions can be performed with the help of the recursive traversal algorithm and the linear sweep algorithm [8]. One of the automatic code analyzers, the Interactive Disassembler (IDA) tool, is an implementation of a recursive traversal algorithm, and it is used for converting hexadecimal numbers to a set of assembly instructions.

3.2.2 Static Analysis
The analysis of portable executable (PE) files without running them is known as static analysis. Tools are used to analyze the binary representation of a program. To hinder analysis, malware makes use of binary packers such as UPX and ASP Pack Shell [9]; the corresponding files must first be unpacked and decompressed before analysis. During compilation of the source code, information may be lost, which complicates the analysis. File classification is based on several features retrieved from the executable file format. Zero-day or anonymous malware can be detected through the PE-Probe method [10], in which detection follows the classification process. Static analysis can be performed with the help of a disassembler, debugger, and program analyzer, applying signatures and heuristics. Among the available methodologies, disassembly is used for the extraction of features such as byte sequences, opcodes, and the PE header. Apart from disassembly, other methodologies such as obfuscation analysis, reverse engineering, string analysis, and limited execution environments are used for code analysis. The various static analysis tools used for analyzing malware are described in Table 3.2. ML algorithms are deployed for static analysis because of the exponential growth in signatures. Feature engineering is the key factor behind successful ML-based malware detection; its importance lies in feature construction, which involves applying language models and counting loops and function calls over the PE headers. When the dataset is huge, feature selection is a challenging task, and researchers have used ML algorithms to select the features. A comparative study of static malware analysis is presented in Table 3.3.
Byte code n-gram features
A series of n bytes obtained from malware, forming the signature of the malware, is called a byte n-gram feature. By comparing the n-byte series with the binary code of a new input file, new malware can be detected with high



TABLE 3.2 Static tools and their features.
- Ultimate Packer for Executables: supports all operating systems. The file size is reduced, and the malware is masked as a .jpg or disseminated through e-mails.
- A portable malware assessment tool: a convenient set of XML configuration files and a parser is used to determine indicators and perform classification.
- A disassembler and debugger, deployed on all operating systems: used to analyze programs found to be malicious.
- Dependency Walker: deployed on all operating systems; capable of locating module dependencies, including infected, forwarded, implicit, and explicit ones.
- Resource Hacker: a free resource extraction utility; capable of modifying, moving, deleting, and renaming resources merged in the executable file.

TABLE 3.3 Static malware analysis: a comparison.
- Multinomial Naive Bayes algorithm [11]. Dataset: 3265 malicious and 1001 benign samples.
- Naive Bayes, decision trees, support vector machines, and boosting [12]. Dataset: 1971 benign and 1651 malignant executables in Windows PE format. Result: boosted J48 proved to be the best detector, with a receiver operating characteristic area of 0.996.
- Learning with Local and Global Consistency, a semisupervised algorithm [13]. Dataset: the 20 Newsgroups dataset. Result: an accuracy of 8% is obtained.
- Thirteen classifiers evaluated with N-fold cross-validation, among them J48, random forest, logistic model tree, and Naive Bayes Tree [14]. Dataset: 11,088 malwares from project [15] and 4006 benign programs [16]. Result: unknown malware detected with an accuracy of 96.28% using five classifiers; random forest proved to be the best classifier, with an accuracy of approximately 97.95%.
- Deep transfer learning, with resizing, reshaping, and duplication performed as preprocessing tasks [17]. Datasets: (1) a benchmark malware dataset, (2) the Microsoft malware dataset, and (3) a dataset of 16,518 benign and 10,639 malicious files. Result: accuracy of 99.67%, with a false positive rate of 0.750%, a true positive rate of 99.94%, and an F1 score of 99.73%.
accuracy. Here, 1-gram means comparing byte-by-byte features. If a compared byte does not fall within the 256 combinations, a placeholder symbol is included, implying that the byte is removed from the input chain. 2-gram means comparing two bytes at a time, giving 256² combinations. The IDA Pro and kfNgram tools [18] can be used to produce n-gram chains.
Opcode n-gram features
As a first step, the opcodes have to be extracted from the executable files. The assembly language instruction that represents an operation is called an opcode. New opcodes obtained from executable files can reveal new malicious code, since an opcode represents what type of operation is performed and whether it is performed in the



memory or a register. Compared to byte n-gram extraction, opcode n-gram extraction performs more effectively and efficiently; in the state of the art, the opcode 2-gram sequence was observed to give the best results [7].
Portable executables
The structural information of PEs reveals whether a file is infected or manipulated, and this process needs little computation. Features can be extracted from the file pointer, import section, export section, PE header, and resource directory. The file pointer denotes the location of the file on disk or in the processor; the import section lists the DLL files used by the file; the export section indicates the functions that are exported. The PE header contains the file size, code size, creation time, and debug size. Using the PE header information, the logical and physical structure of the binary file can be analyzed for malicious code. The resource directory includes the cursor usage and the dialogs that occurred in the file. Using the features obtained from the PE, it is possible to identify whether the file has been attacked and what type of attack was made.
String feature
The printable encoded characters in PE and non-PE executables, such as windows, message boxes, library, getversion, getmodulefilename, and getstartupinfo, can also help in identifying malicious code. String features are extracted from the encoded characters for the identification of vicious code; when used, they give greater accuracy than the PE features and n-gram features [7].
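The byte and opcode n-gram features above reduce to a sliding-window count plus a projection onto a fixed vocabulary. A minimal sketch (the function names and the sample opcodes are illustrative assumptions):

```python
from collections import Counter

def byte_ngrams(data: bytes, n: int = 2) -> Counter:
    """Count every window of n consecutive bytes in a binary."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def opcode_ngrams(opcodes, n: int = 2) -> Counter:
    """Count every window of n consecutive opcodes from a disassembly."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

def feature_vector(counts: Counter, vocabulary) -> list:
    """Project one sample's n-gram counts onto a fixed, corpus-wide vocabulary,
    yielding the occurrence array used as classifier input."""
    return [counts.get(g, 0) for g in vocabulary]
```

With n = 2 over bytes there are 256² possible grams, so in practice the vocabulary is pruned to the grams most frequent (or most discriminative) across the training corpus before the vectors are built.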

3.2.3 Dynamic analysis
In dynamic analysis, features are extracted by observing the program's runtime behavior. Function calls, such as system calls, information flow tracking, Windows API calls and the parameters passed between them, and instruction sets, are features that distinguish benign from malicious code. Function calls are mainly used for code reusability; the operating system provides APIs that serve application programs by accessing low-level hardware through system calls. The API usage is recorded in a log file, and the API call sequence in the log is analyzed to detect the presence or absence of malicious code. In the literature [19], 3-API-call-grams and 4-API-call-grams have been used for analysis: the API call grams are sorted by frequency and the low-frequency grams are eliminated. The accuracy of dynamic analysis was found to be higher than that of static analysis; researchers have therefore combined static and dynamic features and obtained the best accuracy rates. The program's runtime behavior can be represented as a directed subgraph using the system calls and the parameters passed during the calls; comparing this behavior subgraph with the subgraphs of malware families, the code can be checked for genuineness. Significant patterns or signatures are detected, since they denote the malicious features in the file. Dynamic analysis is thus resistant to malware evasion methods. Various earlier research works are aggregated in Table 3.4.
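The API-call-gram construction with low-frequency pruning described above can be sketched as follows (the function names, the log format, and the threshold are assumptions, not those of [19]):

```python
from collections import Counter

def api_call_grams(calls, n: int = 3) -> Counter:
    """Build n-grams over an API-call sequence taken from a sandbox log."""
    return Counter(tuple(calls[i:i + n]) for i in range(len(calls) - n + 1))

def prune_low_frequency(grams: Counter, min_count: int = 2) -> Counter:
    """Eliminate grams seen fewer than min_count times (threshold illustrative)."""
    return Counter({g: c for g, c in grams.items() if c >= min_count})
```

Sorting the surviving grams by frequency and keeping the top slice gives the fixed feature set against which each new sample's call log is compared.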

3.2.4 Hybrid analysis
Islam et al. combined both static and dynamic analysis methods to attain optimal performance [27]. Ijaz et al. proposed a malware analysis [28] that extracted around 2300 features from summary information, alterations of registry keys, APIs, and DLLs for dynamic analysis, while around 92 features were extracted from PE files for static analysis; the Cuckoo Sandbox was used to analyze the extracted features. In this study, dynamic analysis gave 94.64% accuracy, owing to the restrictions imposed on network access, while static analysis offered 99.36% accuracy. The study found that for efficient malware detection, it is important to extract static features in a dynamic environment. The softplus, firseria, and erkern malware families were detected when static and dynamic features were united [29]. It was observed that static and dynamic features, when combined, remove erroneous probabilities by balancing one feature with another; they also avoid poor results from individual features, thereby improving the integrated results.

3.2.5 Image processing techniques
The structure of packed binary samples is visualized by applying image processing principles, through which 2D grayscale images are constructed. Malware can thus be visualized as an image, as shown in Fig. 3.6.
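The byte-to-image conversion can be sketched in a few lines: each byte becomes one grayscale pixel and the sequence is wrapped into rows of a fixed width. The width and zero-padding policy here are illustrative assumptions; in practice the width is often chosen from the file size.

```python
def bytes_to_image(data: bytes, width: int = 16):
    """Wrap a byte sequence into a 2D grid of 0-255 grayscale pixels,
    zero-padding the final partial row."""
    rows = [list(data[i:i + width]) for i in range(0, len(data), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] += [0] * (width - len(rows[-1]))
    return rows
```

The resulting grid can be saved as a grayscale image or fed directly to an image classifier, which is the representation the CNN-based approaches later in this chapter build on.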



TABLE 3.4 Dynamic analysis: a comparison.
- Feature representation relating API calls with their arguments; a new sequence is formed by referring to the name of the API call [20]. Remark: a long feature vector is required, which may result in the loss of the API sequence pattern.
- The whole string is partitioned by a special character; the model is an eight-layer deep belief network trained with the cross-entropy loss [21]. Dataset: a small dataset with 600 test samples. Result: an accuracy of 98.6%.
- A combination of CNN and LSTM, with two CNN layers; a kernel of size 3 is deployed to emulate the 3-gram method, and the time-series sequence is also managed [22]. Dataset: samples collected from VirusShare [23], Maltrieve [24], and other private sources. Result: an average precision of 85.6% and recall of 89.4%.
- Heterogeneous features retrieved from attributes such as category, arguments, and API name are fed to multiple gated CNN models; their output is managed by a bidirectional LSTM to capture the sequential association of API calls [25]. Dataset: PE files labeled with 12 commercial antivirus engines. Result: an accuracy of 95.33 ± 0.40.
- A neural network without feature engineering; learning is automated from dynamic analysis reports, with document classification by word sequence as the backbone of the model, which predicts whether a sample is malicious [26]. Datasets: the Vendor and Ember datasets. Result: an accuracy of 0.870 on the Vendor dataset and 0.867 on the Ember dataset.

FIGURE 3.6 Malware representation as an image (8-bit vector → grayscale image → malware image).

Hashing is an effective way to retrieve data from a given hash value; two kinds of hashing, cryptographic hashing and robust hashing, are used. Malware images are used to construct a classification model. In this model construction [30], global features are utilized, and univariate and recursive feature evaluation is applied to select the best features. The detection method then works on the probability of each malware family to be classified, using two-phase detection: a uniform threshold identifies the malware in the first phase, and the model performs the definitive classification. A resilient image hashing technique is applied to the malware images, and classification accuracy is thereby enhanced. An innovative framework has also been suggested for the classification process, in which local and global images are visualized, and local features are visualized on the basis of the global image. The objective of local feature visualization is to include the local features retrieved from the malware; these features are used to create local images, so that all local features are associated and denoted in the form of a single image. The proposed model facilitates the visualization of the local feature. The flow of the model [31] is shown in Fig. 3.7. The activities performed in the preprocessing stage are local feature visualization and global image generation. The job of the binary extractor is to construct the global image from the binary information that has



FIGURE 3.7 Flow of the model.

been obtained from the bytes. The local feature extractor plays a significant role in retrieving the features from ASM files; apart from retrieval, visualization is also done. The obfuscation checker identifies whether the extracted features are obfuscated. If obfuscated malware is observed, the GAN executor consumes the local features as input; if the malware is not obfuscated, the local image is constructed directly. The GAN trainer accepts the global and local images as input. Next, the GAN executor accepts as input the global image representing the obfuscated malware and generates the local image as output. The global and local images thus generated are combined into a new image with the help of the image merger, which is then fed as input to the CNN trainer, where the CNN is trained. The training and classification process, combining the GAN and CNN, proceeds as follows:
1. Initially, GAN training and execution are performed.
2. The global and local images are combined.
3. Training and classification are carried out by deploying the CNN.
Malware classification is done using the merged image and the trained CNN. In [32], image processing techniques are used to discover portable document file (PDF) malware: image visualization techniques are applied to the PDF files to convert them into grayscale images, and several image features that denote the discrete visual characteristics of PDF malware and benign PDF files are retrieved. The classification model is constructed using ML algorithms on the Contagio PDF malware dataset. The experimental results show that the proposed method is effective in repelling reverse impersonation attacks. The method proposed in [33] categorizes the malware family based on its visual features.
Opcode sequences retrieved from a few malware samples are used to construct RGB-colored pixels on image matrices, and similarities between image matrices are then computed. Here, packing and obfuscation are also analyzed. Computational overheads are reduced by retrieving the opcode sequences from blocks chosen on the basis of essential features such as APIs and functions. Illustrative images are produced and used to categorize anonymous samples. Image matrices are employed to determine the relationship between images; this comparison is done only after vectorizing the RGB-colored pixel information of the images and computing the pixel matches. Earlier, researchers implemented imaging technology to classify malware using a single feature, such as API call orders, opcodes, or binary files, but these features proved inadequate to locate the malware features. To overcome this issue, multiple features have been integrated: an ML system proposed in [34] comprises three modules, namely data processing, decision making, and malware detection. Features such as the opcode sequence, import functions, and grayscale images are provided as input to the decision-making process, and classification is then done by deploying the shared nearest neighbor clustering algorithm; with this method, an accuracy rate of 98.9% has been achieved. A multiclassification technique based on image processing has been devised in [35]. It works on the principle of SVMs; the feature vector is constructed from several features, including GIST, discrete wavelet transforms, and Gabor wavelets. A classification accuracy of 98.88% has been obtained

Behavioral malware detection and classification using deep learning approaches Chapter | 3


and eight malware families have been categorized. A work based on ensemble learning and multiple features has been proposed in [36]. Its inputs are RGB image matrices, grayscale images, and M-images. CNNs serve as the base method for the bagging ensemble, and the method is known as a malware homology analysis system. Voting, learning, and selective combination are proposed as the ensemble strategies. The process consists of four phases:
1. Data preprocessing: the executable file is converted into two preprocessed files, a disassembly file and a binary file, which are then analyzed.
2. Feature construction and extraction: various representations are considered:
   - The binary file symbolizes the arbitrary outline features of the malware.
   - The control flow represents the instruction order that characterizes the attributes of the malware.
   - System calls are used to associate the control flow execution.
   Together, the representations formed in these steps provide complete information about the malware. Grayscale images are obtained from the binary stream file, and the control flow opcodes are extracted. In addition, RGB images and M-images are generated from the control flow opcodes and system call graphs, respectively.
3. Bagging: the three feature views are taken as input, and a CNN is applied, through which the method shows remarkable classification performance.
4. Ensemble learning reintegration: the initial results of the various ensemble strategies are integrated to produce the final classification outcome.
The next section describes the architecture of CNNs for malware detection.


3.3 Architecture of CNNs for malware detection

Every day, thousands of malicious files are created to attack software. Researchers use a wide variety of techniques to design antivirus software that identifies malicious files. Recently, researchers have been applying deep learning techniques, especially CNNs, to the detection of malicious files [37]. When a CNN is applied, the sequential process to be followed is shown in Fig. 3.8.

3.3.1 Preprocessing

The preprocessing step converts the raw malware file into a matrix of numeric values for easy processing. Certain difficulties arise while converting the text into numeric values; applying the CNN at the character level improves performance. To apply a CNN, the compiled source code of the input file is converted to a binary file, and the binary file is then converted into an image. Every byte in the malware file takes a value between 0 and 255, which in turn denotes a pixel in a grayscale image. The grayscale images of malware belonging to a particular family share the same layout and texture. RGB can also be used to represent the malware file; in an RGB image, a pixel is denoted by 3 bytes, one byte each for the red, green, and blue channels. The CNN treats a grayscale image of size n × m as an n × m × 1 input volume, while the corresponding RGB image [10] is treated as a ⌈n/√3⌉ × ⌈m/√3⌉ × 3 volume.
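The byte-to-pixel mapping can be sketched as follows; the row width of 16 and the use of zero padding for the last row are illustrative assumptions, not choices made in the cited works:

```python
import math
import numpy as np

def bytes_to_grayscale(data: bytes, width: int = 16) -> np.ndarray:
    """Map every byte (0-255) to one grayscale pixel, zero-padding the last row."""
    rows = math.ceil(len(data) / width)
    buf = np.frombuffer(data, dtype=np.uint8)
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(buf)] = buf
    return padded.reshape(rows, width)

def rgb_dims(n: int, m: int) -> tuple:
    """View an n x m grayscale byte matrix as RGB: roughly (n/sqrt(3)) x (m/sqrt(3)) x 3."""
    s = math.sqrt(3)
    return (math.ceil(n / s), math.ceil(m / s), 3)

img = bytes_to_grayscale(bytes(range(40)))  # 40 bytes -> 3 rows of 16 pixels
print(img.shape)        # (3, 16)
print(rgb_dims(3, 16))  # (2, 10, 3)
```

The same matrix can be saved as a PNG for visual inspection, which is how the family-level layout and texture similarities become apparent.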

3.3.2 Classification using CNNs

The image generated in the preprocessing step is used to train CNNs. The images may be of varied sizes, as malware files are of different sizes. The spatial pyramid pooling (SPP) technique can be used between the convolution layer and the fully connected layer to convert the varied input sizes to a fixed size. The common architecture of CNNs is shown in Fig. 3.9. A simple CNN includes an input layer, a convolution layer, a max-pooling layer, and an output layer. In a 1D convolution layer, the filter slides over the 1D data chain and the best features are retrieved.

FIGURE 3.8 Overview of a malware detection system.

Thus the features


Applications of Computational Intelligence in Multi-Disciplinary Research

FIGURE 3.9 The common architecture of a CNN with one convolution layer and one densely connected layer.

retrieved from each filter are combined to form a new feature called the feature map. Hyperparameter tuning is applied to find the number of filters to be used to obtain the feature map. The output is processed by the rectified linear unit (ReLU), an element-by-element nonlinear activation function. The pooling layer shrinks the dimension of the best features by using min, max, or average pooling. The output of the pooling layer is fed to an LSTM to capture information about the data chain, and the final, fully connected layer classifies the features obtained from the LSTM. A CNN's performance can be improved by using a 2D convolution layer with a 2D pooling layer, or a 2D convolution layer with a 3D pooling layer.
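A minimal sketch of the 1D convolution, ReLU, and pooling pipeline described above; the filter values and sizes are illustrative, not taken from any cited study:

```python
import numpy as np

def conv1d_valid(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Slide filter w over x ("valid" mode) to produce a feature map."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def relu(x: np.ndarray) -> np.ndarray:
    """Element-by-element nonlinear activation."""
    return np.maximum(x, 0)

def max_pool(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling; any trailing remainder is dropped."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

x = np.array([1.0, -2.0, 3.0, 0.0, 1.0, 2.0])   # a toy 1D data chain
w = np.array([1.0, 0.0, -1.0])                  # a simple edge-like filter
feature_map = max_pool(relu(conv1d_valid(x, w)))
print(feature_map)  # [0. 2.]
```

In a real network these filter weights are learned during training and one feature map is produced per filter; here a single fixed filter keeps the arithmetic traceable by hand.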

3.3.3 Evaluation

The evaluation phase discriminates the input files as benign or malware, which in turn estimates the accuracy of the classifier. The CNN produces a 0 or a 1, representing benign and malware files, respectively. In the evaluation process, the test data is fed into the network and the confusion matrix is generated, from which accuracy metrics such as recall, precision, F1 score, and false-positive rate are obtained.
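A minimal sketch of deriving these metrics from predictions; the label encoding follows the text (1 = malware, 0 = benign), and the sample labels are invented for illustration:

```python
def evaluate(y_true, y_pred):
    """Confusion-matrix counts and the derived metrics named in the text."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0   # false-positive rate
    return {"precision": precision, "recall": recall, "f1": f1, "fpr": fpr}

print(evaluate([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```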


Comparative analysis of CNN approaches

Though various ML algorithms are used in detecting malware files, there is still scope for investigation using newer architectures such as CNNs. Gibert [37] categorized malware in two different ways, viz. a CNN and Yoon Kim's architecture. When applying the CNN, the file's binary content is converted to grayscale images, as this makes it easy to match the current file against malware samples: upon comparison, images of malware samples from the same family appear similar, so new malware files are easy to classify, and even granular changes can be observed. This method classifies malware files faster than other solutions, and the automatic detection of significant features from the images makes the CNN appropriate for malware detection. Sharma et al. [38] used a 1D convolution, which has simple computation, for detecting malware; it outperforms other classical algorithms in terms of training time and accuracy. Ke He [39] applied SPP with a CNN, testing its effectiveness with both grayscale and RGB images. SPP was introduced to handle varying input sizes, but the method required a lot of memory, so a divide-and-conquer method was used: the larger image was divided into smaller subimages, each checked for maliciousness. It was found that using RGB images with a ResNet50 network without SPP performed best in terms of classification; the inclusion of SPP in the CNN does not give improved results in terms of accuracy, and alternate methods are needed. Table 3.5 describes the parameters used in the various studies and their values. From the table, it was found that a convolutional network with two convolution layers and one densely connected layer (CNN 2C 1D) can detect malware files with 99.76% accuracy.
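The divide-and-conquer step, splitting a large image into subimages, can be sketched as follows; the tile size and zero-padding at the edges are illustrative assumptions, not details from [39]:

```python
import numpy as np

def split_into_subimages(img: np.ndarray, tile: int):
    """Split a large grayscale image into tile x tile subimages (edges zero-padded)."""
    h, w = img.shape
    H = -(-h // tile) * tile          # round up to a multiple of tile
    W = -(-w // tile) * tile
    padded = np.zeros((H, W), dtype=img.dtype)
    padded[:h, :w] = img
    return [padded[i:i + tile, j:j + tile]
            for i in range(0, H, tile) for j in range(0, W, tile)]

tiles = split_into_subimages(np.arange(25, dtype=np.uint8).reshape(5, 5), 3)
print(len(tiles), tiles[0].shape)  # 4 (3, 3)
```

Each subimage can then be classified independently, and a file is flagged if any of its tiles is judged malicious.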



TABLE 3.5 Comparative analysis of CNN approaches.

Approach             Dataset                                  Parameters and accuracy
Gibert [37]          Microsoft BIG Malware Dataset            CNN 1C 1D: 98.57%; CNN 2C 1D: 99.76%; CNN 3C 2D: 99.38%
Sharma et al. [38]   Microsoft BIG Malware Dataset            TF-IDF + N-gram: 93.7% vs. proposed model: 99.2%; TF-IDF + N-gram: 100% vs. proposed model: 99%; TF-IDF + N-gram: 91% vs. proposed model: 97%
Ke He [39]           Andro-Dumpsys study, Korea University    Grayscale, plain, no SPP: 95%; RGB, ResNet, SPP: 6.25%; Grayscale, plain, no SPP: 62%; RGB, ResNet, SPP: 6.6%

Challenges and future research directions

Various challenges were faced by researchers while applying CNNs for malware detection. The following points give a prelude to these challenges:







- A good and large dataset is necessary for training the CNN, and class imbalance must be avoided [40]. Due to class imbalance, the algorithm may classify most of the malware as the majority class even when it belongs to the minority class. Accuracy alone cannot be relied on, since it might mislead the result; it is better to follow other metrics such as precision, recall, and F1 score, and the receiver operating characteristic curve can also be used to assess discrimination.
- Owing to the lack of a benchmark dataset, researchers are forced to use the only available Microsoft BIG Malware Dataset. Unluckily, the headers are not included in the byte code, and it is difficult to examine the executables dynamically.
- Attackers use complicated techniques to evade malware detection, so adversarial techniques should be introduced into malware detection to surface new malware.
- When converting malware files into images, only grayscale or RGB images are used. Instead, more color channels could be used to improve the accuracy of detection.
- During the training of the CNN, the images can be flipped, cropped, or resized, and redundant data can be added at random positions for flexibility.
- The complicated and packed nature of malware reduces the effectiveness of static analysis; extracting static features in a dynamic environment is more suitable for malware detection.
- Cybersecurity analysts prefer rule-based or signature-based systems for malware detection over neural models, as it is easier to diagnose any problem that may arise. Hence research is needed on the interpretability of models in malware classification and detection [41].
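The augmentation ideas above (flipping images and adding redundant data at random positions) can be sketched as follows; the specific operations and probabilities are illustrative assumptions, not a method from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Randomly flip the malware image and overwrite a small random patch with noise."""
    out = img.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                  # horizontal flip
    h, w = out.shape
    i = rng.integers(0, h - 2)                # top-left corner of a 2x2 patch
    j = rng.integers(0, w - 2)
    out[i:i + 2, j:j + 2] = rng.integers(0, 256, (2, 2))  # "redundant data" patch
    return out

img = np.zeros((8, 8), dtype=np.uint8)
aug = augment(img)
print(aug.shape)  # (8, 8)
```

Note that, unlike natural photographs, malware images encode byte order, so such transformations should be validated: aggressive augmentation may destroy the very layout cues the CNN relies on.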



Advancements in technology have driven various innovations in the integration of the IoT, resulting in smart devices for several applications. There is therefore a likelihood of vulnerabilities occurring, and thus of malevolent users compromising the system. This chapter provided a detailed overview of digital forensics and explained the evolution of malware. The analysis of malware can be done in various ways; a detailed review of the static and dynamic analysis of malware has been presented, along with an exploration of the deployment of CNNs for malware analysis. It was found that a convolutional network with two convolution layers and one densely connected layer can detect malware files with 99.76% accuracy. Finally, this chapter provided deep insight into the research challenges that might drive researchers to focus their attention on the integration of various techniques.

References

[1] M. Ratnayake, Z. Obertova, P.M. Gabriel, H. Broker, M. Braukmann, Barkus, The juvenile face as a suitable age indicator in child pornography cases: a pilot study on the reliability of automated and visual estimation approaches, International Journal of Legal Medicine 128 (5) (2014) 803–808.



[2] M. Scanlon, Battling the digital forensic backlog through data deduplication, in: 2016 Sixth International Conference on Innovative Computing Technology (INTECH), 2016, pp. 10–14.
[3] H. Haddad Pajouh, A. Dehghantanha, R. Khayami, K.-K.R. Choo, A deep recurrent neural network based approach for Internet of Things malware threat hunting, Future Generation Computer Systems 85 (2018) 88–96.
[4] A. Bedi, N. Pandey, S.K. Khatri, Analysis of detection and prevention of malware in cloud computing environment, in: Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 2019, pp. 918–921.
[5] M. Nikola, A. Dehghantanha, K.-K.R. Choo, Machine learning aided Android malware classification, Computers & Electrical Engineering 61 (2017) 266–274.
[6] S. Shen, L. Huang, H. Zhou, S. Yu, E. Fan, Q. Cao, Multistage signaling game-based optimal detection strategies for suppressing malware diffusion in fog-cloud-based IoT networks, IEEE Internet of Things Journal 5 (2) (2018) 1043–1054.
[7] R. Smita, S. Hiray, Comparative analysis of feature extraction methods of malware detection, International Journal of Computers and Applications 120 (5) (2015).
[8] M. Ahmadi, U. Dmitry, S. Stanislav, T. Mikhail, G. Giorgio, Novel feature extraction, selection and fusion for effective malware family classification, in: Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy, 2016, pp. 183–194.
[9] Y. Ye, T. Li, D. Adjeroh, S.S. Iyengar, A survey on malware detection using data mining techniques, ACM Computing Surveys 50 (3) (2017) 1–40.
[10] M. Shafi, S. Tabish, M. Farooq, PE-probe: leveraging packer detection and structural information to detect malicious portable executables, in: Proceedings of the Virus Bulletin Conference (VB), 2009, pp. 29–33.
[11] M.G. Schultz, E. Eskin, F. Zadok, Data mining methods for detection of new malicious executables, in: Proc. of the 22nd IEEE Symposium on Security and Privacy, 2001.
[12] J.Z. Kolter, M.A. Maloof, Learning to detect and classify malicious executables in the wild, Journal of Machine Learning Research 6 (2006) 2721–2744.
[13] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, B. Schölkopf, Learning with local and global consistency, in: Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference, 2003.
[14] A. Sharma, S.K. Sahay, An effective approach for classification of advanced malware with high accuracy, International Journal of Security and its Applications 10 (4) (2016) 249–266.
[15] A. Nappa, M.Z. Rafique, J. Caballero, Driving in the cloud: an analysis of drive-by download operations and abuse reporting, in: Proceedings of the Detection of Intrusions and Malware, and Vulnerability Assessment, Springer Berlin Heidelberg, 2013, pp. 1–20.
[16] J. Canto, M. Dacier, E. Kirda, C. Leita, Large scale malware collection: lessons learned, in: Proceedings of the 27th International Symposium on Reliable Distributed Systems and Experiment Measurements on Resilience of Distributed Computing Systems, 2008.
[17] L. Chen, Deep transfer learning for static malware classification, 2018.
[18] C. Liangboonprakong, S. Ohm, Classification of malware families based on n-grams sequential pattern features, in: 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), IEEE, 2013, pp. 777–782.
[19] P.V. Shijo, A. Salim, Integrated static and dynamic analysis for malware detection, Procedia Computer Science 46 (2015) 804–811.
[20] Z. Salehi, M. Ghiasi, A. Sami, A miner for malware detection based on API function calls and their arguments, in: The 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP 2012), IEEE, 2012, pp. 563–568.
[21] O.E. David, N.S. Netanyahu, DeepSign: deep learning for automatic malware signature generation and classification, in: 2015 IJCNN, IEEE, 2015, pp. 1–8.
[22] B. Kolosnjaji, A. Zarras, G. Webster, C. Eckert, Deep learning for classification of malware system call sequences, in: Australasian Joint Conference on Artificial Intelligence, Springer, 2016, pp. 137–149.
[23] J.-M. Roberts, VirusShare, 2015.
[24] K. Maxwell, Maltrieve, April 2015.
[25] Z. Zhang, P. Qi, W. Wang, Dynamic malware analysis with feature engineering and feature learning, in: AAAI, 2020.
[26] C. Jindal, C. Salls, H. Aghakhani, K.R. Long, C. Krügel, G. Vigna, Neurlux: dynamic malware analysis without feature engineering, in: Proceedings of the 35th Annual Computer Security Applications Conference, 2019.
[27] R. Islam, R. Tian, L.M. Batten, S. Versteeg, Classification of malware based on integrated static and dynamic features, Journal of Network and Computer Applications 36 (2) (2013) 646–656.
[28] M. Ijaz, M. Hanif Durad, M. Ismail, Static and dynamic malware analysis using machine learning, in: 2019 16th International Bhurban Conference on Applied Sciences and Technology (IBCAST), IEEE, 2019, pp. 687–691.
[29] Y.S. Yen, Z.W. Chen, Y.R. Guo, M.C. Chen, Integration of static and dynamic analysis for malware family classification with composite neural network, arXiv preprint arXiv:1912.11249, 2019.
[30] W.-C. Huang, F. Di Troia, M. Stamp, Robust hashing for image-based malware classification, 2018, pp. 451–459.
[31] T. Poongodi, D. Sumathi, P. Suresh, B. Balusamy, Deep learning techniques for electronic health record (EHR) analysis, in: Bio-inspired Neurocomputing, Studies in Computational Intelligence, Springer, 2020, pp. 73–103.
[32] A. Corum, D. Jenkins, J. Zheng, Robust PDF malware detection with image visualization and processing techniques, in: 2nd International Conference on Data Intelligence and Security (ICDIS), South Padre Island, TX, 2019, pp. 108–114.
[33] K.S. Han, K.B. Joong, I.E. Gyu, Malware analysis using visualized image matrices, Scientific World Journal (2014) 132713.



[34] L. Liu, B.S. Wang, B. Yu, Q.X. Zhong, Automatic malware classification and new malware detection using machine learning, Frontiers of Information Technology & Electronic Engineering 18 (9) (2017) 1336–1347.
[35] A. Makandar, A. Patrot, Malware class recognition using image processing techniques, in: International Conference on Data Management, Analytics and Innovation, IEEE, 2017, pp. 76–80.
[36] D. Xue, J. Li, W. Wu, Q. Tian, J. Wang, Homology analysis of malware based on ensemble learning and multifeatures, PLoS One 14 (8) (2019) e0211373.
[37] D. Gibert, Convolutional Neural Networks for Malware Classification, Universitat Rovira i Virgili, Tarragona, Spain, 2016.
[38] A. Sharma, M. Pasquale, M.H.R. Khouzani, Malware detection using 1-dimensional convolutional neural networks, in: 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), IEEE, 2019, pp. 247–256.
[39] K. He, D.-S. Kim, Malware detection with malware images using deep learning techniques, in: 2019 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), IEEE, 2019, pp. 95–102.
[40] D. Gibert, C. Mateu, J. Planes, The rise of machine learning for detection and classification of malware: research developments, trends and challenges, Journal of Network and Computer Applications 153 (2020) 102526.
[41] T. Poongodi, M. Karthikeyan, D. Sumathi, Mitigating cooperative black hole attack by dynamic defense intrusion detection scheme in mobile ad hoc network, Asian Journal of Information Technology 15 (23) (2016) 4890–4899.


Chapter 4

Optimization techniques and computational intelligence with emerging trends in cloud computing and Internet of Things

Jayesh S. Vasudeva¹, Sakshi Bhargava² and Deepak Kumar Sharma³

¹Department of Instrumentation and Control Engineering, Netaji Subhas University of Technology (formerly known as Netaji Subhas Institute of Technology), New Delhi, India; ²Department of Physical Sciences and Engineering, Banasthali Vidyapith, Tonk, India; ³Department of Information Technology, Netaji Subhas University of Technology (formerly known as Netaji Subhas Institute of Technology), New Delhi, India



4.1 Introduction

In a world full of digital innovation, technologies such as the Internet of Things (IoT), 5G wireless networking, and even embedded artificial intelligence continue to add to the pace of change, and with these swift changes, billions of new applications and devices are coming online to monitor, process, measure, analyze, and respond to apparently boundless streams of data. Initially, data was held in the cloud alone. But what is the cloud? Cloud computing is the delivery of computing services, from applications to storage and processing power, across the internet, helping users receive services far cheaper than physical/offline systems. With the help of the cloud, users can progress, innovate, and build without being limited by a cap on their resources, and most of these services are pay-as-you-go models (that is, one pays only for the services used, and disabled services no longer need to be paid for). The size of the ocean of data, however, became too much for the cloud to handle. An innovative architecture was needed to resolve latency, unreachability, increasing cost, and sensitive data security concerns. That is where fog/edge computing comes into the picture: fog is a distributed network, one that extends the continuum between the cloud and everything else. It permits the dispersal of critical core functions, namely computation, communication, regulation, storage, and decision-making, moving them closer to where data originates. That is why fog is not just a communal architecture but a vibrant network for situations where latency, confidentiality, and other data-intensive issues are a cause for concern. One might ask, what is the motivation behind optimization in fog? Before answering that, we need to understand what optimization is. Optimization is the procurement of the best outcome under a given condition.
Mathematically, it involves the maximization or minimization of functions within given constraints. Optimization makes it possible to apply theoretical mathematics on a wider scale; it helps us prepare a course of action to best utilize our resources. Nowadays, optimization is practically everywhere: time management, industrial planning, etc. Its origin can be dated back to 300 BC, when Euclid recognized the minimal distance between two points to be the length of the straight line joining them and proved that the square has the maximum area among rectangles with a fixed perimeter. Around 100 BC, Heron of Alexandria demonstrated in his Mechanica and Catoptrica [1] that light travels between two points along the path of shortest length when reflecting from a mirror. Optimization's introduction into industry can be traced back to 1615, when J. Kepler came up with the ideal dimensions of a wine barrel; he also formulated an early form of the secretary problem [2] when he began looking for a new wife. After World War II, optimization developed as operations research, headed by J. von Neumann. The field of algorithmic




FIGURE 4.1 Fog architecture.

research expanded as electronic computation developed. It was important to wisely manage millions of men, machines, and munitions; beyond that, practical optimization was also used to search for the best bombing sites and patterns for antisubmarine patrols. In this chapter, we discuss optimization problems and the techniques to solve them. We highlight the optimization problem for emerging cloud technologies (e.g., fog computing), as it is believed that teaching with an example helps to better understand the concept. We therefore base the working of optimization techniques and their uses on the management of fog resources (computational power, storage, network, etc.).

4.1.1 Introduction to optimization

Optimization is diverse in nature; it is not limited to a single field and is applied in problems involving any quantitative discipline, from computer science and other engineering fields to resource management and economics. Moreover, most decision-making procedures involve the explicit solution of an optimization problem to make the "best" choice. In addition to their role per se, optimization problems arise as critical subproblems within other processes, such as the one we discuss further in this chapter. By now, we are clear with the idea that while solving an optimization problem, our aim is to find the maximum or minimum of an objective function (which might not be true in the case of a feasibility problem,¹ which has no objective function). On a side note, instead of specifying the minima or maxima every time we come up with a solution to a problem, we will use "optima" or "optimum."
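As a toy example linking back to Euclid's fixed-perimeter result mentioned earlier, a brute-force search for the rectangle of maximum area; this naive sketch is purely illustrative, not a technique advocated in this chapter:

```python
def best_rectangle(perimeter: float, steps: int = 1000):
    """Among rectangles with the given perimeter, search for the side length w
    that maximizes the area w * (perimeter/2 - w)."""
    best_w, best_area = 0.0, 0.0
    for k in range(1, steps):
        w = (perimeter / 2) * k / steps   # one side; the other is perimeter/2 - w
        area = w * (perimeter / 2 - w)
        if area > best_area:
            best_w, best_area = w, area
    return best_w, best_area

w, area = best_rectangle(20.0)
print(w, area)  # the optimum is the square: w = 5.0, area = 25.0
```

Here the objective is the area, the constraint is the fixed perimeter, and the search confirms that the optimum is attained by the square.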

4.1.2 Introduction to cloud computing with emphasis on fog/edge computing

Fog computing was first introduced by Cisco [3] in 2012. It is a version of cloud computing nearer to the edge devices but not as powerful as cloud computing. One can think of it like an ice-cream truck: closer to its customers, but probably without as many flavors as in the parlor. Like an ice-cream truck, fog can provide easy access to services (computational power, storage, software, etc.) to its customers. The fog architecture (Fig. 4.1) illustrates the introduction of a new layer that is responsible for solving latency-sensitive computing problems. Fog can also be defined as a wirelessly distributed computing platform in which complex latency-sensitive tasks can be processed via a group of shared resources at the IoT gateway level in a locality [4].

1. In this case, the goal is to find a set of variables that satisfy the constraints. There is no particular agenda to optimize anything and hence no reason to define an objective function, e.g., in the design of integrated circuit layouts.

Optimization techniques and computational intelligence Chapter | 4


The main goal of this chapter is to help the reader understand:
- Applying optimization in practical problems
- Different optimization techniques
- The key concepts behind fog computing
- Fog resource management as an optimization problem
- Developments so far in the field of fog optimization.

The rest of the chapter is organized as follows. In the following section, a detailed explanation of optimization techniques is given, covering what an optimization problem is, what elements it includes, its classifications, and the types of solutions that can be proposed. Section 4.3 covers the concepts of fog/edge computing and sets the framework for linking the technologies with each other. Section 4.4 highlights the optimization of fog resources that can be included in a model while preparing a similar system. Section 4.5 presents a couple of case studies based on the principles of optimization techniques and some famous practical optimization problems. Finally, Section 4.6 notes significant points highlighting the scope for further progress.²


4.2 Optimization techniques

Optimization is an important step when it comes to increasing the efficiency or accuracy of a system. Moreover, the center of focus is minimizing the cost that has to be spent on production. In modern trends, it can be noticed that change is the only constant in the growing market and technology sector. Only if a technology is upgraded for the better (i.e., it becomes more efficient and cost effective over time) can the needs of the people be met. While a product is designed or a service is planned, numerous optimization techniques are adopted so as to deliver the best performance and make it easy to use. When optimizing a design, the objective can be merely to minimize the cost of creation or to improve the efficacy of the creation. An optimization technique is a process that is implemented repetitively, comparing numerous solutions until the best or an acceptable solution is found. In different fields like design [5], data science (machine/deep learning) [6], and engineering (civil, mechanical, etc.), optimization is gaining importance, as the motive of the changing world is to make existing technologies fit modern scenarios, which is a task in itself. Given the importance of the subject and the growing technology in every sector, a proper technique has to be found and implemented to achieve the desired results. Optimization techniques cover the fundamentals of all the frequently used methods in optimization, incorporating the breadth and assortment of both traditional and new methods and procedures. Optimization in itself covers a wide series of subjects, including mathematical foundations, problem formulation, optimality conditions, algorithmic complexity, linear programming, convex optimization, and integer programming [7]. Optimization problems are a lot more challenging to solve. To tackle such difficulties, different and effective optimization techniques have to be used; often, when different optimization situations are encountered, the problems have to be resolved by trial and error. In addition, new algorithms are developed each time a problem is faced, to see whether the proposed algorithm can cope with these challenging optimization problems.

4.2.1 An optimization problem

An optimization problem is the reason for finding the best possible solution to a problem. Problems can come from varied fields, from technology to design, and every problem has to have a solution; either the solution is nature inspired, or it is found altogether by trial and error, which can lead to an accidental solution. Whenever a solution is proposed, it must be made sure that it lies within the feasible region so that customers can easily make use of the product that solves their major problem. The solution has to lie in the feasible region, and every region has a maximum and a minimum value. Whenever a problem is to be dealt with, it is kept in mind that the proposed solution should be the best one for the domain it has to be applied to in the current scenario. An optimization problem can be a simple one or a set of complex problems, depending on the domain it has to be applied to.

2. We have also provided a case study to further demonstrate the practical applications of optimization; this will help the reader establish a diverse mindset while tackling practical optimization problems like those mentioned in the case study.
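The feasible-region requirement above can be illustrated with a small sketch; the constraint set here is a hypothetical example, not one drawn from the chapter:

```python
def is_feasible(x: float, y: float) -> bool:
    """Check a candidate (x, y) against a hypothetical constraint set:
    x + y <= 10, x >= 0, y >= 0 (the feasible region is a triangle)."""
    return x + y <= 10 and x >= 0 and y >= 0

# Only candidates inside the feasible region are kept for comparison.
candidates = [(3.0, 4.0), (8.0, 5.0), (-1.0, 2.0)]
feasible = [c for c in candidates if is_feasible(*c)]
print(feasible)  # [(3.0, 4.0)]
```

Any candidate failing even one constraint is discarded before its objective value is ever compared, which is why infeasible "good-looking" solutions play no role in the search.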


Defining an optimization problem

To actually define an optimization problem, the first step is to understand which variables or parameters the problem holds. Numerous significant points have to be kept in mind: the boundaries set for the particular problem, the declared region of restriction, and all the constraints related to the idea as the steps are planned. While dealing with optimization problems, all the difficult real-time situations have to be articulated. Such problems involve describing the optimum of an exponentially vast set of solutions; a situation like this can be likened to searching for a needle in a haystack [8]. The obvious method requires an explicit explanation as well as an ample amount of time, because each solution can have different results that can further lead to different challenges. An optimization problem is specified by highlighting its instances, constraints, solutions, and cost. Here, instances comprise the possible inputs that a problem can have; they have an exponentially huge set of solutions. For a solution to be acceptable, the criteria set for the problem have to be met. Each result has an acceptable total cost and a degree of achievement that is to be minimized or maximized [8].

Elements of an optimization problem

Every optimization problem has a few elements that help in defining and shaping it. Each and every optimization problem can be broken down into three elements: the objective, the constraints, and the box.
1. Objective is the primary element of any optimization problem; it defines the quality of a feasible answer to the problem. Depending on whether the problem is a minimization or a maximization problem, it has to be minimized or maximized.
The objective has the following general form:

$\lambda_1 F_1(x) + \cdots + \lambda_K F_K(x)$

where $F_1(x), \ldots, F_K(x)$ are some functions and $\lambda_1, \ldots, \lambda_K$ are some coefficients (i.e., some numbers) [9].

2. Constraint is an element of the optimization problem specifying the restrictions on the decision variables. A problem may optionally comprise one or more constraints. Feasible variables have to satisfy the conditions of all constraints included in the problem. Constraints have the following general form:

$L_b \le \lambda_1 F_1(x) + \cdots + \lambda_K F_K(x) \le U_b$

where $F_1(x), \ldots, F_K(x)$ are some functions, $\lambda_1, \ldots, \lambda_K$ are some coefficients (i.e., some numbers), and $L_b, U_b$ represent the lower and upper bounds on the constraint, respectively.

- If $L_b = -\infty$ and $U_b < +\infty$, such constraints are called constraints with an upper bound [10].
- If $L_b > -\infty$ and $U_b = +\infty$, such constraints are called constraints with a lower bound [10].
- If $L_b > -\infty$ and $U_b < +\infty$, such constraints are called constraints with lower and upper bounds [10].
- If $L_b = -\infty$ and $U_b = +\infty$, such constraints are inactive and can be removed from the problem [10].

3. Box (of variables) specifies the lower and upper bounds on the decision variables and the types of the decision variables (real, boolean, integer).

General problems include the following bounds on the decision vector $x$:

$l \le x \le u$  [box constraints on the decision vector]

$x \in \mathbb{R}^n$ or $x \in \{0,1\}^n$ or $x \in \mathbb{Z}^n$  [real, boolean, or integer variables]

where $l$ and $u$ are the lower and upper bounds, respectively, on the components of the point $x$ [11].

Classification of the optimization problem

Building on the topics covered so far, we can now take one of the most critical steps: identifying which type of optimization problem we are dealing with. By now, we know that the concept of optimization cannot be expressed without mathematical programming. A variety of factors contribute towards classifying an

Optimization techniques and computational intelligence Chapter | 4


optimization problem. Solutions to these optimization problems also differ for each class. Hence it is imperative to assign an optimization problem to the appropriate class.
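Before classifying problems, the generic ingredients defined above (objective, constraints, and box) can be sketched in a few lines of Python. The functions, bounds, and numbers below are purely illustrative, not taken from the chapter:

```python
# A minimal sketch of the generic optimization-problem ingredients:
# a weighted-sum objective, constraints with lower/upper bounds, and a
# box on the decision variables. All concrete values are illustrative.

def objective(x, funcs, lambdas):
    """Weighted sum  lambda_1*F_1(x) + ... + lambda_K*F_K(x)."""
    return sum(lam * F(x) for lam, F in zip(lambdas, funcs))

def feasible(x, box, constraints):
    """Check box bounds l <= x_i <= u and constraint bounds Lb <= g(x) <= Ub."""
    if not all(l <= xi <= u for xi, (l, u) in zip(x, box)):
        return False
    return all(lb <= g(x) <= ub for g, (lb, ub) in constraints)

# Example: minimize x0^2 + x1^2 over the box [-1, 1]^2 with x0 + x1 >= 0.5
funcs = [lambda x: x[0] ** 2, lambda x: x[1] ** 2]
lambdas = [1.0, 1.0]
box = [(-1.0, 1.0), (-1.0, 1.0)]
constraints = [(lambda x: x[0] + x[1], (0.5, float("inf")))]

x = (0.25, 0.25)
print(feasible(x, box, constraints), objective(x, funcs, lambdas))  # True 0.125
```

A constraint with $U_b = +\infty$, as in the example, is exactly the "constraint with lower bound" case described above.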

On the basis of types of constraints

Constrained optimization: In constrained optimization, the main focus is on formulating the problem effectively and then finding solving strategies specific to it. A constrained optimization problem is one in which one or more constraints are present. Examples of constrained optimization methods related to machine learning and artificial intelligence are projected gradient descent and interior-point methods.

Unconstrained optimization: Unconstrained optimization problems arise directly in certain real-life problems but can also result from reformulations of constrained optimization problems. Frequently, it is useful to substitute the constraints of an optimization problem with penalized terms in the objective. In layman's terms, unconstrained optimization is optimization in which no constraint is present. Examples include Newton's method and the trust-region method.

On the basis of the physical structure of the problem

Optimal control problems: These are mathematical programming problems that involve evolution over a number of stages. A prescribed method is used for evolving the solution at each stage. Control and state variables are the two basic variable types involved in optimal control problems: the control variable drives the evolution from one stage to the next, whereas the state variables record the status of the system at each stage. Both are a crucial part of this kind of problem, as they allow the system to minimize the total objective function accumulated over all the stages.

Nonoptimal control problems: Problems in which no control and state variables are tracked, and in which no such staged minimization of the objective function is required, are known as nonoptimal control problems.
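The reformulation of a constrained problem into an unconstrained one by replacing constraints with penalized terms, mentioned above, can be sketched concretely. The problem, penalty form, and weights below are illustrative, not from the chapter:

```python
# Quadratic-penalty sketch: minimize f(x) = (x - 3)^2 subject to x <= 2
# by minimizing the unconstrained penalized objective instead. As the
# penalty weight mu grows, the minimizer approaches the constrained
# optimum x = 2. (Illustrative problem and weights.)

def penalized(x, mu):
    f = (x - 3.0) ** 2
    violation = max(0.0, x - 2.0)      # amount by which x <= 2 is violated
    return f + mu * violation ** 2     # quadratic penalty term

# Crude unconstrained minimization by grid search over [0, 4].
grid = [i / 1000.0 for i in range(0, 4001)]
for mu in (1.0, 100.0):
    best = min(grid, key=lambda x: penalized(x, mu))
    print(mu, round(best, 3))  # 1.0 2.5  then  100.0 2.01
```

With a small weight the penalty is too weak and the minimizer sits at 2.5; with a large weight it is pushed close to the feasible boundary at 2.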

On the basis of the nature of the design variables

The first grouping is based on problems whose target is to discover a set of design parameters that make a prescribed function decrease or increase to a certain level. In such a problem, some parameters are fixed, and the result is obtained from the remaining parameters present in the problem; such a problem is known as a parameter-based or static optimization problem. The second type deals with design variables that are continuous in nature, are derived from one or more other parameters present in the problem, and again aim at minimizing the objective function. Problems in which the design variable is always a function of one or more parameters belong to the category called dynamic or trajectory optimization.

On the basis of the nature of the equations (constraints and objective functions)

Linear programming: When the objective function as well as all the constraints are linear functions of the design variables present in the optimization problem, the problem is of the linear programming type.

Nonlinear programming: When the objective function or the constraints are nonlinear functions, the problem is termed a nonlinear programming optimization problem. It is the most general type of optimization problem, and all the other types are special cases of nonlinear programming problems.

Geometric optimization programming: When the objective function as well as the constraints are expressed in polynomial form, the problem is known as a geometric optimization programming problem.

On the basis of the separable nature of the variables

Separable: When the objective function and the constraints of a problem can easily be separated, it is said to be a separable optimization problem. A function is separable when it can be expressed as a sum of n single-variable functions.

Nonseparable: When the objective function and the constraints of a problem cannot easily be separated, it is said to be a nonseparable optimization problem.



On the basis of the deterministic nature of the variables

Stochastic optimization: When some or all of the design variables are expressed probabilistically, the problem is known as a stochastic optimization problem. It is used in a variety of problems for assessing the lifespan or stability of a system.

Deterministic optimization: When the design variables can be determined exactly, and there is no need to compute probabilities for the system, the problem is known as a deterministic optimization problem.

On the basis of the permissible values of the decision variables

Integer optimization: When the decision variables are restricted to integer values, the problem is known as an integer optimization programming problem. Such a problem is used, for example, to find the number of articles needed to complete an operation with the least effort; only integer values can be taken.

Real optimization: When the decision variables are allowed to take real values from the set provided for minimizing or maximizing the function, the problem is known as a real optimization problem. The set provided contains real numbers only, which is why it is named real programming.

On the basis of the number of objectives

Single-objective optimization: As the name suggests, when there is only a single objective function, the problem is known as a single-objective optimization problem.

Multiple-objective optimization: As the name suggests, when there is more than one objective function in the problem, it is known as a multiple-objective optimization problem.

4.2.2 Solution to the optimization problem

Classical optimization techniques

A suitable method from the field of optimization must be carefully chosen for solving a given problem, and classical optimization techniques are one family of such methods. They are highly useful for finding the optimum, or the unconstrained maxima or minima, of continuous and differentiable functions [12]. These are analytical approaches that make use of differential calculus to locate the best result. However, the classical approaches have limited applicability to real-world problems, as some of those involve objective functions that are noncontinuous or nondifferentiable in nature. Nevertheless, the study of the classical approaches to optimization has laid the foundation for most of the numerical methods that have evolved into more advanced procedures able to handle today's real-life problems. These approaches assume the function to be differentiable, often twice differentiable, with respect to the design variables present in the problem, with continuous derivatives. Certain forms of problems can be solved effortlessly by classical optimization methods: single-variable functions, multivariable functions with no constraints, and multivariable functions with a combination of equality and inequality constraints [13]. For problems with equality constraints, the Lagrange multiplier method is mostly used [12]. For problems with inequality constraints, the Kuhn-Tucker conditions [13] can be used to identify the optimum. These approaches lead to a set of nonlinear simultaneous equations [13] that can be somewhat difficult to solve.
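As a tiny worked instance of the Lagrange multiplier method mentioned above (the problem itself is illustrative): minimize $f(x, y) = x^2 + y^2$ subject to $x + y = 1$. Stationarity gives $2x = \lambda$, $2y = \lambda$, so $x = y$, and the constraint yields $x = y = 0.5$ with $\lambda = 1$. A brute-force search along the constraint confirms this:

```python
# Verify the Lagrange-multiplier solution x = y = 0.5 of
#   minimize x^2 + y^2  subject to  x + y = 1
# by substituting y = 1 - x and scanning x over a fine grid.

def f(x, y):
    return x * x + y * y

best = min((f(x, 1 - x), x) for x in [i / 1000 for i in range(1001)])
print(best)  # (0.5, 0.5): minimum value 0.5 attained at x = 0.5
```

The grid minimum coincides with the analytical stationary point, as expected for this convex problem.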
The classical families of such optimization techniques include linear programming, nonlinear programming, integer programming, and stochastic and deterministic programming.

Advanced optimization techniques

The classical methods of optimization are the base upon which the advanced methods are developed. The problems dealt with in industry are becoming more challenging by the day; hence we need advanced techniques for our optimization problems. The main advanced optimization techniques are the following:

1. Hill climbing (HC): Hill climbing is a graph exploration algorithm in which the current path is extended with a successor node that is closer to the solution. The simple HC technique chooses the first successor node that improves on the current one, while in the steepest-ascent HC technique all the successor nodes are compared and the one nearest the solution is selected. Both forms of HC fail if there is no successor node that improves on the current one. This may occur if there are local maxima in the search space that are not solutions. HC is widely used in artificial intelligence, and it helps in the accomplishment of a goal.
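The steepest-ascent hill-climbing idea just described can be sketched compactly; the objective function and integer neighborhood below are illustrative:

```python
# Steepest-ascent hill climbing over a discrete neighborhood: all
# neighbors are compared and the best improving one is taken; the
# search stops at a (possibly local) maximum. Illustrative objective.

def hill_climb(start, objective, neighbors):
    current = start
    while True:
        improving = [n for n in neighbors(current)
                     if objective(n) > objective(current)]
        if not improving:
            return current                      # no better neighbor: stop
        current = max(improving, key=objective)

objective = lambda x: -(x - 7) ** 2             # single peak at x = 7
neighbors = lambda x: [x - 1, x + 1]            # integer neighborhood
print(hill_climb(0, objective, neighbors))      # 7
```

On a unimodal objective like this one HC finds the global maximum; with several peaks it would stop at whichever local maximum it reaches first.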



2. Simulated annealing (SA): In earlier classes, we covered a process called annealing while studying metallurgy. Annealing is the process of heating and then slowly cooling a material; it increases the crystal size and decreases the defects in the crystals. The heat supplied to the atoms makes them leave their initial sites, and the concept of escaping a local minimum of the internal energy is taken from this process. The atoms can move randomly through states of higher energy; the slow cooling increases the probability of finding configurations with lower internal energy than the initial one. In SA, each point of the search space is associated with a state of a physical system, and the function to be minimized is interpreted as the internal energy of the system in that state. Consequently, the target is to bring the system from a random initial state to a state with the least possible energy. The algorithm (Algorithm 4.1) can be boiled down to the following steps:
a. The parameters of the algorithm, mainly the initial random set of variables, the cooling rate, and the initial temperature, are set, and the cost of the initial set of variables is calculated.
b. The cost is evaluated for a randomly chosen new set of variables.
c. Improvement in the new set of variables is the deciding factor for its acceptance. When there is no improvement, the new set is accepted with probability $e^{-\delta/T}$, where $\delta$ signifies the change in cost and $T$ is the current temperature.
d. When the maximum number of iterations at a constant temperature is reached: (1) if the algorithm is homogeneous, the temperature is reduced according to the cooling schedule and the procedure continues with the subsequent step; (2) if the algorithm is inhomogeneous, the temperature is reduced according to the cooling schedule after every iteration.
e. The procedure repeats from step (b) until a satisfactory solution is found or the maximum number of repetitions is reached.

Algorithm 4.1: Simulated annealing.

initialize (temperature T, random starting point)
while iter <= MaxIteration do
    cool_iteration = cool_iteration + 1
    temp_iteration = 0
    while temp_iteration <= nrep do
        temp_iteration = temp_iteration + 1
        point p = choose a new point from the nearby locality
        compute current_cost(p)
    end while
end while

3. Genetic algorithms (GAs): A GA is a search technique used to find approximate answers to optimization and search problems. GAs are a specific class of evolutionary algorithms that use methods inspired by biology, namely inheritance, mutation, and selection. GAs are typically implemented as computer simulations that maintain a population of abstract representations (called chromosomes) of candidate solutions (called individuals) and evolve them toward better solutions. The evolution starts from a population of entirely random individuals and proceeds in generations. In each generation, the fitness of the entire population is evaluated; multiple individuals are probabilistically selected from the current population and modified to form a new population. The new population is then used in the subsequent iteration of the algorithm. There are other advanced techniques in industrial use, and many more are under research. A GA (Algorithm 4.2) can be described using the following five steps:
1. The quality of a candidate solution is indicated by the fitness function defined for the problem.
2. A population of chromosomes is prepared subject to specific constraints. Every chromosome is encoded as a fixed-length vector, with values taken from a set of alleles. Under binary encoding, the length follows from the desired degree of accuracy.
3. Strings are assigned selection probabilities proportional to their fitness.
4. The newly formed population of strings is then passed through the genetic operators of crossover and mutation. In crossover, two strings exchange one or more parts at randomly selected positions, whereas in mutation an arbitrary gene in a string is flipped. These genetic operators are applied with certain probabilities: typically from 0.6 to 0.95 for crossover and from 0.001 to 0.01 for mutation.
5. The procedure stops when an acceptable answer is reached or when the predefined maximum number of generations is reached. Otherwise, it goes back to step 3, and the cycle repeats.



Algorithm 4.2: Genetic algorithm [14].

generate an initial random population
while iter <= MaxIteration do
    iter = iter + 1
    evaluate the fitness of each individual
    select the individuals according to their fitness
    perform crossover with probability pc
    perform mutation with probability pm
    population = selected individuals after crossover and mutation
end while
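A runnable counterpart to the genetic-algorithm pseudocode above can be sketched in Python. The one-max fitness function, population size, and operator probabilities below are illustrative choices, not taken from the chapter:

```python
# Minimal GA sketch: roulette-wheel selection, one-point crossover,
# bit-flip mutation. Fitness (one-max: count the 1-bits) and all
# parameters are illustrative.
import random

random.seed(0)

def fitness(chromosome):
    return sum(chromosome)

def genetic_algorithm(pop_size=20, length=12, pc=0.8, pm=0.01, max_gen=60):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(max_gen):
        # fitness-proportional (roulette-wheel) selection
        total = sum(fitness(c) for c in population)
        weights = [fitness(c) / total for c in population]
        population = [random.choices(population, weights)[0][:]
                      for _ in range(pop_size)]
        # one-point crossover with probability pc
        for i in range(0, pop_size - 1, 2):
            if random.random() < pc:
                cut = random.randrange(1, length)
                population[i][cut:], population[i + 1][cut:] = (
                    population[i + 1][cut:], population[i][cut:])
        # bit-flip mutation with probability pm per gene
        for chromosome in population:
            for j in range(length):
                if random.random() < pm:
                    chromosome[j] ^= 1
    return max(population, key=fitness)

best = genetic_algorithm()
print(fitness(best))
```

The crossover and mutation probabilities used here fall inside the typical ranges quoted in step 4 above.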


4.3 Understanding fog/edge computing

4.3.1 What is fog?

As we have introduced before, fog is the upcoming mainstream replacement of cloud services for edge devices, initially introduced by Cisco [3]. The geographical diversity and spread of the network make it convenient for edge applications to compute, store, and communicate efficiently. The idea behind fog is the decentralization of the hub of resources into myriad stations closer to the edge devices, with features similar to, but downscaled from, those of the cloud. The fog network (Fig. 4.1) is not limited to distributing computational power, storage, etc.; it can also distribute electricity. Let us try to understand fog with the help of an example: how can we decentralize power stations? Say a person named "Rahul" has installed solar panels at his place. These solar panels produce more energy than he needs, so he decides to share the excess power generated with his neighbors. Without fog, Rahul has to send the excess power back to the power grid, which then forwards the power to Rahul's neighbors. With fog, Rahul can share the excess power with his neighbors directly and economically. As we know, fog computing promises better computational capabilities and accelerates the execution of tasks without significantly affecting battery consumption. Therefore optimizing battery performance and execution time is the fundamental motivation behind optimizing the fog model.

4.3.2 Prelude to our framework

In this section, we try to generalize the optimal utilization of assets to produce the maximum output. A small but considerable part of the literature has defined the problem informally and devised algorithms around such informal statements. We discourage this approach to solving optimization problems: one should be thorough with the choice of the objective function and with the constraints, domains, and variables associated with the problem. A formal definition of the problem helps us evaluate the trade-off between the generality and applicability of the formalized problem on one hand and its complexity and computational pragmatism on the other [15]. Naturally, deriving a formalized optimization problem from a real-world problem is a nontrivial process. Note: Formalizing an optimization problem can be tedious at times; therefore nonessential details can be neglected or approximated for simplicity. A quick recap of the definition of an optimization problem:

- List of variables: $X = [x_1, x_2, \ldots, x_n]$
- The domain, that is, the set of values each variable can assume: the domain of $x_i$ is represented as $D_i$.
- Constraints:
  $g_i(X) \le 0,\; i = 1, 2, \ldots, m$
  $h_j(X) = 0,\; j = 1, 2, \ldots, p$
  where $g_i(X)$ and $h_j(X)$ represent inequality and equality constraints, respectively; $m$ is the number of inequality constraints and $p$ is the number of equality constraints. Equality and inequality constraints may or may not be related.
- Objective function: $f : D_1 \times D_2 \times \cdots \times D_n \to \mathbb{R}$.



Our goal is to find appropriate values $V = [v_1, v_2, \ldots, v_n]$ of the variables such that the following hold:
1. $v_i \in D_i$ for each $i = 1, 2, \ldots, n$
2. $g_i(V) \le 0$ for each $i = 1, 2, \ldots, m$ and $h_j(V) = 0$ for each $j = 1, 2, \ldots, p$
3. $f(V)$ is the maximum among all tuples $V = (v_1, v_2, \ldots, v_n)$ that satisfy (1) and (2).

A tuple that satisfies all the aforementioned conditions is called a solution of the problem. Note: In the case of a minimization problem, we maximize the objective function $f' = -f$.

Balancing multiple objective functions in practical optimization problems such as this is quite cumbersome. Let the objective functions be $f_1, f_2, \ldots, f_q$, where the aim is to maximize all of them. Since there is no optimization technique that maximizes all of the objective functions simultaneously, in order to obtain the most maximized outputs we use one of the following approaches:

- Initialize a function $f_{combined}(V) = F(f_1(V), f_2(V), \ldots, f_q(V))$ so that we obtain a combined function of the objective functions; our goal then becomes to maximize the combined function. $F(\cdot)$ is commonly a product or a weighted sum of the individual objective functions.

Pareto-optimal solution: A solution $V' \in D$ is Pareto-optimal iff (i.e., if and only if) there does not exist another solution $V \in D$ such that $f_i(V) \ge f_i(V')$ for all $i = 1, 2, \ldots, q$ and $f_i(V) > f_i(V')$ for at least one $i$. A solution is Pareto-optimal if it is not dominated by any other solution. In other words, a Pareto-optimal solution can only be improved with respect to one objective by worsening it with respect to another objective. Note: Different Pareto-optimal solutions represent different configurations of trade-offs between the objectives.


- Converting all but one of the objective functions into constraints by setting lower bounds of the form $f_i(V) \ge l_i$, where $l_i$ is the lower bound for the $i$-th objective, $i = 1, 2, \ldots, q-1$, and maximizing $f_q(V)$.
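The dominance test behind Pareto-optimality can be written directly as a small filter. The objective vectors below are illustrative points, not data from the chapter:

```python
# Pareto filtering for a maximization problem: a point is kept iff no
# other point is at least as good in every objective and strictly
# better in at least one. Illustrative two-objective points.

def dominates(a, b):
    """True if a dominates b: a >= b componentwise, a > b somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1, 5), (2, 4), (3, 3), (2, 2), (1, 1)]
print(pareto_front(points))  # [(1, 5), (2, 4), (3, 3)]
```

The surviving points are exactly the trade-off configurations mentioned in the note: none of them can improve one objective without giving up the other.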

4.3.3 Our goal

Since we now understand the fundamentals of optimization, we can go ahead and define the optimization problem in fog computing. Our aim is to reduce latency, energy consumption, and financial costs and to improve the quality of service. In the next section, "Framework for fog computing," we address these problems. This will not only let the reader experience a multifaceted optimization problem but also provide a brief introduction to fog computing.

4.3.4 Framework for fog computing

Latency/delay time efficiency: The fundamental motivation behind the introduction of fog is latency. The performance of latency-sensitive applications can be improved if the latency is calculated optimally [4]. Naha et al. [16] divide the latency of each chunk of data into four parts: queuing delay $Q_d$, transmission delay $T_d$, propagation delay $P_d$, and processing delay $PR_d$. The net latency or delay in the network can be written as:

$N_{dL} = 2(Q_d + T_d + P_d + PR_d)$


A link allows the to-and-fro flow of data across the nodes. To find the total delay, it is important to take all the links and nodes into consideration. The latency for a path can be defined as

$N_{dP} = Q_{dP} + T_{dP} + P_{dP} + PR_{dP}$

where $N_{dP}$ is the latency of the path. The propagation delay $P_{dP}$ can be thought of as the sum of the propagation delays across all intermediate links:

$P_{dP} = \sum_{n=1}^{k} P_{dn}$

where $k$ is the number of intermediate links in the path. Similarly, we can define the queuing delay $Q_{dP}$, transmission delay $T_{dP}$, and processing delay $PR_{dP}$ as:

$Q_{dP} = \sum_{n=1}^{k} Q_{dn}, \quad T_{dP} = \sum_{n=1}^{k} T_{dn}, \quad PR_{dP} = \sum_{n=1}^{k} PR_{dn}$





Propagation delay is the time consumed by the data to advance from one node to the other through a medium (wired or wireless); in other words, the traveling time of the packet can be considered the propagation delay. After data processing, the data is handed to the medium for transmission. The time required to push the data onto the medium also causes latency; this delay is called transmission delay. Li et al. [17] define the processing delay as the ratio of the length of the frame $L$ to the transmission rate $T_r$, which can be written as:

$PR_d = L / T_r$


The latency due to propagation and transmission can be considered constant for a given network. Eq. (4.5) can therefore be rewritten as:

$N_{dL} = Q_{dP} + PR_{dP} + \sum_{n=1}^{k} A_n$

where $A_n$ is the constant representing the transmission and propagation delay of link $n$ and $k$ is the number of intermediate links. Hence we only have to deal with the latency due to the queuing and processing of data [16].
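The path-latency decomposition above is a plain sum over links, which can be transcribed directly; the per-link delay values below are illustrative:

```python
# Path latency as the sum of per-link queuing, transmission,
# propagation, and processing delays. Values are illustrative.

def path_latency(links):
    """links: list of (Qd, Td, Pd, PRd) tuples, one per intermediate link."""
    return sum(q + t + p + pr for q, t, p, pr in links)

links = [(1.0, 0.5, 2.0, 0.3),   # link 1 delays
         (0.8, 0.5, 1.5, 0.2)]   # link 2 delays
print(round(path_latency(links), 2))  # 6.8
```

Folding the constant propagation and transmission terms of each link into a single constant $A_n$, as the text does, leaves only the queuing and processing terms to optimize.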

Energy/power efficiency: Having defined latency mathematically, it is important to formulate an efficient distribution of the workload across devices, as the power consumed by a device is proportional to the workload allotted to that fog node. Assume a set of fog devices $F$ with a cardinality of $n$ devices, that is, $F = \{f_1, f_2, \ldots, f_n\}$. A query by the user is delegated to one or more nodes from the set $F$. These queries are distributed among the nodes on the basis of the physical distance between the user and the fog device, the computation cost of the task, and the tasks already queued at the device. As the workload is delegated to different nodes, each node accomplishes a proportion of its allotted tasks; for example, a fog node $f_i$ processes a fraction $\alpha_i$ of the tasks allotted to it: $\alpha_i = 1$ means that $f_i$ processes all of its allotted workload, and $\alpha_i = 0.5$ means that it processes only half of the allotted workload [18]. The rate at which the workload is expected to be received is considered constant, $\lambda_i$ [18]. The response time associated with a fog node $f_i$ is denoted $R_i(\alpha_i)$. Maximum power efficiency is achieved by utilizing minimum power per unit of workload processed. The total power delivered to a device (fog node) depends on three factors:




- Power usage effectiveness (PUE): the ratio of the input power received to the power consumed by the device, denoted $e_i$.
- Static power consumption: mainly caused by the loss of power due to leakage of current across the device, denoted $w_i^S$.
- Dynamic power consumption: the dynamically varying power consumed by the device, denoted $w_i^D$.

The net power consumed by the fog node $f_i$ per unit time can be written as [18]:

$w_i = e_i \left( w_i^S + w_i^D \alpha_i \lambda_i \right)$   (4.11)

The power efficiency $\eta_i(\alpha_i)$ can be formulated as [18]:

$\eta_i(\alpha_i) = \dfrac{w_i}{\alpha_i \lambda_i} = e_i \left( \dfrac{w_i^S}{\alpha_i \lambda_i} + w_i^D \right)$   (4.12)

As discussed before, the workload is delegated to different fog devices with different configurations (computational capacity, storage, etc.). The goal is to minimize the response time of each fog node $f_i$ while processing the workload with maximal power efficiency. A fog node might also resort to a nearby fog node for processing a given task. Considering this cooperation, the problem changes to obtaining the best average response time. Consider a vector $\alpha = \langle \alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_n \rangle$ representing the proportions of the task that can be completed by the $n$ fog nodes. Using the cumulative response time $R_i^c(\alpha)$, the problem can be formulated as [18]:

$\alpha^* = \arg\min_{\alpha} \sum_{f_i \in F} R_i^c(\alpha)$   (4.13)

subject to: $\eta_i \le \bar{\eta}_i$, $0 \le \alpha_i \le 1$, $\forall f_i \in F$

where $\bar{\eta}_i$ denotes a per-node bound on the power consumed per unit of workload.
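Eqs. (4.11) and (4.12) can be transcribed directly to check the model numerically; all the constants below are illustrative values, not data from [18]:

```python
# Per-node power model:  w_i = e_i * (w_i^S + w_i^D * alpha_i * lambda_i)
# and power efficiency:  eta_i(alpha_i) = w_i / (alpha_i * lambda_i).
# All numbers are illustrative.

def node_power(e, w_static, w_dynamic, alpha, lam):
    return e * (w_static + w_dynamic * alpha * lam)

def power_efficiency(e, w_static, w_dynamic, alpha, lam):
    return node_power(e, w_static, w_dynamic, alpha, lam) / (alpha * lam)

# Power per unit of workload drops as the node absorbs more of its load,
# because the static term is amortized over more work.
print(round(power_efficiency(1.2, 10.0, 2.0, 0.5, 4.0), 2))  # 8.4
print(round(power_efficiency(1.2, 10.0, 2.0, 1.0, 4.0), 2))  # 5.4
```

This is the quantity $\eta_i(\alpha_i)$ bounded in the constraint of Eq. (4.13).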




Financial cost: The financial cost is determined by various factors, ranging from the amount of power consumed to the architecture adopted for the delivery of services. It also depends on the demographic spread of the devices, the location of the user, etc. Typically, the user is charged for each use, in other words, pay-per-use. Different services may have different costs; for example, processing a task may generate different costs than transferring a query via fog nodes (see Note 3).


4.4 Optimizing fog resources

So far, we have seen that fog resources can be optimized along several dimensions, a few of which were discussed in Section 4.3.4. Fog computing is an active topic of research, and it is constantly evolving with its use cases. As we have stated numerous times, optimization has a plethora of applications in different domains, and fog invites its application to new ones. For example, Do et al. [19] aim to minimize the carbon footprint in geodistributed fog computing. The highlight of this chapter is the problem of optimizing the resources of edge/fog devices to deliver the best performance, which varies across different applications of fog. These problems are real-life applications, and hence most of them are nonconvex and nonlinear in nature [in both the constraints and the objective function(s)]. The methods adopted to solve them are advanced and require a thorough understanding of the subject. In this section, we discuss a couple more optimization problems in fog computing that cover various new domains.

4.4.1 Defining the optimization problem for fog layer resources

The fog network is spread geographically, and its deployment size in a region can be used to estimate its demand within that region. Do et al. [19] formulate a multiple-objective optimization problem for joint resource allocation and carbon footprint minimization. Their model consists of N fog computing nodes (FCNs), which serve end-users for video streaming. It is assumed that an FCN sends the query generated by the user to the data center; the data center then processes the query, and the result is served back to the FCN. The amount of video streamed to FCN $n$ from the data center is denoted $x_n$.

1. Utility and cost function: Utility denotes the degree of satisfaction of the user with the goods/service used (e.g., the amount of video streamed, in this case). An affine utility function is considered [19] at FCN $n$, represented as:

$U_n(x_n) = \alpha_n x_n$

$\alpha_n > 0$ is the conversion factor for translating the streaming service into utility (e.g., revenue); $\alpha_n$ varies with the regional demand for the service. The amount of carbon footprint generated due to the energy consumption of the data center can be written as [19]:

$C(y) = c \times r \times PUE \times P(y)$


where
- $c$: carbon footprint cost in \$/g
- $r$: average carbon emission rate in g/kWh
- $PUE$: power usage effectiveness (defined in Section 4.3.4)
- $P(y)$: server power of the data center.

The average carbon emission rate can be calculated as the weighted average over the fuel types:

$r = \dfrac{\sum_j e_j \, r_j}{\sum_j e_j}$   (4.16)

where $e_j$ is the energy generated by fuel type $j$ and $r_j$ is the carbon emission rate of fuel type $j$.

3. Note: The presented framework/model is a toy model, only meant to give the reader an idea of the practical problems. There is a plethora of work in the literature covering optimization models of fog, which is beyond the scope of this chapter.



$P(y)$ is a function of the total video stream requested, $y$, and outputs the server power of the data center:

$P(y) = \xi \times P_{idle} + (P_{peak} - P_{idle}) \times y \times \kappa$   (4.17)

where
- $\kappa$: conversion factor for converting the streamed video into workload
- $\xi$: workload capacity of the data center
- $P_{idle}$: idle power of the server
- $P_{peak}$: peak power of the server.

The problem can be formulated as:

$\max_{x_n \ge 0,\, y \ge 0} \; \sum_{n=1}^{N} U_n(x_n) - C(y)$   (4.18)

$\text{s.t.} \quad \sum_{n=1}^{N} x_n = y \le \dfrac{\xi}{\kappa}$
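The utility/cost model above can be transcribed into a toy computation of the net objective $\sum_n U_n(x_n) - C(y)$; every constant below is an illustrative placeholder, not a value from [19]:

```python
# Toy transcription of the streaming model: utility U_n = alpha_n * x_n,
# server power P(y), carbon cost C(y) = c * r * PUE * P(y), and the net
# objective sum_n U_n(x_n) - C(y) with y = sum_n x_n. Illustrative numbers.

def server_power(y, xi=1.0, p_idle=200.0, p_peak=500.0, kappa=0.001):
    return xi * p_idle + (p_peak - p_idle) * y * kappa

def carbon_cost(y, c=0.001, r=500.0, pue=1.5):
    return c * r * pue * server_power(y)

def net_objective(x, alphas):
    y = sum(x)                                        # total stream requested
    utility = sum(a * xn for a, xn in zip(alphas, x))
    return utility - carbon_cost(y)

x = [100.0, 150.0]                # video streamed to two FCNs
print(round(net_objective(x, [2.0, 1.5]), 2))  # 218.75
```

In the actual formulation (4.18), this quantity would be maximized over the $x_n$ subject to the capacity constraint $y \le \xi / \kappa$.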

4.4.2 Optimization techniques used Most of the optimization problems in the real world require complex and advanced solutions, which further require a deep understanding of the subject and research. In this section, we discuss some of the hybrid optimization techniques that have been used to obtain the state-of-the-art [20]. One of the important elements while optimizing resources for fog is the energy/power. For example, Khanna et al. [21] use an optimization model to conserve in smart buildings. Khan et al. [14] use GA and earthworm optimization algorithm (EWA) for a similar application: energy management in smart grid. GA and EWA are used to minimize electricity cost and peak-to-average ratio (AVR). EWA is a bioinspired metaheuristic algorithm. EWA is inspired by the two types of reproduction methods that take place in earthworms. The offsprings produced from the two methods are independent. Weighted summation of all the earthworm individuals is used to calculate the final offspring for the generation given below (Algorithm 4.3). Reproduction 1 is able to produce only one offspring by itself (hermaphrodites). Although, it is to produce one or more offsprings for which extended versions of the traditional crossover techniques such as the one used in differential evolution and GA are used. Cauchy mutation is used to efficiently search for the global minima and avoid local minima. It is also used to improve the individual positions of each earthworm individual. The earthworm reproduction behavior can be translated to a collection of rules: G



- All earthworms reproduce by only these two methods, and every earthworm in the population possesses the ability to reproduce.
- Every offspring individual produced by either reproduction method carries all the genes, with length equal to that of the parent earthworm.
- To ensure that the earthworm population does not deteriorate over the generations, the best individuals from the population are forwarded to the following generation without being altered by the operators.

Algorithm 4.2: Earthworm optimization algorithm [22].

Initialization
  Set the generation counter t = 1.
  Initialize a population P of NP earthworm individuals randomly, such that they are distributed homogeneously within the exploration space.
  Set the number of kept earthworms nKEW, the maximum generation MaxGen, the similarity factor α, the initial proportionality factor β_0, and the constant λ = 0.9 used while computing the proportionality factor.
Fitness assessment
  Assess earthworm individuals according to their positions.
Crossover and mutation
  while the best solution is not found and t < MaxGen do
    Sort earthworm individuals on the basis of their fitness scores.
    for i = 1 to NP (for all earthworm individuals) do
      Apply Reproduction 1:

Optimization techniques and computational intelligence Chapter | 4


        Produce offspring x_i1 through Reproduction 1.
      Apply Reproduction 2:
        if i < nKEW then
          Set the number of selected parents (N) and of produced offspring (M).
          Select N parents by roulette wheel selection.
          Generate M offspring with uniform crossover, considering M = 1 and M = 3.
          Produce x_i2 from the M offspring generated by Reproduction 2.
        else
          Randomly select an earthworm individual x_i2.
        end if
    end for

    for j = nKEW + 1 to NP (for all nonkept earthworm individuals) do
      Apply Cauchy mutation.
    end for
    Assess the population according to the newly generated, updated positions.
    t = t + 1
  end while

One of the pressing concerns for fog environment-based technology is the delay in data transmission. Bitam et al. [23] use the honey bees algorithm, aka the bees algorithm [24], to solve the job-scheduling problem; their technique is inspired by reproduction within a bee colony and by food foraging. The usage of the bees algorithm [24] for finding the patch that is best for foraging food is discussed below.

Bees display several complex behaviors like breeding, mating, and foraging, which have inspired several optimization algorithms. Here, one such technique is used, based on finding the optimal patch of land on which to search for food. At first, the algorithm dispatches n randomly selected bees to selected sites. The fitness quotient of each site is calculated, and the sites are sorted in decreasing order (maximization problem); the m fittest locations are determined by the local search algorithm covering the best sites. The locations/sites are classified into two categories, namely elite sites and best nonelite sites. The number of elite sites is denoted e, and the number of best nonelite sites is m − e. The local search method starts by recruiting forager bees in the neighborhood of the best sites. The neighborhood size is set to ngh, which can be further split into two classes such that ngh = nep + nsp, where nep and nsp are the number of bees recruited for elite sites and for best nonelite sites, respectively. The global search method is a random search over the nonbest (n − m) sites. The fitness quotients are calculated and sorted for each site until the maximum number of iterations is reached or an acceptable error is obtained. Finally, the global optimum value is returned.

Algorithm 4.3: The bees algorithm [24].

Generate an n-sized population.
Set the patch size m.
Set the elite patch size e.
Set the number of bees chosen to forage at the elite sites as nep.
Set the number of forager bees around the nonelite best patches as nsp.
Set the neighborhood size as ngh.
Set the maximum iteration number as MaxIter and the error limit as Error.
i = 0
Generate the initial population.
Evaluate the fitness of the initial population.
Sort the initial population based on the fitness results.
while i ≤ MaxIter and |FitnessValue_i − FitnessValue_{i−1}| > Error do

4. For further information on earthworm optimization, refer to Ref. [22].
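A compressed Python sketch of the EWA loop in Algorithm 4.2, minimizing a test function, may make the flow concrete. The reproduction and mutation operators below are simplified stand-ins for those of Ref. [22], and all parameter values are illustrative:

```python
import math
import random

# Simplified EWA-style loop: elitism, two reproduction operators combined by
# a decaying weight beta, and a Cauchy-flavoured mutation. Not Ref. [22]'s
# exact operators; a sketch of the structure only.

def ewa(fitness, dim, bounds, pop_size=20, n_kept=2, max_gen=100,
        alpha=0.98, beta0=1.0, lam=0.9, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    beta = beta0
    for _ in range(max_gen):
        pop.sort(key=fitness)                    # best (lowest) first
        new_pop = [p[:] for p in pop[:n_kept]]   # elitism: pass best on unchanged
        for i in range(n_kept, pop_size):
            x = pop[i]
            # Reproduction 1: reflect the individual within the search box
            child1 = [hi + lo - alpha * xj for xj in x]
            # Reproduction 2: uniform crossover of two parents from the top half
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            child2 = [p1[j] if rng.random() < 0.5 else p2[j] for j in range(dim)]
            # weighted summation of the two offspring
            child = [beta * a + (1 - beta) * b for a, b in zip(child1, child2)]
            # Cauchy-style mutation, clamped to the search box
            child = [min(hi, max(lo, c + 0.1 * math.tan(math.pi * (rng.random() - 0.5))))
                     for c in child]
            new_pop.append(child)
        pop = new_pop
        beta *= lam                              # proportionality factor decays by lambda
    return min(pop, key=fitness)

sphere = lambda v: sum(xj * xj for xj in v)
best = ewa(sphere, dim=2, bounds=(-5.0, 5.0))
```

Because the n_kept elites are copied forward untouched, the best fitness never degrades from one generation to the next, matching the last reproduction rule above.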


Applications of Computational Intelligence in Multi-Disciplinary Research

  i = i + 1
  Pick elite and nonelite patches for neighborhood search.
  Recruit bees for foraging in the elite and nonelite best patches.
  Evaluate the fitness of each patch.
  Sort the overall results based on their fitness.
  Run the algorithm until the termination criteria are met.
end while
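The steps of Algorithm 4.3 can be sketched in Python for a one-dimensional minimization. The objective and all parameter values are illustrative; note that, unlike in the text above, ngh here is used as a neighborhood radius rather than a bee count:

```python
import random

# Bees-algorithm sketch: n scouts, m best sites, e elite sites, nep/nsp
# recruited foragers, neighbourhood radius ngh. Minimizes f on [lo, hi].

def bees_algorithm(f, lo, hi, n=20, m=5, e=2, nep=7, nsp=3, ngh=0.5,
                   max_iter=50, seed=0):
    rng = random.Random(seed)
    sites = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(max_iter):
        sites.sort(key=f)                      # best (lowest f) first
        new_sites = []
        for i in range(m):                     # local search around best sites
            recruits = nep if i < e else nsp   # more foragers for elite sites
            patch = [sites[i]] + [
                min(hi, max(lo, sites[i] + rng.uniform(-ngh, ngh)))
                for _ in range(recruits)
            ]
            new_sites.append(min(patch, key=f))
        # global search: the remaining n - m bees scout randomly
        new_sites += [rng.uniform(lo, hi) for _ in range(n - m)]
        sites = new_sites
    return min(sites, key=f)

best = bees_algorithm(lambda x: (x - 2.0) ** 2, lo=-10.0, hi=10.0)
```

Since each patch always contains its current best site, the quality of the m best sites is monotone over iterations, while the random scouts keep exploring globally.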


4.5 Case studies

The essence of applied mathematics is to bridge the gap between theoretical concepts and algorithms. Since optimization has an expansive range of real-world applications, it is important, in order to further develop the core of the subject, to abstract each encountered problem into a mathematical problem. In this section, we present a few examples of real-world problems solved using optimization techniques.

4.5.1 Case study I: floorplan optimization

Definition: Floorplan optimization is a methodology to optimize the arrangement of floorplan blocks under given constraints such that the floor accommodates the maximum number of blocks.

VLSI (very-large-scale integration) floorplan optimization can be thought of as designing a living room. We have assumed the necessary furniture and esthetics required for the living room (which is stage 1 of the VLSI floorplan problem) and their relative positions (stage 2). For example, a couch, center table, vase table, TV rack, and a few chairs are to be placed in the room, and as per the request, the center table is required to be in front of the couch, the vase table should be horizontally adjacent to the couch, and so on. It would be fair to assume that there are several ways of arranging the room, including horizontally and vertically shifting the position of each item. In other words, the first stage is to obtain the individual blocks for optimal placement; the blocks are called cells. The second stage consists of obtaining the relations between the cells to determine the relative position of each item. It is the third stage, floorplan optimization, that consists of selecting a structured placement of the items such that the finished room occupies a minimum area. Below, we cover an algorithm to solve this stage of the problem.

VLSI is mainly used to design electronic components like microprocessors and memory chips, which require millions of transistors. The process of designing these chips is analogous to the above example; the only difference is the number of components. We require the minimum area to fit in all the transistors, as the cost of the chip is proportional to the area occupied on a 2D plane. We will illustrate the problem with a specific example from Ref. [25]. In VLSI, a pair of directed acyclic graphs^5, namely G and H, is used to represent a floorplan. Each edge represents a cell, and a node represents the collective area, defined as the product of the maximum horizontal and vertical extent of the cells in contact with each other. Though the set of cells at hand is limited, each cell may have several implementations and length-to-breadth ratios. Let us say we have N cells, and a cell c_i has I(c_i) different orientations; then the total number of possible floorplan configurations is:

    Π_{i=1}^{N} I(c_i)

As we have obtained the search tree (Fig. 4.2D), the problem boils down to finding the least value among the nodes of the graph. We have the option of exhaustively searching the tree, but it is not practical. For example, suppose there are ten cells and each cell has five orientations; the number of nodes to be traversed in order to find the optimum value increases drastically, in this case to 5^10. In Ref. [25], branch-and-bound search is used (Algorithm 4.4), which reduces the search space considerably.

5. Directed: each edge in the graph has a direction (here, edges determine the location of a cell along the vertical and horizontal axes). Acyclic: there are no cycles in the graph; in other words, no node in the graph can be visited again.



FIGURE 4.2 (A)–(C) represent a model for the floorplan optimization problem: (A) depicts the cells A, B, and C with their different orientations; (B) shows the graphs G and H, which represent the relative vertical positioning and the relative horizontal positioning of the cells, respectively; and (C) displays the 1 × 3 × 2 = 6 orientations of the cells within a room of variable length and breadth.

Floorplan Design Algorithm

Partition: In Ref. [25], a parallel algorithm is used to solve the problem, as it saves considerable execution time. In order to explore the search tree, each node is traversed by a separate task using fine-grained functional decomposition. The new tasks for each node are generated using a wavefront algorithm (essentially used to avoid local minima) [26], which traverses the graph in a breadth-first fashion, thus missing the nodes that could potentially be



pruned, therefore causing considerable wastage of time and computation. As for A_min, it is assumed that A_min is associated with a task that can communicate with other tasks.

Algorithm 4.4: Branch-and-bound search.
Result: finds the minimum area required to place the cells, using an upper bound.

function b&b_search(A)
begin
  A_min = ∞
  b&b_search_1(A)
end

function b&b_search_1(A)
begin
  score = eval(A)
  if score < A_min then
    if leaf(A) then
      A_min = score
      return solution and score
    else
      foreach child A′ of A do
        b&b_search_1(A′)
      end
    end
  end
end
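The branch-and-bound search of Algorithm 4.4 translates almost directly into Python. The toy "area" function below (a sum of orientation-dependent widths) is our own stand-in: because it is monotone in the prefix, a partial score is a valid lower bound and pruning is sound:

```python
import math

# Minimal Python rendering of Algorithm 4.4. A "node" is a tuple of chosen
# orientations (one per cell so far); eval_area gives a lower-bound score
# for partial placements, so score >= A_min lets us prune the subtree.

def bb_search(orientations_per_cell, eval_area):
    best = {"area": math.inf, "solution": None}

    def search(prefix):
        score = eval_area(prefix)
        if score < best["area"]:                           # bound check / prune
            if len(prefix) == len(orientations_per_cell):  # leaf: full placement
                best["area"], best["solution"] = score, prefix
            else:
                nxt = len(prefix)
                for o in range(orientations_per_cell[nxt]):
                    search(prefix + (o,))

    search(())
    return best["solution"], best["area"]

# Toy instance: 3 cells whose "width" depends on orientation; the area of a
# partial prefix is just the sum of chosen widths (a valid lower bound).
widths = [(4, 2), (3, 1, 5), (2, 6)]
sol, area = bb_search(
    [len(w) for w in widths],
    lambda prefix: sum(widths[i][o] for i, o in enumerate(prefix)),
)
print(sol, area)   # → (1, 1, 0) 5
```

Every time a full placement improves A_min, the bound tightens and later subtrees whose partial score already exceeds it are skipped without being expanded.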



Communication: In a simple recursive search across the graph, the need to communicate arises only when the function returns a solution; but in the branch-and-bound algorithm, tasks need to communicate in order to narrow down the nodes that must be visited, that is, to update the bound used in the search for A_min. There is also a trade-off to establish between the communication cost and the benefit of the bound obtained from communication. We can either centralize a task for keeping track of A_min, or partition the graph into subtrees, each with a task keeping account of its own A_min, which is communicated among subtrees periodically. The problem with a centralized task for tracking A_min is its nonscalability: with increasing depth, communication grows exponentially as the number of tasks processed in parallel increases.

Agglomeration: This deals with practical setbacks related to performance. For the optimization problem at hand, the two main setbacks observed are (1) the cost of generating a huge number of fine-grained tasks and (2) organizing the tasks. The first can be resolved with the help of agglomeration. For example, Ref. [25] suggests creating a task every time there is a call for search in the foreach command of Algorithm 4.4, until a specified depth is reached, after which we switch to a depth-first algorithm; we thereby obtain a task responsible for assessing the search calls in the process. The second setback mainly concerns organizing the tasks: if there is no defined sequence, the tasks will either be performed in the sequence of their creation or be executed randomly. As noted earlier, the traversal then remains breadth first, which is ineffective as it reduces the scope for pruning branches, thereby wasting computation. To solve this, we need to define a sequence of task execution.
Mapping: Mapping deals with assigning the processes (problems) to the processors (workers), typically one problem per processor. First, a set of coarse-grained tasks is generated by traversing the search tree to depth D. These tasks are then allotted to the workers on the basis of each processor's engagement with the tasks. Since only one subtree is processed in a depth-first fashion at a time, pruning is effective. Several variants of this approach are discussed in Ref. [25]; the choice of approach depends on the problem and the processor.



4.5.2 Case study II: Gondwana—optimization of a drinking water distribution system

Definition: Optimizing network design and operations to efficiently distribute water.

Gondwana [27] is a generic software platform to optimize the design of drinking water distribution systems (DWDSs). The purpose of this platform is to help formulate and solve optimization problems pertaining to network design, system operations, etc., in the DWDS industry, as well as to provide researchers with a tool to test their models and build studies. Since the platform is generic, it is applicable to a wide variety of cases and can be scaled to encompass single- and multiple-objective optimization problems. Gondwana stands for Generic Optimization tool for Network Design; its core is built upon a simple input model that takes in various parameters and decision variables (hydraulic network model or operations) as per the degrees of freedom specified by the user. In the following process, the optimization algorithm to be used is specified along with the objective function(s). Various GAs are chosen to solve the objective functions for the following reasons:

- satisfactory performance on large-scale and complex problems;
- aptness for solving nonlinear and nonconvex problems;
- applicability to continuous and discrete problems; and
- easy implementation of the optimization strategies.

The problem definition requires defining the objective function(s) for minimizing the cost involved, which is the most common application of optimization techniques [28]. For new networks this is relatively straightforward, but in the case of existing networks, the monetary value of those networks must also be considered in the cost (objective) function. Van Thienen et al. [29] discuss the quantitative definition of the objective(s) for network optimization.
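As a concrete, entirely hypothetical illustration of the kind of GA setup used for pipe dimensioning (Table 4.3), the sketch below evolves one discrete diameter per pipe against a cost-plus-penalty objective; the diameter catalogue, prices, and demand threshold are invented for the example:

```python
import random

# GA sketch in the spirit of Gondwana's pipe-dimensioning problem: the
# genome is one discrete diameter per pipe. All numbers below are made up.

DIAMETERS = [100, 150, 200, 300]                      # mm, hypothetical catalogue
COST = {100: 1.0, 150: 1.6, 200: 2.5, 300: 4.2}       # cost per metre, made up
DEMAND = 180                                          # minimum adequate diameter

def objective(genome):
    cost = sum(COST[d] for d in genome)
    penalty = sum(10.0 for d in genome if d < DEMAND)  # capacity violation
    return cost + penalty                              # lower is better

def ga(n_pipes=6, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    pop = [[rng.choice(DIAMETERS) for _ in range(n_pipes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        survivors = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_pipes)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                     # random mutation
                child[rng.randrange(n_pipes)] = rng.choice(DIAMETERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=objective)

best = ga()
```

The penalty term is how hard hydraulic constraints are typically folded into a single-objective GA: an undersized pipe is allowed in the genome but made so expensive that selection quickly eliminates it.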


4.6 Scope of advancements and future research

Deep learning and artificial intelligence with fog: Fog can be used to process data in parallel across different devices. Usually, graphics processing units are used for this task, but fog can prove a viable alternative. Ref. [30] uses the geographical diversity of sensors to improve recognition accuracy and decrease communication cost; the model is found to reduce communication cost by a factor of 20. Ref. [31] introduces edge stochastic gradient descent, a decentralized stochastic gradient descent technique for solving a large linear regression on the edge network. The edge devices form a mesh network, and the algorithm works by cooperating to optimize the objective function over the edge network. The proposed algorithm is completely decentralized and does not require synchronization [32] (Tables 4.1–4.3).

TABLE 4.1 Advantages and disadvantages of HC, SA, and GA.



Technique | Advantages | Disadvantages
Hill climbing | Can easily handle conversions and be used directly in discrete domains. A good algorithm for pure optimization problems, where the motive is finding the best state with respect to the objective function. Used in circuit design, portfolio management, shop scheduling, etc. | Not suited to cases where the heuristic value drops; hence it is not an efficient method.
Simulated annealing | Efficient in dealing with random systems and cost functions. Can find optimal solutions to problems and handle complex cases with simple code. Highly effective where heuristic methods cannot be used. | If the cost function is expensive to compute, the annealing schedule becomes slow. The method cannot indicate whether the calculated optimal solution is good or not.
Genetic algorithm | A concept that is easy to understand and implement compared with other algorithms. Searches a population of points rather than a single point. A good algorithm for noisy environments. | Tuning the algorithm is still an art. It is time consuming and computationally expensive.
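The decentralized edge-SGD idea mentioned above (Ref. [31]) can be sketched as a ring of cooperating nodes, each holding private data; the data, topology, and step size below are invented for illustration:

```python
import random

# Toy decentralized SGD for linear regression over a ring of edge nodes:
# each node takes a gradient step on its private shard, then averages its
# model with its two ring neighbours. No central coordinator is involved.

random.seed(0)
TRUE_W = 3.0
NODES, ROUNDS, LR = 4, 200, 0.05

data = []                                   # one private shard per node
for _ in range(NODES):
    shard = []
    for _ in range(25):
        x = random.uniform(-1.0, 1.0)
        shard.append((x, TRUE_W * x + random.gauss(0.0, 0.01)))
    data.append(shard)

w = [0.0] * NODES                           # one local model per node

for _ in range(ROUNDS):
    # local gradient step on each node's own data (mean squared error)
    for i in range(NODES):
        g = sum(2.0 * (w[i] * x - y) * x for x, y in data[i]) / len(data[i])
        w[i] -= LR * g
    # gossip: synchronous averaging with ring neighbours
    w = [(w[i - 1] + w[i] + w[(i + 1) % NODES]) / 3.0 for i in range(NODES)]

# every local model ends up close to TRUE_W without central aggregation
```

The gossip averaging step is what removes the need for synchronization with a server: consensus among neighbors propagates around the ring while each node keeps learning from its own data.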



TABLE 4.2 Annotations for the framework of fog computing.

Symbol | Meaning
 | Queuing delay
 | Transmission delay
 | Propagation delay
 | Processing delay
 | Total latency
Nd_p | Latency of a path
 | Length of the frame
 | Transmission rate
 | Number of intermediate links
 | Transmission and propagation delay for each node n ∈ [1, 2, …, k]
f_i | Fog node i
α_i | Workload of fog node f_i
 | Power usage effectiveness
 | Static power consumption
 | Dynamic power consumption
η_i(α_i) | Power effectiveness for workload α_i of node f_i

TABLE 4.3 Overview of optimization problem types and their descriptions.

Problem type | Description | Genome type | Variators*
Pipe dimensioning | Determine the suitable diameter for pipes | Diameter per pipe |
Network blueprint optimization | As 1, taking into account failure of system components | As 1 |
DMA subdivision | Subdivide a network into sectors for maximum sensitivity to leakage signals with the minimum number of flow meters | Per node, 1 of n subsections, with the requirement that subsections are fully connected geometrically | SuGS, SuS, SuM, SuC
Network transition/replacement strategy | Generate an optimal phasing for pipe replacements from a current network to a network blueprint in a predefined number of steps | Per pipe, 1 of n subsections, with no connection restrictions |

*RM: random mutation, NPC: n-point crossover, SeM: selection mutation, FM: flatiron mutation, CM: copycat mutation, LPM: list proximity mutation, SuGS: subdivision growth/shrinkage, SuM: subdivision merge, SuC: subdivision crossover, Co: covariator. Refer to Refs. [33] and [34] for further details.



In this chapter, we have reviewed topics related to optimization techniques, cloud computing, and fog or edge computing, covering their major components. The aim of the chapter is to introduce the fundamental concepts of these technologies, which can build a core foundation for handling critical issues in later research- or project-oriented environments. The focus is on understanding how



an optimization technique can increase the efficiency of a system related to fog computing. Afterward, the diverse approaches that can be applied to the different concepts of cloud and fog computing are covered. The case studies in the chapter help to better understand how an optimization technique can benefit a system; they give an insight into how such systems are implemented and how their processes are designed. There is great scope for this topic from a research or project point of view, as the market has many well-established products; optimization techniques play a major role in increasing efficiency while maintaining low system latency.

References

[1] H. von Alexandria, Heronis Alexandrini opera quae supersunt, 5 vols., in: L. Nix, W. Schmidt (Eds.), Mechanica et catoptrica, vol. 2, B.G. Teubner, Leipzig/Stuttgart, 1899 (reprint 1914).
[2] P. Freeman, The secretary problem and its extensions: a review, International Statistical Review/Revue Internationale de Statistique (1983) 189–206.
[3] F. Bonomi, R. Milito, J. Zhu, S. Addepalli, Fog computing and its role in the internet of things, in: Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, 2012, pp. 13–16.
[4] Y. Liu, J.E. Fieldsend, G. Min, A framework of fog computing: architecture, challenges, and optimization, IEEE Access 5 (2017) 25445–25454.
[5] K.K. Bhardwaj, A. Khanna, D.K. Sharma, A. Chhabra, Designing energy-efficient IoT-based intelligent transport system: need, architecture, characteristics, challenges, and applications, Energy Conservation for IoT Devices, Springer, 2019, pp. 209–233.
[6] S. Kumaram, S. Srivastava, D.K. Sharma, Neural network-based routing protocol for opportunistic networks with intelligent water drop optimization, International Journal of Communication Systems 33 (8) (2020) e4368.
[7] X.-S. Yang, Optimization Techniques and Applications with Examples, John Wiley & Sons, 2018.
[8] J. Edmonds, Definition of optimization problems, How to Think About Algorithms, Cambridge University Press, 2008, pp. 171–172.
[9] Objective—an element of optimization problem. (n.d.). Retrieved from (accessed 12.07.20).
[10] Constraint—an element of optimization problem. (n.d.). Retrieved from (accessed 12.07.20).
[11] Box (of variables)—an element of optimization problem. (n.d.). Retrieved from (accessed 12.07.20).
[12] Renuka Kumari, J. (n.d.). Reliability of metallic framed structures.
[13] D.N. Kumar, Classical and advanced techniques for optimization, Optimization Methods 5 (2) (2009) 1–12.
[14] A. Khan, N. Mushtaq, S.H. Faraz, O.A. Khan, M.A. Sarwar, N. Javaid, et al., Genetic algorithm and earthworm optimization algorithm for energy management in smart grid, in: International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, 2017, pp. 447–459.
[15] Z.Á. Mann, Optimization problems in fog and edge computing, Fog and Edge Computing: Principles and Paradigms, Wiley, Hoboken, NJ, 2019.
[16] R.K. Naha, S. Garg, Multi-criteria-based dynamic user behaviour aware resource allocation in fog computing, arXiv preprint arXiv:1912.08319, 2019.
[17] R. Li, M. Li, H. Liao, N. Huang, An efficient method for evaluating the end-to-end transmission time reliability of a switched Ethernet, Journal of Network and Computer Applications 88 (2017) 124–133.
[18] Y. Xiao, M. Krunz, Distributed optimization for energy-efficient fog computing in the tactile internet, IEEE Journal on Selected Areas in Communications 36 (11) (2018) 2390–2400.
[19] C.T. Do, N.H. Tran, C. Pham, M.G.R. Alam, J.H. Son, C.S. Hong, A proximal algorithm for joint resource allocation and minimizing carbon footprint in geo-distributed fog computing, in: 2015 International Conference on Information Networking (ICOIN), 2015, pp. 324–329.
[20] D.K. Sharma, A. Garg, A. Jha, Assorted cat swarm optimisation for efficient resource allocation in cloud computing, in: 2018 Fourteenth International Conference on Information Processing (ICInPro), 2018, pp. 1–6.
[21] A. Khanna, S. Arora, A. Chhabra, K.K. Bhardwaj, D.K. Sharma, IoT architecture for preventive energy conservation of smart buildings, Energy Conservation for IoT Devices, Springer, 2019, pp. 179–208.
[22] G.-G. Wang, S. Deb, L.D.S. Coelho, Earthworm optimisation algorithm: a bio-inspired metaheuristic algorithm for global optimisation problems, International Journal of Bio-Inspired Computation 12 (1) (2018) 1–22.
[23] S. Bitam, S. Zeadally, A. Mellouk, Fog computing job scheduling optimization based on bees swarm, Enterprise Information Systems 12 (4) (2018) 373–397.
[24] B. Yuce, M.S. Packianather, E. Mastrocinque, D.T. Pham, A. Lambiase, Honey bees inspired optimization method: the bees algorithm, Insects 4 (4) (2013) 646–662.
[25] Case study: floorplan optimization. (n.d.). Retrieved from (accessed 09.07.20).
[26] J. Anvik, S. MacDonald, D. Szafron, J. Schaeffer, S. Bromling, K. Tan, Generating parallel programs from the wavefront design pattern, in: Proceedings of the 16th International Parallel and Distributed Processing Symposium, 2002, 8 pp.



[27] P. van Thienen, I. Vertommen, Gondwana: a generic optimization tool for drinking water distribution systems design and operation, Procedia Engineering 119 (2015) 1212–1220.
[28] P. van Thienen, I. Vertommen, K. van Laarhoven, Practical application of optimization techniques to drinking water distribution problems, EPiC Series in Engineering 3 (2018) 2136–2143.
[29] P. van Thienen, I. Vertommen, G. Mesman, Advanced modelling and optimization in drinking water distribution systems: technical requirements and steps for water utilities, Water Solutions (2017).
[30] S. Teerapittayanon, B. McDanel, H. Kung, Distributed deep neural networks over the cloud, the edge and end devices, 2017.
[31] G. Kamath, P. Agnihotri, M. Valero, K. Sarker, W.-Z. Song, Pushing analytics to the edge, in: 2016 IEEE Global Communications Conference (GLOBECOM), 2016, pp. 1–6.
[32] A. Yousefpour, C. Fung, T. Nguyen, K. Kadiyala, F. Jalali, A. Niakanlahiji, et al., All one needs to know about fog computing and related edge computing paradigms: a complete survey, Journal of Systems Architecture (2019).
[33] K. van Laarhoven, I. Vertommen, P. van Thienen, Problem-specific variators in a genetic algorithm for the optimization of drinking water networks, Drinking Water Engineering and Science 11 (2) (2018) 101–105.
[34] I. Vertommen, K. van Laarhoven, P. van Thienen, C. Agudelo-Vera, T. Haaijer, R. Diemel, Optimal design of and transition towards water distribution network blueprints, in: Multidisciplinary Digital Publishing Institute Proceedings, vol. 2, 2018, p. 584.

Chapter 5

Bluetooth security architecture cryptography based on genetic codons Asif Ikbal Mondal1, Bijoy Kumar Mandal2, Debnath Bhattacharyya3 and Tai-Hoon Kim4 1

Computer Science and Engineering Department, Dumkal Institute of Engineering & Technology, Murshidabad, India, 2Computer Science and Engineering

Department, NSHM Knowledge Campus, Durgapur, India, 3Computer Science and Engineering Department, Koneru Lakshmaiah Education Foundation, Guntur, India, 4Computer Science and Engineering Department, Global Campus of Konkuk University, Chungcheongbuk-do, Korea



5.1.1 Bluetooth

The need for wireless technology has been fulfilled by the invention of Bluetooth, a short-range radio technology designed to fulfill the particular need for wireless interconnection between different devices. Bluetooth is a replacement for wires but has some constraints: cost and power consumption. Bluetooth follows ad hoc networking, or ad hoc connectivity, in which communication is set up on request. This ad hoc connection requires special security functionality [1,2]. The Bluetooth system stack is as follows:

1. Physical layer: This consists mainly of the modem; the processing of radio signals takes place here. Filters, interference rejection, and the sensitivity limit are implemented in this layer.
2. Baseband layer: This is present above the physical layer and consists of two parts, an upper part and a lower part. The processes taking place in this layer are packet formation, header creation, checksum calculation, retransmission, encryption, and decryption. The link controller implements the baseband protocols and procedures [3,4].
3. Link manager: The link manager manages the Bluetooth links. It sets up links, negotiates features, and administers connections that are up and running.
4. Logical Link Control and Adaptation Protocol (L2CAP): Its job is to reformat large chunks of data into smaller chunks [5].

Bluetooth can work with a host having computational power, such as a laptop or a mobile phone; in such cases, the Bluetooth module handles only the lower part of the protocol and the host handles the upper part [6].
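As a toy illustration of the L2CAP role just described, the function below splits an upper-layer payload into smaller fragments; the 27-byte MTU is an arbitrary value chosen for the example, not a protocol constant:

```python
# Sketch of L2CAP-style segmentation: split a payload into MTU-sized
# chunks, preserving order so the receiver can reassemble them.

def segment(payload: bytes, mtu: int = 27):
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

frags = segment(b"A" * 70)
print([len(f) for f in frags])   # → [27, 27, 16]
```

Reassembly on the receiving side is simply concatenation of the fragments in order.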

5.1.2 Bluetooth security architecture

The Bluetooth architecture provides a description of the different types of keys and of the process of link encryption. A symmetric-key encryption mechanism is used by Bluetooth systems [7,8].

Key types:
1. Link key: The link key is used for authenticating the link between two devices. It is also used in the encryption process, which is called link key encryption. There are two types of link keys:
   a. Semipermanent keys, distinguished [9] as:
      - Unit key: a link key that one device generates by itself and uses as a link with other devices. It is known to the device itself and to other devices as well.
      - Combination key: generated in association with another device and known only to the devices involved in its generation.

Applications of Computational Intelligence in Multi-Disciplinary Research. DOI: © 2022 Elsevier Inc. All rights reserved.




b. Temporary key: There are two types of temporary keys: G Initialization key: This helps in the pairing of two devices and has a short lifespan. G Master key: This is generated by the master before the setup of encrypted broadcast communication. 2. Ciphering key: There are two types of keys: G The encryption key: This controls the ciphering. The constrained encryption key is not used directly; rather, it replaces the encryption key, making a selection of independent bits and ranging from 8 to 128 bits. [10,11]. G Payload key: It is derived from the constrained encryption key. 3. Pairing and user interaction There are two types of passkeys in Bluetooth technology: the variable passkeys and the fixed passkeys. a. Authentication Bluetooth possesses an authentication process to avoid connection with an unknown device, which might be dangerous. The authentication mechanism is used to identify devices. The authentication process is called the challengeresponse scheme, in which one device, known as the verifier, sends a random challenge to the other device, known as the claimant, and expects a valid response in return [12]. b. Link privacy Bluetooth uses radio technology to exchange data or create a connection. The major problem associated with this is eavesdropping. Eavesdropping means listening to the traffic from Bluetooth. A simple measure to check for eavesdropping is frequency hopping, but it did not prove to be of much interest as there are only 79 channels used, and upon listening to them in parallel the attacker can get all the information [13]. Security in Bluetooth devices can be implemented but it cannot secure the path from the source to the destination. An alternative for the same is encryption of the sent message [14]. 4. Protein Synthesis a. Cell: cells are the fundamental and functional unit of every living organism. b. Gene: Heredity information is stored in genes, and hence they can be considered as the fundamental physical and functional unit. 
Genes are made up of DNA (deoxyribonucleic acid). Genes have instructions for the synthesis of protein. However, many genes do not code for proteins. Human genes vary in size from a few hundred DNA bases to more than 2 million bases. The Human Genome Project estimated that humans have between 20,000 and 25,000 genes. Every person inherits characters from their parents, and they possess two sets of genes, one inherited from each parent. Only a small amount of genes are different in each individual, and the rest of the genes share a common character in most individuals, which means most of them are similar. Alleles are forms of the same genes with a small difference in their sequence of DNA base. The small difference in individual characteristics is due to the small amount of distinct genes in the individual [15]. Genes may have long names, and symbols are also assigned to genes, which are a short combination of letters that represent an abbreviated version of the gene name. c. Chromosome: In the nucleus of each cell, the DNA molecule is packed into thread-like structures called chromosomes. A chromosome is a structure formed by tightly winding DNA a large number of times over protein molecules called histones. These form a structure called a scaffold. Chromosomes are only visible under the microscope during cell division; at other times, they are invisible to the microscope. During the process of cell division, the DNA that forms the chromosome is compactly arranged, and due to this, the chromosome becomes visible under the microscope. Each chromosome has a constriction called the centromere, which divides the chromosome into two sections or arms. The short arm of the chromosome is referred to as the “p” arm. The characteristic shape of the chromosome is due to its long arm, and it can be used to help point out the location of specific genes. d. DNA: DNA is the hereditary material of almost all living organisms. 
DNA is mainly located in the cell nucleus, but some organelles, namely the mitochondria and chloroplasts, also contain DNA. DNA consists of four nitrogenous bases called adenine, guanine, cytosine, and thymine; the order of these bases determines the information available for building and maintaining an organism. Each base is linked to a sugar molecule and a phosphate molecule.
e. Noncoding genes: Only about 1% of the genome is involved in protein synthesis; the remaining 99% is noncoding and does not code for any protein. Noncoding DNA contains many types of regulatory elements.

Bluetooth security architecture cryptography based on genetic codons Chapter | 5







- Promoter: It provides the binding site for the protein machinery that carries out transcription. Promoters are mainly found ahead of the gene on the DNA strand.
- Enhancer: It provides binding sites for proteins that help activate transcription. Enhancers can be found on the DNA strand before or after the gene they control, sometimes far away [16].
- Silencer: Like an enhancer, it provides binding sites, but for proteins that repress transcription. Silencers can be found before or after the gene they control and can be some distance away on the DNA strand.
- Insulator: It provides binding sites for proteins that control transcription in a number of ways. Some prevent enhancers from aiding in transcription, while others prevent structural changes in the DNA that repress gene activity. Some insulators can function as both an enhancer blocker and a barrier.

RNA: Ribonucleic acid (RNA) is a polymer molecule involved in various biological activities such as the coding, decoding, regulation, and expression of genes. RNA and DNA are nucleic acids and, along with lipids, proteins, and carbohydrates, constitute the four major macromolecules essential for all known living organisms. RNA is formed by chaining nucleotides together, but in nature it is mainly found in single-stranded form, whereas DNA is found in double-stranded form.

RNA processing: RNA has the power to modify other RNAs. Introns are spliced out of pre-RNA by spliceosomes, which contain several small nuclear RNAs, or the introns can be ribozymes that splice themselves. RNA can also be altered by having its nucleotides modified to nucleotides other than A, C, G, or U. In eukaryotes, the modification of RNA nucleotides is in general directed by small nucleolar RNAs found in the nucleolus and Cajal bodies.

Intron: An intron is any nucleotide sequence within a gene that is removed by RNA splicing during the maturation of the final RNA product.
The word intron is derived from the term intragenic region, meaning a region inside a gene. Intron refers both to the DNA sequence within a gene and to the corresponding sequence in an RNA transcript. The sequences that are joined together in the final mature RNA after RNA splicing are known as exons.

The Genetic Code: A total of 64 triplets of nucleotides are available for the genetic code; these triplets are the genetic codons. With three exceptions, each codon encodes 1 of the 20 amino acids used in the synthesis of proteins. Since there are only 20 amino acids for the 64 available triplets, more than one codon is available for most amino acids. One codon, AUG, serves two related functions:
- It signals the start of translation.
- It codes for the incorporation of the amino acid methionine (Met) into the growing polypeptide.

The genetic codons can be expressed as either RNA codons or DNA codons. RNA codons occur in messenger RNA and are the codons that are actually "read" during the synthesis of polypeptides, a process called translation (Fig. 5.1). Each mRNA molecule acquires its sequence of nucleotides by transcription from the corresponding gene. New genes are discovered at the DNA level before they are discovered as mRNA or as a protein product, which makes it necessary to derive a DNA codon table for reference. DNA codons differ from RNA codons in that they contain "T" in place of "U", that is, thymine in the place of uracil (Fig. 5.2).
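The T-for-U substitution between the two codon notations is mechanical; a minimal Python sketch using `str.translate` (function names are illustrative):

```python
# Convert between RNA and DNA codon notation: RNA uses uracil ("U"),
# DNA uses thymine ("T"); the other three bases are shared.
RNA_TO_DNA = str.maketrans("U", "T")
DNA_TO_RNA = str.maketrans("T", "U")

def rna_to_dna(codon: str) -> str:
    return codon.translate(RNA_TO_DNA)

def dna_to_rna(codon: str) -> str:
    return codon.translate(DNA_TO_RNA)
```

For example, the RNA start codon "AUG" corresponds to the DNA codon "ATG".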


5.2 Survey of literature

The term Bluetooth technology was coined by Dr. Jaap Haartsen at Ericsson in 1994. The name Bluetooth was adopted from that of a renowned 10th-century Viking king, Harald Bluetooth, who united Denmark and Norway. Originally designed as an alternative to RS-232 telecommunication cables [17,18], Bluetooth occupies a frequency range very similar to that of Wi-Fi but was designed for a much shorter range and lower power consumption. A separate group, the Bluetooth Special Interest Group (SIG), was formed to look after developments related to Bluetooth, such as the publication and promotion of standards and their subsequent revisions [19]. At first, the SIG consisted of:

- Ericsson
- IBM
- Intel
- Nokia
- Toshiba.


Applications of Computational Intelligence in Multi-Disciplinary Research

FIGURE 5.1 The RNA codon.

FIGURE 5.2 The DNA codon.

The number of members gradually increased, reaching 4000 by the end of 1999. The first commercial Bluetooth device, a mobile handset, was launched in 1999. With the increasing use of Bluetooth-enabled devices, the need for secure [20] transmission arose, and the best choice was to encrypt the data sent from one device to another. For this purpose, the E0 algorithm was designed, but a number of attacks subsequently led to the search for new and safer algorithms. In 1999, Miia Hermelin and Kaisa Nyberg discovered that the E0 algorithm [21] can be broken in 2^64 operations instead of 2^128 if 2^64 bits of output are known. In 2000, Juha Vainio reported on the misuse of E0. Later, in 2005, Lu, Meier, and Vaudenay published a cryptanalysis based on conditional correlation. SAFER+ (based on SAFER, 1993) was developed as an improvement over the E0 algorithm; thus far, no attack against it has been reported, but this is no guarantee of security. Nowadays, several types of Bluetooth attacks have been developed, and the use of Bluetooth in IoT devices has led to the demand for better security practices and schemes [22,23].

The encryption algorithm used in Bluetooth technology was E0, but it was later found to be vulnerable to attacks. SAFER+ was then developed and used in association with the E0 algorithm; as stated, no attack against SAFER+ has been recorded, though this does not guarantee its security. A good encryption algorithm is one that is of low complexity and can be implemented in either hardware or software [24]. The approach used here is a stream cipher, as it is ideal for transmission purposes. From the work done in this paper, it is evident that the encryption scheme designed here is simple and can be implemented in both hardware and software. A mapping scheme is followed that needs only two rounds: first, the translation of the plaintext into DNA codon form, and second, the masking of the former into a transferable form. In the E0 algorithm or SAFER+, four rounds or more are required to reach a good level of encryption, although in SAFER+ reasonable strength can be expected even after the first round.

All such processes generate numbers and use them to create the keys employed in the cryptosystem [25]. A computer or any digital system is capable of generating and manipulating numbers (keys) to decode a ciphertext [26,27], but no digital system holds the information related to genetic codons, and generating such keys with mathematical tools like permutations and combinations is also difficult; more domain knowledge is needed to build this genetic cryptosystem. Our approach is to use the genetic codon table; in this project, only the genetic codons found in humans are used, as the genetic codon list from the Human Genome Project is very large. The genetic codons of other living organisms can also be used for the same purpose, which would provide a diverse cryptosystem with lower implementation complexity [28,29].
A dictionary attack attempted against the scheme yielded no insight into breaking the code. The introns used here are subject to asynchronous mutation within the system, which plays a key role in security [30]. This makes the decoding process very difficult even when the masking key and the promoter length are known.


5.3 Plaintext-to-ciphertext conversion process

Encryption is the process of converting plain readable text into an unreadable format and making it accessible only to the authorized party. Encryption involves computation [31,32]: it can be plain substitution or involve mathematical computation of the substitute. A lot of computation, and therefore power/energy consumption, is involved in the process. The cipher grows stronger as the number of rounds increases, but all encryption processes follow some mathematical process; if that process is identified and applied, the cipher can be decoded by someone who is not the authorized person [33,34].

As stated, encryption and decryption involve a good amount of computational power, so the cost of encryption is high. This is well suited to devices with a good power source and high computational power, but for battery-powered, low-computation devices it is a burden. One such example is a Bluetooth device running on a battery. Our goal is the development of encryption that is secure and economic in terms of computation and power consumption.

The process applied here for encryption and decryption is the mapping of the genetic codons involved in protein synthesis to alphabetic characters [35]. Protein formation is a process taking place in living organisms; it involves many steps and the information-storing unit, the DNA. Specific genetic codons are involved in the formation of proteins, and the same process is applied here to encrypt the plaintext [36,37]. Altogether, 64 codons are available, which store the information for the type of protein they form. In the ciphertext, the genetic codons substitute for the alphabet characters.
The genetic codons are not the only elements responsible for the encryption and decryption process [38,39]. In protein synthesis, the DNA contains the information for the formation of specific proteins, and the sequence of codons must be extracted before synthesis can start: first, the promoter genes, which carry the information on where to start the process, are located; then the intron genes, which are not involved in protein formation, are removed; and then the transcription process starts. This process of protein synthesis is exploited here to encrypt and decrypt the whole message or data to be sent (Fig. 5.3).

5.3.1 Basic workflow

Extra information needs to be passed:
- Promoter size
- Mask data.



FIGURE 5.3 Basic workflow between two devices.

Encryption

The following are the steps in the encryption process:
1. Loading of the message to be sent
2. Conversion of the message into DNA/RNA format
3. Introduction of promoter genes at the start of the output produced in step 2
4. Feeding of introns at random places in the output of step 3
5. Masking of the output of step 4 with random characters.

Decryption

The following are the steps in the decryption process:
1. Reception of the message
2. Removal of the mask
3. Removal of the introns
4. Removal of the promoters
5. Mapping of the output of step 4 to the DNA table to get the plaintext.

5.3.2 Algorithm

Encryption

Plaintext to DNA/RNA codon conversion

The steps involved in this process are as follows:
1. Declare two arrays: the first for storing the English alphabet in lowercase, called the character table, and the second for storing the DNA/RNA genetic codon table. A genetic codon is a triplet formed from four characters (nitrogenous bases found in living organisms), "A", "C", "G", and "T"/"U": "T" if the DNA format is used, "U" if the RNA format is used.
2. Declare a variable "stage_1_cipher".
3. Load the text file to be encrypted.
4. Start reading each character of the text file.
5. Check whether the character is present in the character table.
6. If the character is found, get the index "char_index" of the character in the character table.
7. Using "char_index", get the genetic code at the same index position in the genetic codon table.
8. Append the genetic code to "stage_1_cipher".
9. Repeat steps 4 to 8 until the end of the file/string is reached.
10. End the program.
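The steps above can be sketched in Python. The codon ordering and the 27-character table (26 letters plus a space) are illustrative assumptions; the chapter does not fix either:

```python
from itertools import product

# Illustrative tables: each plaintext character maps to the genetic codon
# at the same index. The actual codon ordering used by the authors is not
# specified, so the first 27 of the 64 possible triplets over
# "A", "C", "G", "U" are used here as a stand-in.
CHAR_TABLE = list("abcdefghijklmnopqrstuvwxyz ")                     # 27 chars
CODON_TABLE = ["".join(t) for t in product("ACGU", repeat=3)][:len(CHAR_TABLE)]

def to_codons(plaintext: str) -> str:
    """Steps 4-9: replace every known character with its codon."""
    stage_1_cipher = ""
    for ch in plaintext:
        if ch in CHAR_TABLE:                        # step 5: skip unknowns
            char_index = CHAR_TABLE.index(ch)       # step 6
            stage_1_cipher += CODON_TABLE[char_index]  # steps 7-8
    return stage_1_cipher
```

With this illustrative ordering, `to_codons("ab")` yields `"AAAAAC"`.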


Addition of promoters

This stage involves two processes:
- First: generation of promoters
- Second: addition of promoters to the output of the first process.

The promoter is the part of the ciphertext that must be removed to correctly locate the decryption starting point. It is of variable length and uses the same character set as the DNA text produced in the above process. In genetics, the promoter is the point at which protein synthesis starts.

Generation of promoters

The steps involved in this process are as follows:
1. Initialize an array with the characters "A", "G", "C", "U".
2. Create a variable named "promoter" to store the created promoter string.
3. Generate a random number in the specified range; here, the range is between 10 and 500. This is the length of the promoter to be generated.
4. Start adding characters from the array, chosen at random. There is no sequence to the chosen characters and they do not confer any meaning.
5. Keep adding the characters to the string "promoter". Remove any leading and trailing whitespace from the string.
6. Repeat steps 4 and 5 until the promoter length generated in step 3 is met.
7. Store the result in the "promoter" string.
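A minimal sketch of this generator, assuming Python's standard `random` module (the chapter does not prescribe a random source):

```python
import random

BASES = ["A", "G", "C", "U"]

def generate_promoter(low: int = 10, high: int = 500) -> str:
    """Steps 3-7: pick a random length in [low, high], then build a string
    of that many randomly chosen bases (no whitespace is produced, so
    nothing needs stripping)."""
    length = random.randint(low, high)
    return "".join(random.choice(BASES) for _ in range(length))
```

Because the characters are chosen independently, the promoter carries no meaning of its own, exactly as step 4 requires.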




Promoter addition

The two algorithms mentioned above give the necessary material for the next process. Promoter addition takes place as follows:
1. The output of the process in which the plaintext is changed into DNA/RNA format is stored in a variable called "feed."
2. The promoter obtained from promoter generation is stored in the "promoter" string.
3. The strings promoter and feed are concatenated as follows:

   New_string = Promoter + Feed

It is necessary to place the promoter first.

Intron addition

Intron addition involves three processes:
- First: intron number generation, that is, determining the number/quantity of introns to be placed in the ciphertext generated up to this round.
- Second: finding the positions of intron placement in the above ciphertext.
- Third: addition of the introns at their respective positions.

Intron number generation

1. Input the ciphertext generated up to this round and store it in the variable "Feed."
2. Calculate the length of the string "Feed."
3. Check the length:
   If the length of the string is between 1 and 10:
       Add nothing to the text.
   Else:
       Check whether the length of the string lies within the current limits:
       If TRUE:
           Take the upper limit of the specified range.
           Return the number of introns as the upper limit divided by 200.
       Else:
           Increase the lower limit by a factor of 10.
           Increase the upper limit by a factor of 10.

Position to place the introns

The steps involved in the process are as follows:
1. Declare an array "intron_position".
2. Generate random numbers in the range (3, length_of_intron - 3).
3. Check whether the generated number is already present in the array:
   If PRESENT:
       Skip and go to the next number.
   Else:
       Append the number to the array/list.
4. Keep appending the numbers generated in step 2 until the length of the array is equal to the length of the promoter.
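The three intron processes can be sketched together. The initial length bracket (10, 100), the interpretation of the random range as the feed length, and the guard against too few insertion slots are assumptions the chapter does not state:

```python
import random

def intron_count(feed: str) -> int:
    """Bracket the text length between a lower and an upper limit, widening
    both by a factor of 10 until the length fits; the count is then the
    upper limit divided by 200. The initial bracket (10, 100) is assumed."""
    n = len(feed)
    if n <= 10:
        return 0                       # "between 1 and 10: add nothing"
    lower, upper = 10, 100
    while n > upper:                   # widen the bracket by a factor of 10
        lower *= 10
        upper *= 10
    return max(1, upper // 200)        # guard: insert at least one intron

def intron_positions(feed: str, count: int) -> list:
    """Distinct random insertion points kept in the range (3, len(feed) - 3)."""
    count = min(count, len(feed) - 5)  # guard: enough distinct slots exist
    positions = set()
    while len(positions) < count:
        positions.add(random.randint(3, len(feed) - 3))
    return sorted(positions)

def add_introns(feed: str, introns: list) -> str:
    """Insert a randomly chosen intron at each position, right to left so
    earlier insertions do not shift the later positions."""
    for pos in reversed(intron_positions(feed, intron_count(feed))):
        feed = feed[:pos] + random.choice(introns) + feed[pos:]
    return feed
```

Inserting from the rightmost position leftward is a deliberate choice: each splice leaves all smaller indices untouched.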

Placing the introns at their positions







4. Check the time:
   If the minute is in the range 0 to 19:
       The pair key is equal to promoter_length - 255.
   If the minute is in the range 20 to 39:
       If the length of the promoter is greater than 255:
           The pair key is equal to 255 + promoter_length.
       Else:
           The pair key is equal to promoter_length - 255.
   If the minute is in the range 40 to 59:
       The pair key remains the same.

The mask generated in this process is sent to the other device once the pairing is done.

Decryption

Removal of the mask

The steps involved in this process are as follows:
1. Declare an array mask_remove.
2. Initialize an array DNA.
3. The mask string received during the pairing process is converted into individual characters; for example, the string XYWS is converted into "X", "Y", "W", "S" and stored in the array mask_array.
4. Read each character and get its index in the mask array; using that index, find the corresponding item in the DNA array.
5. Replace each character of the text with the element from the DNA array.
6. Repeat steps 4 and 5 to get the text in DNA format.
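Masking and unmasking amount to index-for-index substitution between the DNA alphabet and the received mask string; a sketch (the four-character alphabets are illustrative):

```python
# The mask is exchanged at pairing time: each DNA character is replaced by
# the mask character at the same index, and unmasking inverts the lookup.
DNA = ["A", "C", "G", "U"]

def apply_mask(text: str, mask: str) -> str:
    mask_array = list(mask)                 # step 3: split into characters
    return "".join(mask_array[DNA.index(ch)] for ch in text)

def remove_mask(text: str, mask: str) -> str:
    mask_array = list(mask)
    # steps 4-5: index into the mask array, replace with the DNA element
    return "".join(DNA[mask_array.index(ch)] for ch in text)
```

With the chapter's example mask "XYWS", unmasking exactly undoes masking.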

Removal of the introns

The steps involved in this process are as follows:
1. Initialize the intron array, which contains the introns.
2. Create a variable named purefeed to store the intron-removed text.
3. Read the output.
4. Remove each intron from the text.
5. Store the text without the introns in the variable purefeed.
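A minimal sketch, assuming the receiver holds the same intron list used by the sender:

```python
def remove_introns(text: str, introns: list) -> str:
    """Steps 1-5: strip every known intron from the unmasked text. The
    chapter's introns have a word length of 4 versus codon length 3,
    which helps keep them distinguishable from the codon text."""
    purefeed = text
    for intron in introns:
        purefeed = purefeed.replace(intron, "")
    return purefeed
```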

Removal of the promoter

Promoter removal just requires the removal of that part of the noncoding DNA. The length of the promoter is passed at the time of pairing, but it cannot be used directly: it is time dependent. The pairing key is passed and processed to recover the original length of the promoter. Removing the promoter involves the following steps:
1. The purefeed generated as the output is used.
2. The text for further processing is obtained by removing the part of the string whose length is equal to the promoter length.
3. The removal should strip the part of the text, starting from the beginning, whose length is equal to the promoter length.
4. After this, the text is ready to be converted to the original text.
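Since the promoter always sits at the start, removal is a single slice once the true length has been recovered from the pairing key; a sketch:

```python
def remove_promoter(purefeed: str, promoter_length: int) -> str:
    """Steps 1-4: drop the leading promoter_length characters; the
    remainder is ready for codon-to-plaintext conversion."""
    return purefeed[promoter_length:]
```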

Conversion of the ciphertext without the intron and promoter to plaintext

The steps involved in the process are as follows:
1. Input the text generated from the above process.
2. Declare two arrays: the first, alphabet, which holds the English alphabet; and the second, which holds the genetic codons.
3. Declare a variable word to store the triplet.
4. Add three characters to make a word.
5. Check whether the word exists in the genetic codon list:
   If TRUE:
       Get the index of the genetic codon in the genetic codon list.
       Find the item in the alphabet list at the corresponding index.
       Replace the genetic codon with that character.
6. Repeat steps 4 and 5 to get the original text.
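The decoding loop can be sketched as the inverse of the encryption mapping; the tables must match those used to encrypt (the same illustrative ordering is assumed here):

```python
from itertools import product

# Inverse of the encryption mapping. The ordering is the same illustrative
# stand-in used on the encryption side; the chapter does not fix it.
ALPHABET = list("abcdefghijklmnopqrstuvwxyz ")
CODONS = ["".join(t) for t in product("ACGU", repeat=3)][:len(ALPHABET)]

def to_plaintext(codon_text: str) -> str:
    plaintext = ""
    for i in range(0, len(codon_text), 3):
        word = codon_text[i:i + 3]           # steps 3-4: take a triplet
        if word in CODONS:                    # step 5: look it up
            plaintext += ALPHABET[CODONS.index(word)]
    return plaintext
```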

5.3.3 Analysis and discussion

Encryption involves mathematical processes and a number of rounds to make the cipher strong. Encryption done using genetic codon/DNA-based cryptography aims at achieving the same strength of encryption in a single round, thus reducing power consumption and the need for high computational capacity. The ciphertext produced needs preprocessing before decryption because it does not depend on any mathematical process; in the encryption available until now, if a function corresponding to a round of encryption is identified, that round becomes easy to decipher. The mask and the introns make it difficult to decode the overall process, and they need to be removed to recover and understand the original process.

There are a number of genes in humans, estimated at 30,000. In total, 64 codons are responsible for the production of proteins in humans. Upon mapping each codon to every character of the alphabet, we get a set of codons for the English alphabet. Each alphabet position can be changed to get a new combination (which might not be completely new). We can get 27! combinations (for the 26 letters of the alphabet plus a blank space). This is a huge number of combinations.

Trying to decrypt the ciphertext without removing the mask and introns will not give the desired result. The most important part is removing the mask, as the mask is not predefined in the program; it is generated every time encryption and pairing take place. During the pairing process, the mask information is sent to the other device once the acknowledgment for the pairing is confirmed. The mask is sent as a string, and the string needs to be processed to get the mask information (converted into individual characters and stored in an array). Even if someone knows the base sequence, they still need to know the content and order of the mask.


- Decryption without removing the introns: The introns used have a word length of 4, compared to the genetic codons, which have a word length of 3.
- Decryption without removing the promoter: The promoter marks the point at which the decryption process should start. To decode the text without the necessary information, the promoter must be removed. Promoters are created randomly and have no mapping available, so they hinder decryption in the absence of proper information (Figs. 5.4-5.6).

FIGURE 5.4 Device layout (the user needs to feed the device name into both devices).
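The magnitude of the 27! key space claimed above is easy to check:

```python
import math

# The chapter's key-space claim: 27 plaintext symbols (26 letters plus the
# blank space) can be assigned to codons in 27! distinct orders.
key_space = math.factorial(27)   # about 1.09e28 possible assignments
```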



FIGURE 5.5 Pairing request from one device to another.

FIGURE 5.6 Data sent from one device and received by the other device.



5.4 Conclusion

Bluetooth is a low-energy technology that uses radio waves for communication, and the data sent is encrypted. Encryption algorithms have high complexity and require a good amount of energy and computational power; the encryption algorithm in Bluetooth needs to offer high security while consuming less energy. DNA-based encryption is completed in a single round, whereas other algorithms require multiple rounds to encrypt the data with sufficient strength. When Bluetooth is attached to a host that provides good computational capacity and energy, the processing requirements are not a concern, but battery-operated systems need an algorithm of sufficient strength with economic processing. The large set of genetic codons/base pairs makes the mapping hard to predict. This shows that DNA-based cryptography provides sufficient security with less computation and energy consumption.


5.5 Future work

DNA mapping-based encryption and decryption provide encryption in a smaller number of rounds; in this paper, it is achieved in a single round, which proves economic in terms of computation and energy consumption. Bluetooth requires a cryptographic system that needs a smaller amount of energy and computational power, such as DNA-based cryptography, which can complete the whole encryption in a single round. Bluetooth uses a stream cipher, which sends the data as a whole; if a block cipher were used, every block could be encrypted with a different mask, making it more difficult to crack. The use of DNA codons for cryptography also uses less memory, as every single alphabet character is converted into a codon of length 3. The genetic codons used here are only those responsible for protein synthesis. There are many more genes, and more are yet to be found; these can also be used for this process. The Human Genome Project shows that about 30,000 genes and millions of bases exist, providing a large number of codons to map, and this is only for human genes; there are many other organisms. If a database of genetic codons is maintained and used for the encryption process, it will be very hard to search the entire database, thus providing a very effective encryption mode with less computation and energy.

References

[1] A. Kurawar, A. Koul, V.T. Patil, Survey of bluetooth and applications, International Journal of Advanced Research in Computer Engineering & Technology 3 (2014) 2832-2837.
[2] H. Chen, B. Chan, Y. Joly, Privacy and biobanking in China: a case of policy in transition, The Journal of Law, Medicine & Ethics 43 (2) (2015) 726-742.
[3] D. Chalmers, Biobanking and privacy laws in Australia, The Journal of Law, Medicine & Ethics 43 (4) (2015) 703-713.
[4] Y. Zhao, X. Wang, X. Jiang, L. Ohno-Machado, H. Tang, Choosing blindly but wisely: differentially private solicitation of DNA datasets for disease marker discovery, Journal of the American Medical Informatics Association 22 (1) (2015) 100-108.
[5] M. Hosseini, D. Pratas, A.J. Pinho, Cryfa: a secure encryption tool for genomic data, Bioinformatics 35 (1) (2018) 146-148.
[6] T. Bose, M.H. Mohammed, A. Dutta, S.S. Mande, BIND - an algorithm for lossless compression of nucleotide sequence data, Journal of Biosciences 37 (4) (2012) 785-789.
[7] B.K. Mandal, D. Bhattacharyya, T.-h. Kim, An architecture design for wireless authentication security in bluetooth network, International Journal of Security and Its Applications 8 (3) (2014) 1-8.
[8] B.K. Mandal, D. Bhattacharyya, T.-h. Kim, A design approach for wireless communication security in bluetooth network, International Journal of Security and Its Applications 8 (2) (2014) 341-352.
[9] A. Biryukov, D. Khovratovich, I. Nikolić, Distinguisher and related-key attack on the full AES-256, Advances in Cryptology - CRYPTO 2009, Springer, 2009, pp. 231-249.
[10] C. Ashokkumar, R.P. Giri, B. Menezes, Highly efficient algorithms for AES key retrieval in cache access attacks, in: IEEE European Symposium on Security and Privacy (EuroS&P), IEEE, 2016, pp. 261-275.
[11] C. Bouillaguet, P. Derbez, O. Dunkelman, P.A. Fouque, N. Keller, V. Rijmen, Low-data complexity attacks on AES, IEEE Transactions on Information Theory 58 (11) (2012) 7002-7017.
[12] N. Daniels, A. Gallant, J. Peng, L. Cowen, M. Baym, B. Berger, Compressive genomics for protein databases, Bioinformatics 29 (2013) 283-290.
[13] B.K. Mandal, D. Bhattacharyya, X.-Z. Gao, Data security using 512 bits symmetric key based cryptography in cloud computing system, International Journal of Advanced Science and Technology 129 (2019) 1-10.
[14] J. Dunning, Taming the blue beast: a survey of bluetooth based threats, IEEE Security & Privacy 8 (2) (2010) 20-27.
[15] K. Haataja, P. Toivanen, Practical man-in-the-middle attacks against bluetooth secure simple pairing, in: 4th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM'08), 2008, pp. 1-5.
[16] T.-C. Yeh, J.-R. Peng, S.-S. Wang, J.-P. Hsu, Securing bluetooth communication, International Journal of Network Security 14 (4) (2012) 229-235.
[17] P. Cope, J. Campbell, T. Hayajneh, An investigation of bluetooth security vulnerabilities, in: Proceedings of the 7th IEEE Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 9-11 January 2017.
[18] A. Harris, V. Khanna, G. Tuncay, R. Want, R. Kravets, Bluetooth low energy in dense IoT environments, IEEE Communications Magazine 54 (2016) 30-36.
[19] S.S. Hassan, S.D. Bibon, M.S. Hossain, M. Atiquzzaman, Security threats in bluetooth technology, Computers & Security 74 (2017) 308-322.
[20] J. Padgette, K. Scarfone, L. Chen, Guide to bluetooth security, NIST Special Publication 800-121 (2012) 25.
[21] T.R. Mutchukota, S.K. Panigrahy, S.K. Jena, Man-in-the-middle attack and its countermeasure in bluetooth secure simple pairing, 157, Springer, 2011, pp. 367-376.
[22] T. Panse, P. Panse, A survey on security threats and vulnerability attacks on Bluetooth communication, International Journal of Computer Science and Information Technologies 4 (5) (2013) 741-746.
[23] S. Peng, S. Yu, A. Yang, Smartphone malware and its propagation modelling: a survey, IEEE Communications Surveys & Tutorials 16 (2) (2014) 925-994.
[24] Bluetooth for unclassified use: guidelines for developers, Technical Report, National Security Agency (NSA), May 2015.
[25] M. La Polla, F. Martinelli, D. Sgandurra, A survey on security for mobile devices, IEEE Communications Surveys and Tutorials 15 (1) (2013) 446-471.
[26] W. Albazrqaoe, J. Huang, G. Xing, Practical bluetooth traffic sniffing: systems and privacy implications, in: 14th Annual International Conference on Mobile Systems, Applications, and Services, New York, USA, ACM, 2016, pp. 333-345.
[27] How bluetooth technology works, 2016.
[28] J. Wright, J. Cache, Hacking Exposed Wireless: Wireless Security Secrets & Solutions, third ed., McGraw-Hill Education, New York, 2015, pp. 189-324.
[29] A. Hilts, C. Parsons, J. Knockel, Every step you fake: a comparative analysis of fitness tracker privacy and security, University of Toronto, Toronto, ON, 2016; T. Wolverton, Bluetooth may be the key to your future smart home, in: The Mercury News, San Jose, CA, Digital First Media, 2014.
[30] G. Grispos, W.B. Glisson, J.H. Pardue, M. Dickson, Identifying user behavior from residual data in cloud-based synchronized apps, Journal of Information Systems Applied Research 8 (2) (2015) 4-14.
[31] R. Shetty, G. Grispos, K.K.R. Choo, Are you dating danger? An interdisciplinary approach to evaluating the (in)security of android dating apps, IEEE Transactions on Sustainable Computing (2017) 1.
[32] W.B. Glisson, T. Storer, M. Campbell, A. Blyth, G. Grispos, In-the-wild residual data research and privacy, Journal of Digital Forensics, Security and Law 11 (1) (2016) 7-36.
[33] A. Jedda, A. Casteigts, G.-V. Jourdan, H. Mouftah, Bluetooth scatternet formation from a time-efficiency perspective, Wireless Networks 20 (5) (2014) 1133-1156.
[34] V. Tsira, G. Nandi, Bluetooth technology: security issues and its prevention, International Journal of Computer Applications in Technology 5 (2014) 1833-1837.
[35] P. Satam, S. Satam, S. Hariri, Bluetooth intrusion detection system, in: Proceedings of the 15th IEEE International Conference on Computer Systems and Applications, Aqaba, Jordan, November 2018.
[36] V. Dubey, K. Vaishali, N. Behar, G.A. Vishwavidyalaya, Review on bluetooth security vulnerabilities and a proposed prototype model for enhancing security against MITM attack, International Journal of Research Studies in Computer Science and Engineering (2015) 69-75.
[37] S. Pallavi, V. Narayanan, An overview of practical attacks on BLE based IOT devices and their security, in: Proceedings of the 5th IEEE International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, March 2019, pp. 15-16.
[38] R. Jhingran, V. Thada, S. Dhaka, A study on cryptography using genetic algorithm, International Journal of Computer Applications 118 (20) (2015).
[39] M. Kumar, A. Aggarwal, A. Garg, A review on various digital image encryption techniques and security criteria, International Journal of Computer Applications 96 (13) (2014).


Chapter 6

Estimation of the satellite bandwidth required for the transmission of information in supervisory control and data acquisition systems Marius Popescu and Antoanela Naaji Faculty of Economics, Computer Science and Engineering, “Vasile Goldis” Western University of Arad, Arad, Romania


List of abbreviations

AI: analog inputs
AO: analog outputs
APDU: application protocol data unit
ASDU: application service data unit
ATM: asynchronous transfer mode
BW: bandwidth
CDMA: code division multiple access
CI: counter inputs
CPE: customer premises equipment
DAMA: demand assignment multiple access
DI: digital inputs
DO: digital outputs
DVB: digital video broadcast
DVB-RCS: digital video broadcasting-return channel via satellite
FDMA: frequency division multiple access
GOC: global operation center of the VSAT provider
GPRS: general packet radio service
I/O: inputs/outputs
IP: information points (abbreviation for all inputs/outputs)
LAN: local area network
LNB: low noise block downconverter
MPLS: multiprotocol label switching
PMS: process modeling software
RCS: return channel satellite
RTU: remote terminal unit
SCADA: supervisory control and data acquisition
TCP/IP: transmission control protocol/internet protocol
TDM: time division multiplexing
TDMA: time division multiple access
VPN: virtual private network
VSAT: very small aperture terminal

Applications of Computational Intelligence in Multi-Disciplinary Research. © 2022 Elsevier Inc. All rights reserved.






6.1 Introduction

In order to keep the parameters of a monitored process within certain limits, and to operate installations safely and efficiently, computer-assisted management structures such as supervisory control and data acquisition (SCADA) systems have been designed. SCADA systems are used for monitoring and acquiring data from technological processes, and they are made up of interconnected components of different natures. A SCADA system gives the beneficiary the opportunity to monitor the operation of the network in real time, improving the ability to control and to react quickly and efficiently to any potential risk of interruption of its activity. SCADA systems thus contribute to increasing operational safety as well as to better financial and operational management [1,2].

The communication system for a SCADA system is, in general, an IP-based network in which each node has its own static IP address and which allows two-way exchange between a Main Central Dispatch and the other locations. Such a system is formed on two levels, which have a common contact point: the first level connects the Main Central Dispatcher, the Secondary Central Dispatcher, and the local regional centers, and the second is formed by the connection between each work point and one of the Central Dispatches.

The satellite is an attractive option for business and government activities that cannot benefit from the usual communications services due to their geographical position [3]. For example, the internet network must provide as much coverage as possible, overcome the effects of long distances and of isolated, hard-to-reach areas, and be deployed as quickly as possible. These requirements are difficult to meet with terrestrial infrastructure but are well suited to satellite technologies: the satellite represents an infrastructure independent of terrain and distance.
Satellites are by their nature robust and provide long availability compared with terrestrial solutions (optical fiber, metallic cable, or wireless links). Remote locations can therefore benefit from satellite access where terrestrial communication infrastructures cannot reach. Satellite communications technologies have the potential to accelerate the availability of internet services; the ultimate goal of satellite networks is to support the applications and services available on terrestrial networks [4]. These applications and services produce traffic with a wide variety of requirements regarding network resources and quality of service, and they also require the integration of the terminals used in telecommunications.

Satellite systems compete with cellular services and digital radio networks in bands ranging between 0.1 and 30 GHz (microwaves). Satellite communications are unique in their ability to produce and receive interference over large areas, so the operation of ground and air stations raises the issue of frequency coordination: satellite systems are sources of interference for ground stations operating on the same frequencies [5]. There are important differences between satellite and terrestrial radio communications, such as:



- the coverage area of a satellite communications system far exceeds that of a terrestrial radio communications system;
- transmission costs are independent of distance within the satellite coverage area;
- point-to-point, broadcast, and multicast applications are directly available;
- very high transfer rates are available to the user;
- even if the satellite communications links are interrupted for a short time, the quality of the transmission is very high;
- in many cases a terrestrial transmission station can receive its own transmission.

In this chapter, we propose and validate an algorithm, in tabular form, for calculating the satellite bandwidth needed at the design stage when additional remote terminal units (RTUs) are introduced into a SCADA system. The chapter is organized as follows: Sections 6.2 and 6.3 describe the SCADA system and very small aperture terminal (VSAT) networks; Section 6.4 presents the proposed algorithm for estimating the satellite bandwidth and the results of evaluating the algorithm in a case study, which are also discussed; Section 6.5 identifies some challenges and future work; and Section 6.6 draws some conclusions.
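The chapter's tabular algorithm itself is presented in Section 6.4; as a rough illustration of why such an estimate scales with the number of RTUs, the following sketch computes an inbound bandwidth requirement. All message sizes, overheads, polling rates, and the safety margin are illustrative assumptions, not the values used by the chapter's algorithm.

```python
# Rough sketch of a satellite bandwidth estimate for a SCADA network.
# The per-point payload size, protocol overhead, polling interval, and
# margin below are illustrative assumptions only.

def required_bandwidth_bps(n_rtus, io_points_per_rtu, bytes_per_point=8,
                           protocol_overhead_bytes=64, poll_interval_s=10,
                           margin=1.5):
    """Estimate inbound bandwidth in bits per second.

    Each RTU reports all of its I/O points once per polling cycle;
    a fixed per-message protocol overhead and a safety margin are added.
    """
    bytes_per_poll = io_points_per_rtu * bytes_per_point + protocol_overhead_bytes
    bits_per_second = n_rtus * bytes_per_poll * 8 / poll_interval_s
    return bits_per_second * margin

# Adding RTUs scales the estimate linearly, which is why a tabular
# algorithm can be extended row by row when new RTUs are installed.
base = required_bandwidth_bps(n_rtus=50, io_points_per_rtu=100)
extended = required_bandwidth_bps(n_rtus=60, io_points_per_rtu=100)
```

Because the estimate is linear in the number of RTUs, adding 10 RTUs to a 50-RTU network raises the required bandwidth by exactly 20% under these assumptions.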


6.2 Supervisory control and data acquisition systems

The main components of a SCADA system are the following:

- measuring components, which can be transducers intended for pressure, flow, or temperature measurement, etc.;
- action and automation components, which can be controlled remotely;










- hardware components: servers, workstations, printers, monitors, synoptic displays, intelligent process management modules, programmable logic control modules, storage units, etc. Various pieces of equipment are used in SCADA systems, the role of each being well determined:
  - data acquisition equipment that takes the parameters from the measurement components of the process, which can be a PC, a controller, or, generically, a remote terminal unit (RTU);
  - application servers that centralize, in real time, the data acquired from the process, create the application database, and ensure, in real time, all the specific functions for supervising a process;
  - workstations for operating personnel, management departments, and technical and economic services that analyze the data provided by the system in real time;
  - printers for reports and protocols;
- software components: operating systems, process data acquisition systems, database management systems, simulation programs, communications programs, data archiving/restoration programs, etc.; they provide the means of tracking, visualization, and data processing and the communication between the different elements of the SCADA system;
- communication components: communications can be carried out in different ways, such as over a local area network (LAN), telephone lines (rented or own), terrestrial radio links, satellite links, or a virtual private network (VPN). As the communications carry the vital data flow of the system, redundant means of communication are used, as the case may be, to prevent partial or total system failure.

The structure of SCADA systems can be hierarchical, on functional levels (Fig. 6.1):

- 1st level: the level at which the primary equipment is located;
- 2nd level: the level at which data acquisition, equipment protection, time monitoring, and synchronization are performed;
- 3rd level: where the process management equipment is located, represented by the central computer, printers for event printing, and the control center of the communication network.

The main functions of a SCADA system are the following [6,7]:











- Data acquisition and exchange: provides the interface between the computer system intended for operational management and the data acquisition equipment and external computer systems.
- Data processing: includes the processing of scanned analog data, converting them into engineering units and verifying that they fall within predetermined limits; the processing of scanned accumulated data (energies), which consists of converting the number of impulses; and real-time calculations: summations, subtractions, multiplications, divisions, hourly averages, hourly maxima and minima, and balances.
- The post-factum log, through which instantaneous data readings ("snapshots") are taken every 10 seconds. When a preset event occurs, the circular file freezes, retaining the 30 consecutive snapshots taken every 10 seconds; these data form a "backup set."
- The historical information system, with which the databases are updated and completed. Generally, a commercially available relational database management system is used.
- Remote control or remote adjustment in installations: allows the operation of the primary equipment in the gas transmission network.
- Marking: the action by which the operator visually draws attention to an equipment item on a scheme. The signaling is activated when a command must not be executed or must be executed carefully.
- The user interface: provides the operative command personnel with general information on the topology and status of the controlled objective and is realized through human-machine interfaces. The display panel allows a much better analysis of the situations involved.
- Alarm processing and management: the system recognizes inadequate operating conditions of the electrical equipment and networks and optically and/or acoustically warns the staff currently on shift about what happened.
- Sequential recording of events: allows selecting a series of elements in the transport network for sequentially recording their changes of state. Messages from the sequential recording of events are treated separately from those related to normal status changes, as they are not part of the alarm-handling process.
- Password processing: manages the access of potential users to the operative management of the processes or to certain areas of the computer system.
- Monitoring the state of the computer system: monitors the operating status of the different components and of the computer system as a whole.

The management system has to be redundant, consisting of two computers, one of which is in operation while the other is in hot reserve. Communication is done through a switch connected to the two computers. Data recording and acquisition are done automatically, with the data subsequently stored [8].
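The post-factum snapshot mechanism described above (a reading every 10 seconds, with the circular file frozen to 30 consecutive snapshots when a preset event occurs) can be sketched with a bounded buffer. The class and field names below are illustrative, not part of any SCADA product.

```python
from collections import deque

class SnapshotLog:
    """Circular post-factum log: keeps only the most recent snapshots.

    A snapshot is taken every 10 seconds; when a preset event fires,
    the circular buffer is frozen and its contents become the backup set.
    """
    def __init__(self, depth=30):
        self.buffer = deque(maxlen=depth)  # old snapshots fall off the front
        self.backup_set = None

    def take_snapshot(self, readings):
        self.buffer.append(dict(readings))

    def on_event(self):
        # Freeze: copy the 30 consecutive snapshots as the backup set.
        self.backup_set = list(self.buffer)

log = SnapshotLog()
for t in range(0, 400, 10):            # 40 snapshots, 10 s apart
    log.take_snapshot({"t": t, "pressure": 4.2})
log.on_event()
```

The `maxlen` argument of `deque` gives exactly the circular-file behavior: once 30 snapshots are stored, each new reading silently discards the oldest one.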



FIGURE 6.1 Architecture of a supervisory control and data acquisition system on levels: (A) 1st level—architecture of the main central dispatch; (B) 2nd level—regional local center architecture; (C) 3rd level—locations architecture.

Interconnections are achieved through distinct technological solutions (Fig. 6.2) [9], such as:


- the connection between each regional local center and the Main and Secondary Central Dispatchers is achieved through a Multiprotocol Label Switching (MPLS) connection, using the national telecommunication system (level 1). MPLS is a routing technique in telecommunications networks that directs data from one node to the next based on short path labels rather than long network addresses; routing-table lookups are thus avoided and traffic flows are accelerated [10];
- the connections between each ground location and the Main Central Dispatch are made by satellite (level 2);
- system redundancy is provided at the first level by the MPLS service provider and at the second level by a General Packet Radio Service (GPRS) solution.

For example, at level 1 of the connections, the bandwidth between the Main Central Dispatcher and the Secondary Central Dispatcher must be 100 Mbit/s; the bandwidth between the other locations must be 10 Mbit/s.



FIGURE 6.2 Interconnections of a supervisory control and data acquisition system.

The corresponding connection for level 2, using VSAT satellite links, has to ensure secure, real-time communication in hard-to-reach areas. The detection and localization software is integrated into a general software package called Process Modeling Software (PMS), a complex model that works in real time. The PMS module must run on the PMS workstations and the SCADA servers: the SCADA server is responsible for collecting, in real time, the data received from the field, and the PMS is a client connected to this server. The PMS will be installed in both the Main and the Secondary Dispatches.


6.3 The very small aperture terminal networks

6.3.1 The satellite communication systems

The advantages of using satellite transmissions are:



- dual system: the users will be able to transmit and receive information either via the terrestrial cells or, in their absence, directly by satellite;
- the users of conventional terrestrial (mobile) services will be able to switch to the satellite service when they are outside the coverage of the classical network;
- the satellite connection can provide routes for calls blocked on the terrestrial network due to possible shading by various geographical structures;
- global multifunctional system: interfaces between all existing terrestrial (mobile) systems, such as GPRS, GSM, and AMPS; in addition to voice communications, data communications, paging, and faxing are carried out by implementing external data ports and internal memory buffers;
- global system that also includes access to the Internet, video conferencing, and multimedia.

A satellite network consists of one or more access nodes (gateways) and a number of satellite terminals with transmission and reception capabilities [11]. The network uses the resources of one or more communication channels



(transponders). The characteristics of the satellite terminals and of the access nodes vary according to the services for which they are intended. Thus communities of domestic users require cheap, highly integrated terminals and terrestrial access nodes, while professional users of satellite services adopt expensive terminal solutions with the ability to aggregate network traffic. Satellite networks are used to provide two types of basic services: TV and telecommunications (associated with full-duplex, symmetrical, or asymmetrical services).

The network is based on a ground segment and uses the resources available on board the satellite. The ground segment is composed of the user segment and the control and management segment. The user segment includes the satellite terminal, connected to the customer premises equipment (CPE) directly or via the LAN, and access stations, such as the hub or gateway, generically called access network terminals, connected to terrestrial networks:

- satellite terminals are ground stations connected to the CPE, which send or receive signals to or from the satellite;
- CPE include telephone devices, TVs, and computers, which are independent of the network technology and can be used in both satellite and terrestrial networks;
- access gateways perform interconnection functions between the satellite and terrestrial networks.

The control and management segment consists of:

- a mission and management (administration) center of the network, which ensures high-level functions for all satellite networks in the satellite coverage area;
- interactive network management centers, which provide functions for a single satellite network;
- real-time control centers for the connectivity and resources associated with the terminals that constitute the network.





In a network, there can be two types of connections: unidirectional, through which one or more stations transmit and the others only receive, and bidirectional, through which the ground stations both transmit and receive. Two-way links are associated with star and mesh topologies and are used to transport full-duplex services. The connection from Earth to the satellite is called the ascendant path (uplink), and the one from the satellite to Earth is called the descendent path (downlink). The link between the satellite and a fixed ground station (gateway) is called the feeder link.

In a network with a star topology, each node can only communicate with the central node (hub). A multistar topology has several central nodes, and all other nodes can only communicate with the central nodes. In general, the central node is a large ground station (the antenna size varies between a few meters and 10 m), with high emission power and a high gain factor. From the power point of view, the star network is more tolerant (fewer restrictions) than the mesh network with a transparent satellite, due to the high-capacity central station. In the star topology considered here [12], the traffic is concentrated in the Main Central Dispatch and the Secondary Central Dispatch, for reasons of communication redundancy. Satellite communication is also bidirectional: the bidirectional systems are based on the Digital Video Broadcasting-Return Channel via Satellite (DVB-RCS) standard for the return channel, allowing smaller parabolic antenna sizes [13].

Satellites operate in an analog manner, and the frequency bands reserved for satellite communications are given in Table 6.1 [14]. Access to satellite communications is very suitable for rural areas; however, the major disadvantages are the costs (especially for two-way systems) and delays of about 600 ms (round trip).

TABLE 6.1 Frequency bands allocated to satellite communications.

Band | Domain of frequency (GHz) | Total band length (GHz) | Destination
L    | 1-2                       | 1                       | MSS (Mobile Satellite Service)
S    | 2-4                       | 2                       | MSS, NASA, remote space research
C    | 4-8                       | 4                       | FSS (Fixed Satellite Service)
X    | 8-12                      | 4                       | FSS military, ground exploration, forecast satellites
Ku   | 12-18                     | 6                       | FSS, BSS (Broadcast Satellite Service)


The delivery of satellite-band services faces the problem of latency (inherent delays between sending and receiving a message, in the range of 540-800 ms in a typical environment). For some applications, such as email and web browsing, latency is not an impediment [15]; as a result of the Transmission Control Protocol (TCP) [16], latency has no noticeable effect except for real-time applications. Because the latency depends on the radius of the satellite's orbit (its distance from Earth), the latency of a satellite in low orbit is smaller than that of a geostationary satellite.

Congestion of the communication network is the deterioration of the quality of service produced by overloading the nodes of the network through which large quantities of data are transported. Congestion has several causes: the routers are not fast enough (their processors are too slow and fail to empty their queues in a timely manner, even if the network is sufficiently free), several streams need to be sent on the same output line (so the queues build up again), or the buffers are not large enough and packets are lost. Under very heavy traffic, the situation can become so aggravated that hardly any packets are delivered at all.

A transponder is a satellite-based electronic system required for converting uplink signals into downlink signals and may contain a transmitter/receiver, an amplifier, and a frequency and polarization converter. The channels used for the satellite network are normally of high capacity, often being limited by the terminal equipment. A satellite communications network can allow the simultaneous addressing of all its stations, a feature that is a considerable advantage when distributing value tables to all or only part of the network terminals.
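The dependence of latency on orbit radius can be made concrete with a propagation-delay calculation. The sketch below approximates the slant range by the orbit altitude (satellite at zenith); the gap between the resulting pure propagation delay and the 540-800 ms typical figure quoted above is accounted for by processing, coding, and queuing.

```python
# One-way and round-trip propagation delay as a function of orbit altitude.
# Slant range is approximated by the altitude (satellite at zenith).
C = 299_792_458.0  # speed of light, m/s

def round_trip_ms(altitude_m):
    # Terminal -> satellite -> hub is one hop (up + down);
    # a round trip traverses the hop twice.
    one_way_s = 2 * altitude_m / C
    return 2 * one_way_s * 1000.0

geo = round_trip_ms(35_786_000)   # geostationary orbit: ~477 ms
leo = round_trip_ms(550_000)      # a typical low Earth orbit: a few ms
```

A geostationary round trip already costs roughly half a second in propagation alone, which is why the text singles out real-time applications as the ones affected.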

6.3.2 Architecture of very small aperture terminal networks

VSAT networks have various characteristics, such as operating in one or more frequency bands in the exclusive area of the following bands allocated to the FSS (Fixed Satellite Service): 14.00-14.25 GHz (Earth to space) and 12.50-12.75 GHz (space to Earth), or in the shared areas of the following bands allocated to the FSS: 14.25-14.50 GHz (Earth to space) and 10.70-11.70 GHz (space to Earth). All these networks operate with geostationary satellites, and the terminals are designed for unattended operation, with an antenna diameter of up to 3.8 m. The European Telecommunication Standards Institute provides specifications standardizing the characteristics of VSAT networks operating as part of a satellite communications network (in star, mesh, or point-to-point configuration).

The network is coordinated by a ground station called the hub. The architecture is generally of star type and offers great flexibility through single-hop as well as double-hop connections. In a single-hop connection, data is transferred between the hub and a VSAT terminal; a double-hop connection allows data transfer between two VSAT terminals through the hub. In the case analyzed in this chapter, the satellite connection is of the double-hop type; the communication path between a SCADA location and the Main/Secondary Dispatcher is: SCADA location -> satellite -> central hub of the VSAT operator -> satellite -> Main Dispatcher. Direct connections between VSAT terminals are possible only if the power received by the satellite is sufficient and, at the same time, the satellite has sufficient power to relay the signal. Because the system is kept economical, with VSAT stations using antennas of small diameter (below 2.5 m) and low emission power, direct connections between stations are impossible. The hub is equipped with a large antenna, capable of receiving the low-power signals of a VSAT relayed by the satellite.
It transmits or retransmits data at high power so that they can be received by the small antennas of the VSAT terminals. The hub is used for routing traffic and ensures network switching. In conclusion, the hub performs processing and switching functions that are not available on the satellite, which is generally a classic repeater.

VSAT networks operating in the 4-6 GHz bands face serious interference problems with neighboring satellites and with terrestrial microwave networks that use the same frequency range. This has required the use of spread-spectrum transmission techniques, with the satellite transmission capacity being used less efficiently. Interference is reduced in the 12-14 GHz bands, but here the connections are affected by atmospheric phenomena, in particular rain (depolarization, attenuation).

In the double-hop mode, one VSAT terminal accesses another terminal through the hub. All signals are received by the hub, which acts as a processor, decoding, demultiplexing, regenerating, multiplexing, encoding, and transmitting all data transmitted in the network. Typical modulations used in this type of network are binary phase-shift keying and quadrature phase-shift keying. The multiple access modes are time division multiple access (TDMA), frequency division multiple access (FDMA), code division multiple access (CDMA), and demand assignment multiple access (DAMA), depending on the particularities of the network.
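In the TDMA mode listed above, bursts from terminals at different slant ranges must arrive at the satellite aligned to their assigned time slots, so a more distant site has to transmit earlier than a nearer one (the situation Fig. 6.6B illustrates). The slot duration and slant ranges below are illustrative assumptions.

```python
# TDMA timing-advance sketch: a terminal subtracts its propagation time
# from the slot boundary so its burst arrives at the satellite on time.
C = 299_792_458.0   # speed of light, m/s
SLOT_S = 0.002      # 2 ms slot duration (assumed, for illustration)

def transmit_time(slot_index, slant_range_m, frame_start_s=0.0):
    """When must a terminal start sending so that its burst reaches the
    satellite exactly at the start of its assigned slot?"""
    slot_arrival = frame_start_s + slot_index * SLOT_S
    return slot_arrival - slant_range_m / C   # timing advance

near = transmit_time(0, 36_000_000)   # site closer to the satellite
far  = transmit_time(0, 41_000_000)   # site farther from the satellite
```

The farther site's start time is earlier by exactly the difference in propagation delay, which is how bursts from geographically scattered VSATs avoid colliding at the transponder.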



VSAT is a satellite system for home or business users: by connecting the VSAT terminals to a terrestrial hub, a network can be created at low cost to provide communications only through this hub (star network configuration, Fig. 6.3). The satellite network subsystem is made up of a main antenna connected to the hub, a satellite, and numerous smaller, geographically dispersed terminal antennas. At the customer's location, the VSAT system includes a receiver (satellite router), a low noise converter, a frequency converter (block upconverter), and a parabolic antenna [17]. The terrestrial network subsystem consists of the links that lead from the user's central block to the hub; this network of specialized links has all the characteristics of a classic network and is configured accordingly. The hub converts the protocols and carries the necessary information flow to the recipient on the satellite channel, being the main element of the communications system and containing the elements through which input/output (I/O) connections are created to and from the terminals (Fig. 6.4).

FIGURE 6.3 VSAT configuration (a monitoring center, DC, input/output points, RTU-remote terminal unit), illustrating the possibilities of communication with the public terrestrial telecommunications networks.

FIGURE 6.4 Explanation regarding the component elements of satellite communication (VSAT).



A hub may be passive (performing the same function as a repeater [18]), smart (having built-in software to configure each port of the hub and to monitor the traffic passing through it), or switching (reading the destination address of each packet and directing it only to the port indicated by the address). The terminal station comprises three elements: the antenna, the external radio unit, and the internal radio unit. In the case of the VSAT network, the antenna has an average diameter of up to 2 m (a diameter smaller than 1 m will reduce the data flow below 9600 baud) [19]. The VSAT antennas are equipped with an external radio unit (located at the top of the antenna) that allows data transmission and reception. The internal radio unit is a modulator/demodulator of the signal taken by the antenna and introduced into the computer system for information processing.

The transceiver sends and receives signals from a transponder situated on a satellite. By transponder (transmitter-responder) we mean the receiving-transmitting unit of telecommunication equipment (this includes other components, such as power, control, and cooling units); transponders are found on satellites but also in some radio-relaying systems.

The satellite network is traversed by two data streams (Fig. 6.5A): a stream called outbound, whose direction is from the hub to the antenna, and a stream called inbound, whose direction is from the antenna to the hub. Each of these data streams is in turn composed of two different secondary streams (Fig. 6.5B): an upward data stream on the lifting channel (uplink), which links the hub to the satellite, and a downward data stream on the descending channel (downlink), which connects the antennas to the hub. Different frequencies are used for the uplink and downlink to prevent the transponders from entering into oscillation.
Therefore the VSAT network must be seen as a juxtaposition of two totally different unidirectional subnets: the first is an "outbound" network, going from the hub to the VSAT stations, and the second is an "inbound" network from the stations to the hub.

TDMA is a technique used in the star satellite network topology, where all remote stations are dynamically connected (in time division) via satellite to a central hub [20]. Two or more terminal locations can communicate with each other through the central hub, with a double round-trip delay (Fig. 6.6A). This network topology is extremely flexible and cost effective, making it the ideal choice for networks that, like SCADA networks, must transmit real-time information. VSAT terminals that are farther from the satellite, such as site 1 (Fig. 6.6B), must transmit earlier than site 2, which is closer to the satellite.

A transmission operation is carried out in three stages. The first stage is the preparation phase, which consists of addressing all the stations on the transmission list and establishing a dialog with each of the addressed stations. Once this phase is completed, the actual transmission phase can be carried out, in which the communication proceeds without any control over the information flow. In principle, the transmitter sends the data in block form, together with a control block containing an error-detecting code and a sequence number. Each receiver treats the received blocks by checking the error-detecting code and storing the valid blocks. Because, in the case of a single transmission, there is a probability that a large number of stations will not receive all blocks correctly, this operation

FIGURE 6.5 Explanation of the flows and subflows of data transmitted by the satellite network.



FIGURE 6.6 Explanation regarding the transmission of information.

will be repeated several times. At each new retransmission, the receiving stations focus only on the blocks they are missing. The third phase consists of querying all the stations, one by one.

In the case analyzed in this chapter, in all SCADA locations the VSAT satellite communication system consists of a router-type indoor unit, a low noise block downconverter (LNB) type outdoor unit, a parabolic antenna (in most locations, as well as at the Main and Secondary Dispatchers), and RF cable and connectors [21,22]. Depending on the required bandwidth, VSAT systems are somewhat bulky (antennas 1-2 m in diameter) but easy to install; VSAT terminals with very small antennas (diameter less than 1 m) are intended for small-volume traffic.
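The block-based broadcast with error-detecting codes and selective retransmission described above can be sketched as follows. The framing (a CRC-32 checksum per block) and the corruption of a single block are illustrative assumptions standing in for whatever code and loss pattern occur in practice.

```python
import zlib

# Sketch of the three-phase broadcast described in the text: data is sent
# in numbered blocks with an error-detecting code; receivers keep valid
# blocks and the transmitter repeats only the ones still missing.

def make_block(seq, payload: bytes):
    return {"seq": seq, "payload": payload, "crc": zlib.crc32(payload)}

def receive(block, store):
    # A receiver stores a block only if its checksum verifies.
    if zlib.crc32(block["payload"]) == block["crc"]:
        store[block["seq"]] = block["payload"]

blocks = [make_block(i, bytes([i]) * 4) for i in range(5)]
store = {}

# First transmission: block 2 is corrupted in transit.
for b in blocks:
    if b["seq"] == 2:
        b = dict(b, payload=b"XXXX")   # bit errors -> CRC mismatch
    receive(b, store)

missing = [b for b in blocks if b["seq"] not in store]
# Retransmission round: only the missing block is resent.
for b in missing:
    receive(b, store)
```

After the first pass the receiver holds four of the five blocks; the retransmission round carries only the single corrupted block, which is the economy the phased scheme is designed to achieve.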

6.3.3 Connectivity

Connectivity characterizes the way the nodes of the communication network are connected: the way a system or telecommunication network provides connections between different users is called connectivity, and its primary forms are point-to-point, point-to-multipoint, multipoint-to-point, and multipoint-to-multipoint connectivity. Obviously, the links must be bidirectional. Point-to-point connectivity ensures the connection between two fixed points; it was the first system used but is no longer in use. Point-to-multipoint (broadcast-type) connectivity allows unidirectional transmissions from one station to many other stations that can only receive. Multipoint-to-point connectivity allows a bidirectional connection from multiple stations to a single main station. Multipoint-to-multipoint connectivity is achieved when multiple stations share the satellite resources for both transmitting and receiving (this technique is called multiple access).

In a network, connectivity concerns both the level of services and the resources on board. The connectivity of the services defines the type of connections needed between the network equipment, the satellite terminals, and the access nodes for providing the requested services. The onboard connectivity defines how the network resources are shared on board to meet the service level. The onboard connectivity is provided at the following levels:



- beam: the entire frequency resource allocated to a beam is switched on board; it corresponds to one or more channels;
- channel: equivalent to a frequency resource transmitted through the transponder;
- carriers: they can be FDMA, transmitted via a satellite terminal or Earth station, or MF-TDMA (Medium Frequency TDMA: TDMA access on medium-frequency carriers), shared by several satellite terminals;
- temporal location: corresponds to time division multiplexing (TDM) or TDMA;
- burst, packet, or cell: corresponds to any level-2 packet, up to the Internet Protocol datagram.



Depending on the onboard processing capabilities and the network layer, the techniques used for beam interconnection are [23] transponder hopping (if there is no onboard processing capability), onboard switching (with transparent or regenerative processing), and beam scanning. Onboard connectivity with transponder hopping splits the band allocated to the system into as many subbands as there are beams. A filter bank on board separates the carriers of the subbands; each filter is connected to a transponder and then to the destination beam antenna. The technologies that allow channel selection are analog (using an intermediate-frequency switching matrix) and digital (using baseband processing, in particular a transparent processor). This technique can be used in practice only with digital transmissions and TDMA access. Regarding the protocol stack, two main scenarios can be imagined. The first uses encapsulation at the protocol level, a simple approach that allows the use of an array of terminal types with a variety of standards, but it can become inefficient due to the additional per-packet overhead. At the same time, the use of standard components is a simple approach and should reduce the preparation time for launching the system as a whole. The DVB-S standard can be considered a first example in this group. In fact, by allowing asynchronous transfer mode (ATM) and IP encapsulation on the MPEG-2 transport stream, the DVB-S standard can be used on the forward link for data transport to the user terminals, while the interactive (return) path is obtained via the public switched telephone network. The DVB-RCS standard specifies a satellite communications terminal (sometimes known as a Satellite Interactive Terminal or Return Channel Satellite Terminal) that supports a two-way Digital Video Broadcast (DVB) link for satellite communications.
In the DVB-RCS approach, the user terminal receives a standard DVB-S stream from the satellite, while the DVB-RCS standard provides the possibility of transmission from the user station to the satellite through the same antenna. The second approach to the protocol stack is based on a solution integrated with the ATM protocol stack. In this case, an S-ATM layer replaces the standard ATM layer, introducing the changes needed to address the specific aspects of the space segment in the cell header and in its functions. Below the S-ATM layer sit the medium access control layer, which can be based on MF-TDMA or CDMA with a DAMA protocol, and the physical radio layer.

6.3.4 Multiple access

Multiple access means the ability of a large number of stations to connect at the same time to the same transponder. In satellite communications systems, multiple access is an essential problem: this technique allows the efficient exploitation of the satellite's ability to cover very large areas, and it influences the design and implementation of the systems, their flexibility, and their costs. In frequency-division multiple access, all users access the satellite at the same time, but each in its own frequency band (its own carrier); sometimes multiple carriers (bands) are allocated to the same high-capacity user. In time-division multiple access, all users use the same frequency band (the same carrier), but each transmits only during a certain time interval; the intermittent nature of the process makes it very suitable for digitally modulated transmissions. In code-division multiple access, several stations simultaneously transmit orthogonally coded spread-spectrum signals occupying the same frequency band; a station receives all signals, decodes them, and recovers only the signal intended for it. The TDMA principle is illustrated in Fig. 6.7, where several ground stations transmit signals to the satellite in time-limited sequences. The satellite retransmits the sequences in succession to all ground stations; obviously, the frequencies of the transmit and receive carriers are different. Synchronization is accomplished by establishing a reference ground station, which emits a time signal to the entire system; the sequences emitted by this station are a reference for all stations. Only data signals are transmitted in TDMA systems. In order to control the intervals between the sequences, the transmissions are organized into frames. In each frame, sequences (groups, bursts) with different contents are transmitted.
Each frame starts with one or two reference bursts, followed by traffic bursts from various ground stations; all bursts are separated by guard time intervals. All bursts include a starting sequence called a preamble, which is necessary for the synchronization and for the preparation of the receiver.


Applications of Computational Intelligence in Multi-Disciplinary Research

FIGURE 6.7 The principle of multiple access with time division.

The reference bursts contain only this preamble. The first sequence in the preamble is intended for bit synchronization. The following sequence consists of a 48-bit word with good correlation properties, intended for group (burst) synchronization; from this moment, the transmitted data can be recognized. The following sequences contain the required control and service information: a telex sequence for exchanging service information, a sequence with information for error analysis, and sequences for data traffic control (voice signals and data signals); finally, a sequence for controlling the delays introduced by channel propagation is inserted. In traffic bursts, the preamble is followed by the traffic data. Note that the bursts come from different ground stations and also have different destination stations. Useful signals are voice, video, data, fax, etc., and a specific interface is provided for each signal type. Depending on the signal, the interfaces perform various functions. For analog signals, sampling, coding, and possibly time multiplexing of the channels are performed at transmission, with demultiplexing and restoration of the analog signal at reception. For data signals, storage and read-out at the clock rate required by the TDMA system are ensured at transmission, and storage and retransmission at the clock rate requested by the user at reception. In the terminal itself, signals are carried (in and out) only in the form of data (bit strings). All system operations, such as multiplexing and demultiplexing, establishing the length of the sub-bursts, and inserting the preambles into the bursts, are monitored by a TDMA controller accessible from the operator console and/or by a remote control system. One of the most important issues in TDMA is the bit synchronization that must be performed by all users of the network.
Synchronization at a ground station is performed in two phases: acquisition upon entering the system, and then tracking while the station is receiving bursts. Tracking is simplified when the same transponder is used with the same beam, because a station then receives all frames, both those intended for it and those for other stations. Thus a feedback loop, station to satellite to station, is established: the station receives its own bursts relayed by the satellite along with those intended for other stations. The satellite transponder is synchronized with the clock of the reference station (the time base). A new station entering the TDMA system must synchronize with the system clock, which it uses to establish a local reference. When a station enters the system, communication begins with the acquisition phase. First, the station emits reference bursts, placed anywhere in its frames, which contain nothing else. The satellite relays the bursts with the reference clock used by the system, placing the station's bursts somewhere in the frames. The station then looks for its own bursts, trying to determine their positions in the frames, which generally do not coincide with the expected ones. To do this, the station adjusts its clock phase according to estimates of the propagation delays, until the error between the predicted position and the measured one is sufficiently small. From here on, the station passes to the tracking phase and emits traffic bursts (with synchronization preambles). Obviously, the more precisely the propagation delays (due to the satellite's motion) are predicted, the faster the acquisition; in some cases, the satellite sends information about its position and position changes in real time, which serves for rapid acquisition. In the case analyzed in this chapter, satellite communication services are provided by operators offering national coverage. The services offered are of the guaranteed service-level agreement (SLA) band type.
This means that the performance parameters are established and guaranteed by contract (the SLA) and determine the optimal operating conditions for the company's applications, including availability (ensuring the transmission of data from the SCADA locations to the two dispatchers), packet loss, delays (transit delay), and jitter [24].



In the following section, for sharing the available transmission capacity among different users, the MF-TDMA multiple-access scheme was adopted. The DVB-RCS standard is designed to be independent of the frequency band, in that it does not specify which band or bands must be used, thus allowing the construction of a wide variety of systems; at present, the Ka band tends to be used for the uplink. The DVB-RCS standard also includes a number of security mechanisms.


6.4 Algorithm for estimating the satellite bandwidth

In this chapter, we propose a model based on an algorithm that estimates, with approximately 99% accuracy, the bandwidth required for the safe operation and full functionality of the VSAT communications network of the SCADA system. The main steps in building this model are: collecting, for each RTU, the digital and analog input data; knowing the message lengths and their transmission frequency; collecting the metering information; and determining the bandwidth for the station query. These steps are described in the following sections.
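These steps can be sketched in Python; the `RTU` fields and the helper function are illustrative (not from the chapter), while the change rates (0.1 %/s of all digital inputs, 1.5 %/s of all analog inputs) are the ones used later in the case study:

```python
from dataclasses import dataclass

# Minimal sketch of the model's first steps; names are illustrative.
@dataclass
class RTU:
    di_count: int       # digital input points
    ai_count: int       # analog input points
    di_msg_bytes: int   # message length for one DI change
    ai_msg_bytes: int   # message length for one AI change

def monitoring_bps(rtus):
    """Spontaneous DI/AI monitoring traffic, in bits per second."""
    bps = 0.0
    for r in rtus:
        bps += r.di_count * 0.001 * r.di_msg_bytes * 8   # 0.1 %/s change rate
        bps += r.ai_count * 0.015 * r.ai_msg_bytes * 8   # 1.5 %/s change rate
    return bps
```

The metering (CI) query and the station query contribute additional, independent traffic and are estimated separately below.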

6.4.1 Determining the bandwidth required for data transmission

For the SCADA system, the applied solution is the so-called "star" topology, which means that all data traffic is directed toward the central VSAT station [the Global Operation Center (GOC)], which acts as network administrator and gateway for all the connected VSAT terminals. The Satellite Service Provider guarantees that dedicated downstream and upstream bandwidth is permanently available; both are shared among all the remote locations (terminals) in a dynamic way, with traffic management and handling controlled by the GOC. The data transfer speed (v), also called the "(transmission) bandwidth," is the ratio of the amount of data transmitted (m) to the transmission time (t):

v = m / t    (6.1)

The unit of measurement in the International System of Units for the transfer speed is 1 bit per second (1 bps). The most commonly used multiple of the bps unit is the kilobit per second (kbps), which equals 10^3 bps. The internal memory capacity of the digital equipment used in communication systems is measured in bytes (one byte represents the capacity required to store one character). The message (m) exchanged between the two central dispatchers and the locations in the field is carried by the IEC 60870-5-104 protocol, one of the IEC 60870 standards that define the systems used for telecontrol (SCADA) in electrical engineering and power system automation applications; IEC 60870-5-104 was designed to standardize the communication between the SCADA system and the equipment of the ground stations. Aggregating multiple links increases the total effective bandwidth between two systems by using standard modems and lines of communication (Fig. 6.8). The ability to dynamically add and remove physical links allows a system to be configured to provide bandwidth only when needed.
This approach is usually called bandwidth on demand and allows paying for the additional bandwidth only when it is used [25]. There is a data traffic limit (called the "session bandwidth"), which is divided between participants and can be chosen based on a cost or on prior information about the available session width; often, it equals the sum of the nominal widths of each active transmitter. It is also known that the more operations are performed at the same time, the greater the bandwidth required. The hub manages the bandwidth, giving each terminal point in the communications network access to the communication channel. The basic SCADA protocols are Modbus, RP-570, and Conitel, and they are manufacturer dependent. Standard protocols are IEC 60870-5-101 or -104, Profibus, and DNP3, which are standardized protocols recognized by most SCADA manufacturers. Many of these protocols contain extensions for operation over Transmission Control Protocol/Internet Protocol (TCP/IP); however, the security required in practice suggests avoiding internet connections to reduce the risk of attacks. The standard IEC 60870-5-104 (also known as IEC 870-5-104) was launched in 2000 by the IEC (International Electrotechnical Commission). As its full name, "Network access for IEC 60870-5-101 using standard transport profiles," indicates, its application layer is based on IEC 60870-5-101.

FIGURE 6.8 The main connections between RTUs and the equipment associated with a location.

IEC 60870-5-104 allows communication between the control station and the substation through a standard TCP/IP network; TCP is used for connection-oriented, secure data transmission [26]. IEC 60870-5-104 limits the types of information and configuration parameters defined in IEC 60870-5-101, which means that not all functions available in IEC 60870-5-101 are supported by IEC 60870-5-104. The packets have fixed or variable length and contain an application service data unit (ASDU), also called a telegram [27]. IEC 60870-5 defines the information objects that are suitable for SCADA applications; each data type has a unique identification number. The information object types are grouped according to direction (monitoring or control) and information type (process information, system information, parameter, file transfer). The ASDU format contains two main sections: the data unit identifier (with fixed length) and the data. The data unit identifier defines the specific type of data, provides addresses to identify the specific identity of the data, and includes additional information about the transmission. Each ASDU can carry up to 127 objects. Each information object is addressed by an information object address (IOA) that identifies the specific data of a defined station. For each defined ASDU type, the IEC 104 standard specifies the format of the information object, that is, which information elements form such an object and how they are structured. Some information objects contain several information elements; for example, a measured-value object may contain three information elements: the normalized value (2 bytes), a quality descriptor (1 byte), and a binary timestamp (3 bytes). For this type of object, the transmission is spontaneous (cause code 3) or requested (cause code 5). Information elements are the building blocks used to convey information; the format and length of each information element differ and are given by the standard.
The standard describes how the coded values are interpreted. The biggest advantage of IEC 60870-5-104 is that it allows communication through a standard network, which permits simultaneous data transmission between multiple devices and services. The problems that remain to be resolved are the definition of communication with redundant systems or networks and, when the Internet is used, data encryption. In the case analyzed in this chapter, the communication between the central dispatcher and the locations in the field is carried out by the IEC 60870-5-104 protocol encapsulated in TCP, which means that the message is encoded in the IEC 60870-5-104 format and then in the TCP format; therefore the length of a message is the sum of the lengths of the IEC 60870-5-104 and TCP formats. TCP is connection-oriented, meaning that a handshake is used to initiate communication between systems; a data stream sent over a TCP connection is delivered reliably and in order to the destination, with transmission secured through sequence numbers and receipt acknowledgments. The main features of TCP are the following [28]:








- Reliability: TCP manages acknowledgments, retransmissions, and time-outs. Several attempts are made to deliver a message; if something gets lost on the way, the receiver asks for the lost part again. There is no middle ground: either no packets are missing or the connection is interrupted.
- Ordering: if two messages are sent in succession, they are received in the order in which they were sent. If packets arrive out of order, TCP stores the out-of-order data until all packets arrive, then reorders and delivers them to the application.
- Streaming: data is read as a byte stream; there are no markers indicating segment boundaries.

TCP sends three packets to initiate a connection to a socket (the three-way handshake); only then can it start sending data. TCP also provides reliability and congestion control. A TCP segment consists of the data bits to be sent and a header added to the data. The header of a TCP segment ranges from 20 to 60 bytes, of which up to 40 bytes are for options; if there are no options, the header is 20 bytes. The Internet Control Message Protocol (ICMP) is used by the equipment in a network to report to the source the problems that occurred during the transmission of a message; ICMP messages are sent using the basic IP header [29]. The first byte of the data section of the datagram is an ICMP type field, whose value determines the format of the rest of the data. Any field called "unused" is reserved for future extensions and must be set to zero by the sender; receivers should not use these fields (except when they are included in the checksum). In the case analyzed in this chapter, the TCP encapsulation of a message consists of a header of minimum length 20 bytes, followed by the actual data to be transmitted. The maximum length of an IEC 60870-5-104 message is 120 bytes.
If the communication partners have more information to transmit, more messages are sent. Using IEC 60870-5-104 (over TCP) has the advantage that no message is lost without the knowledge of both communication partners. Each message must receive a confirmation within deadlines defined for the communication, configured here as follows: the time-out for establishing a connection (t0) is 90 seconds; the time-out for sending or testing an IEC 60870-5-104 message (t1) is 45 seconds, and for confirmation (t2), if there are no messages, it is 30 seconds (t2 < t1); the time-out for transmitting test frames (t3), in the case of a prolonged idle state, is 170 seconds (RTU) and 180 seconds (DC, the monitoring center), respectively.

Thus, if a partner does not receive the confirmation message before the related deadline expires, it is assumed that the connection is broken and needs to be restored. The reestablishment of the connection with an RTU is done through a procedure known as the station query. Each RTU is connected to each of the two CDs (central dispatchers), but only one is active (the Main CD) while the other is passive (the Secondary CD, whose availability is monitored through a heartbeat message issued by the central hub; a heartbeat is a message sent from an initiator to a destination that allows the destination to detect if and when the initiator fails or becomes unavailable). These connections are formed and periodically checked with heartbeat messages, which are not relevant for estimating the satellite bandwidth. After the connection with an RTU becomes active and the station query procedure completes, only changes in the process status are communicated to the control stations, and therefore the communication time-outs listed above are not relevant for estimating the satellite bandwidth either. In order for the control center to have an accurate representation of the gas transmission system, it initiates, for each RTU, the collection of the digital (DI) and analog (AI) input data. The same procedure applies if the active links are interrupted and the passive links are activated, which means that the loss of the active link inevitably leads to increased traffic. For collecting the metering information (CI), a query command is initiated from the control stations, or, based on the RTU clock, every 3 minutes the metering values are transmitted spontaneously from the RTU to the control stations, starting at second 0 and finishing by second 20. In order to estimate the bandwidth required for the satellite as well as the terrestrial communication, it is necessary to know the lengths of the messages and the frequency of their transmission.
The communication between the dispatchers (DC) or control stations and the locations in the field (RTUs or local stations) is asymmetric: the RTUs collect the process data and send it to the control stations, generating much higher traffic than the data flowing from the control stations to the RTUs.



The communication within a station (the RTU receives data both through acquisition and through communication with other devices of the same local station) is not relevant for the satellite communication. The communication in the satellite network is done through frames of 186 bytes [30]. In a satellite-specific data container of 186 bytes, the connection can theoretically carry (186 − 6)/17 ≈ 10 measured values of type M_SP_TB_1 or (186 − 6)/21 ≈ 8 measured values of type M_ME_TF_1, without taking the TCP headers into account. If the message content is less than 186 bytes, the empty space is padded with zeros. The bandwidth used for communication will therefore be a multiple of 186 bytes/second.
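The container-packing arithmetic above can be checked with a few lines of Python (the 6-byte container overhead and the ASDU sizes are the chapter's figures; TCP headers are ignored, as in the text):

```python
# How many IEC 60870-5-104 ASDUs fit one 186-byte satellite container.
CONTAINER_BYTES = 186
CONTAINER_OVERHEAD = 6
ASDU_M_SP_TB_1 = 17   # single-point information with timestamp
ASDU_M_ME_TF_1 = 21   # measured value with timestamp

def values_per_container(asdu_bytes):
    """Whole ASDUs of the given size that fit the usable container payload."""
    return (CONTAINER_BYTES - CONTAINER_OVERHEAD) // asdu_bytes

print(values_per_container(ASDU_M_SP_TB_1))   # 10 values per container
print(values_per_container(ASDU_M_ME_TF_1))   # 8 values per container
```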

6.4.2 Case study

In the case analyzed in this chapter, the total number of information (I/O) points is 41,220, and their distribution by data type is illustrated in Table 6.2, where: DI is a digital input (such as a valve status, open/closed, expressed in bits); AI is an analog input (the measured value of an analog quantity, such as pressure, temperature, or flow); CI is counting information, representing a metering value; DO is a digital output (a digital output signal of the RTU, such as a valve actuation command, close/open, expressed in bits); and AO is an analog output (an analog output signal of the RTU, such as the operating-point setting of a pressure regulator). Transmitting a message to the control stations is an event that occurs spontaneously and randomly, only when changes in the state of the monitored process occur; therefore each change event produces a message that occupies satellite bandwidth. The exchange rate (experimental values, compared against the values measured during the implementation of the SCADA system) depends on the type of information and is calculated as follows:

- number of all digital inputs (DI) × 0.1 %/s;


- number of all analog inputs (AI) × 1.5 %/s.

For the measured values given in Table 6.2 and the given exchange rates, the bandwidth for the satellite network was calculated (for the satellite subsystem in Table 6.3 and for the terrestrial subsystem in Table 6.4). It turns out that a significant part of the satellite bandwidth is consumed due to the character of the protocols used: the TCP traffic consists of very short messages. The counting query is performed at fixed intervals of 3 minutes, in which case, for each metering point, a metering value reflecting its state is communicated. The metering values are packed into IEC 60870-5-104 messages of maximum ASDU size. The bandwidth is estimated in Table 6.5 so that the procedure can be completed in 20 seconds.

TABLE 6.2 Data information that reflects the status for one month of operation.

DI | AI | CI | DO | AO












TABLE 6.3 Estimation of bandwidth in the satellite subsystem.

Data information (Table 6.2)   | Exchange rate
DI (0.1 %/s)                   | 8 [messages/s]
AI (1.5 %/s)                   | 450 [messages/s]
Total (DI + AI)                | 458 [messages/s]
Dimension/frame length         | 186 [bytes]
Satellite bandwidth            | 458 × 186 = 85,188 [bytes/s]
Satellite bandwidth            | 85,188 / 1024 = 83.19 [kB/s]; × 8 = 665.53 ≈ 666 [kbps]



TABLE 6.4 Estimation of bandwidth in the terrestrial subsystem.

Message type IEC 60870-5-104 | Dimension ASDU | Total bytes IEC 60870-5-104 | Total bytes TCP (IEC 60870-5-104 bytes plus headers)
M_SP_TB_1 (DI)               | 17 [bytes]     | 17 × 8 = 136 [bytes/s]      | 136 + 20 × 8 = 296 [bytes/s]
M_ME_TF_1 (AI)               | 21 [bytes]     | 21 × 450 = 9450 [bytes/s]   | 9450 + 20 × 450 = 18,450 [bytes/s]
Terrestrial bandwidth (TCP)  | 296 + 18,450 = 18,746 [bytes/s]
Terrestrial bandwidth (TCP)  | 18,746 / 1024 = 18.3 [kB/s]; × 8 = 146.45 ≈ 146 [kbps]
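The monitoring-traffic figures of Tables 6.3 and 6.4 can be reproduced with a short script (the constants are the chapter's; the variable names are mine):

```python
# Monitoring traffic: 8 spontaneous DI messages/s and 450 AI messages/s.
DI_MSGS_PER_S = 8
AI_MSGS_PER_S = 450
FRAME_BYTES = 186            # satellite container size
TCP_HEADER = 20              # minimum TCP header, bytes
ASDU_DI, ASDU_AI = 17, 21    # M_SP_TB_1 and M_ME_TF_1 ASDU sizes, bytes

# Satellite subsystem: every message occupies a full 186-byte frame.
satellite_Bps = (DI_MSGS_PER_S + AI_MSGS_PER_S) * FRAME_BYTES   # 85,188 bytes/s
satellite_kbps = satellite_Bps / 1024 * 8                       # ≈ 666 kbps

# Terrestrial subsystem: ASDU plus a TCP header per message.
terrestrial_Bps = (ASDU_DI + TCP_HEADER) * DI_MSGS_PER_S \
                + (ASDU_AI + TCP_HEADER) * AI_MSGS_PER_S        # 18,746 bytes/s
terrestrial_kbps = terrestrial_Bps / 1024 * 8                   # ≈ 146 kbps
```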

Following the detection of communication errors, the station query is performed, but only for the relevant stations. In order to establish a new active IEC 60870-5-104 communication link, the connection is first established (it is passive by default), after which the control station sends a command to activate the connection; from that moment the station sends useful messages for process monitoring and the counting query. Once the connection is activated, the station query is performed, so the bandwidth, in addition to the other types of traffic mentioned above, must also include the station query. The station query process is common to all the stations that make up the communication system. This process occurs when a control station fails, or when a satellite terminal in a control center fails or cannot connect to the satellite due to adverse weather conditions, so that all redundant connections must be activated. Therefore, in determining the total bandwidth, a bandwidth for the query with predefined information must also be taken into account, as there are situations in which the station query is required (Table 6.6). The required TCP bandwidth is calculated as the sum of the product of the number of IEC-104 messages transmitted and the size of the associated ASDU units (useful data), plus the product of the number of messages and the header size (20 bytes) added by the protocol to each data packet. The bandwidth for the entire satellite communication process (consisting of the monitoring process, the counting query, and the station query) results from summing the three simultaneous, parallel, and independent processes (Table 6.7). According to the estimates in the previous tables, a minimum bandwidth of about 750 kbps is needed for the satellite subsystem and 207 kbps for the terrestrial subsystem, which justifies and recommends a provisioned bandwidth of 1024 kbps, that is, 1 Mbps, for the whole satellite communication process.
Therefore a guaranteed bandwidth requirement of at least 1 Mbps is estimated for all locations, with the communication supporting the international standard IEC 60870-5-104. The necessary satellite bandwidth for proper system operation includes all three communication channels: principal dispatcher to satellite, secondary dispatcher to satellite, and local stations (RTUs) to satellite.
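The summation of the three processes can be written out explicitly (values in kbps taken from the subsystem tables; the counting-query satellite figure of about 6 kbps is 744 bytes/s converted to kbps):

```python
# (satellite, terrestrial) bandwidth per process, in kbps.
monitoring = (666, 146)       # Tables 6.3 and 6.4
counting_query = (6, 3)       # Table 6.5
station_query = (79, 58)      # Table 6.6

satellite_total = monitoring[0] + counting_query[0] + station_query[0]    # 751
terrestrial_total = monitoring[1] + counting_query[1] + station_query[1]  # 207
provisioned_kbps = 1024       # the recommended 1 Mbps guaranteed band
```

The satellite total of roughly 751 kbps is the basis for the "minimum of about 750 kbps" figure, and provisioning 1024 kbps leaves a safety margin.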

6.4.3 Overview of some recent algorithms in detail

The following parameters are taken into consideration for the calculation of the bandwidth [31,32]. The payload is the part of the transmitted data that is the actual intended information; the overhead is the additional data (headers, frames, protocol conversion) needed to transport the data from the field over the VSAT network. This parameter expresses the relation between the TCP/IP data packets and the "VSAT" data frame. The classic calculation, based on the current I/O (modules, Modbus, system) on SCADA, is presented in Table 6.8. On-field formula:

C8 = ((C5 × C4) × (60 + 19 + 54 + 6))/3600/1024 × 8 + ($B$6 × C4 × (60 + 19 + 54 + 6))/3600/1024 × 8    (6.2)

Part 1: ((C5 × C4) × (60 + 19 + 54 + 6))/3600/1024 × 8    (6.3)

Part 1 transforms the spontaneous updates of the information points (IPs) into kbit/s:

1. number of IPs multiplied by updates per hour (C5 × C4);
2. the TCP/IP overhead plus the information point, in bytes: 60 bytes (TCP/IP header) + 19 bytes (IEC-104 average telegram length) + 54 bytes (TCP/IP header) + 6 bytes (IEC-104 telegram acknowledgment);
3. to convert from updates per hour to updates per second, divide by 3600;

TABLE 6.5 Estimating bandwidth for counting queries.

Estimation of bandwidth in the satellite subsystem:
Data information (Table 6.2): CI      | Maximum counts (CI) per IEC 60870-5-104 message: 5 | Number of messages: 396 / 5 ≈ 80
Satellite [bytes/20 s]                | 186 × 80 = 14,880
Band of the satellite subsystem [bytes/s] | 14,880 / 20 = 744
Satellite bandwidth [kbps]            | 744 / 1024 × 8 = 5.81 ≈ 6

Estimation of bandwidth in the terrestrial subsystem:
Message type IEC 60870-5-104          | Dimension ASDU: 81 [bytes]
Total bytes IEC 60870-5-104 [bytes/20 s] | 81 × 80 = 6480
Total bytes TCP (with headers) [bytes/20 s] | 80 × 20 + 6480 = 8080
Terrestrial bandwidth, TCP [bytes/s]  | 8080 / 20 = 404
Terrestrial bandwidth, TCP [kbps]     | 404 / 1024 × 8 = 3.16 ≈ 3
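Table 6.5 can be reproduced as follows (the 5-values-per-message packing is inferred from the 396/5 division in the table; other constants are the chapter's):

```python
import math

# Counting query: ~396 metering points, 5 CI values per IEC 60870-5-104
# message, the whole query completing within a 20-second window.
CI_POINTS, CI_PER_MSG, WINDOW_S = 396, 5, 20
FRAME, ASDU, TCP_HDR = 186, 81, 20

messages = math.ceil(CI_POINTS / CI_PER_MSG)               # 80 messages
satellite_Bps = FRAME * messages / WINDOW_S                # 744 bytes/s on the satellite
terrestrial_Bps = (ASDU + TCP_HDR) * messages / WINDOW_S   # 404 bytes/s over TCP
satellite_kbps = satellite_Bps / 1024 * 8                  # ≈ 6 kbps
```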

TABLE 6.6 Estimation of bandwidth for the station query.

Estimation of bandwidth in the satellite subsystem:
Data information (Table 6.2) | Maximum information items per IEC 60870-5-104 message | Number of messages
DI                           | 28 | 7560 / 28 = 270
AI                           | 14 | 30,000 / 14 = 2142.85 ≈ 2143
Frame dimension [bytes]      | 188
Satellite [bytes/45 s]       | 188 × 2413 = 453,644
Satellite subsystem band [bytes/s] | 453,644 / 45 ≈ 10,081
Satellite bandwidth [kbps]   | 10,081 / 1024 × 8 = 78.75 ≈ 79

Estimation of bandwidth in the terrestrial subsystem:
Message type IEC 60870-5-104 | Dimension ASDU [bytes] | Total bytes IEC 60870-5-104 | Total bytes TCP (with headers)
DI                           | 118 | 118 × 270 = 31,860   | 270 × 20 + 31,860 = 37,260
AI                           | 118 | 118 × 2143 = 252,874 | 2143 × 20 + 252,874 = 295,734
TCP [bytes/45 s]             | 37,260 + 295,734 = 332,994
Terrestrial bandwidth, TCP [bytes/s] | 332,994 / 45 = 7399.86 ≈ 7400
Terrestrial bandwidth, TCP [kbps]    | 7400 / 1024 × 8 = 57.81 ≈ 58
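The station-query figures of Table 6.6 follow from the same pattern (all constants are the chapter's; the query must complete within the 45-second t1 window):

```python
import math

# Station query (general interrogation): all DI/AI points are dumped in
# large ASDUs and must be delivered within 45 seconds.
DI, AI = 7560, 30000
DI_PER_MSG, AI_PER_MSG = 28, 14            # points per IEC 60870-5-104 message
ASDU, FRAME, TCP_HDR, WINDOW_S = 118, 188, 20, 45

messages = math.ceil(DI / DI_PER_MSG) + math.ceil(AI / AI_PER_MSG)    # 270 + 2143
satellite_kbps = FRAME * messages / WINDOW_S / 1024 * 8               # ≈ 79
terrestrial_kbps = (ASDU + TCP_HDR) * messages / WINDOW_S / 1024 * 8  # ≈ 58
```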



TABLE 6.7 Estimation of bandwidth for the entire satellite communication process.

Process type                                   | Bandwidth in the satellite subsystem (kbps) | Bandwidth in the terrestrial subsystem (kbps)
Monitoring process                             | 666 | 146
Counting query process                         | ≈6  | ≈3
Station query process                          | 79  | 58
Bandwidth for the entire communication process | ≈750 | 207

4. to convert from bytes to bits, multiply by 8;
5. to convert from bits to kilobits, divide by 1024.

Part 2: ($B$6 × C4 × (60 + 19 + 54 + 6))/3600/1024 × 8


Part 2 adds the average amount of data transmitted due to the general interrogation scan:

1. average general interrogation scans per hour ($B$6 × C4); the value is based on experience and on measurements made during the implementation, testing, and operation of the TGRO SCADA system;
2. the TCP/IP overhead plus the information point, in bytes: 60 bytes (TCP/IP header) + 19 bytes (IEC-104 average telegram length) + 54 bytes (TCP/IP header) + 6 bytes (IEC-104 telegram acknowledgment);
3. to convert from updates per hour to updates per second, divide by 3600;
4. to convert from bytes to bits, multiply by 8;
5. to convert from bits to kilobits, divide by 1024.

The fields D8, E8, F8, and G8 are calculated in the same way. Field B8 is the sum of C8, D8, E8, F8, and G8, plus the traffic on the second (passive, not active) VSAT channel to the second control center. The traffic on the passive channel is based on the t3 parameter of the IEC 60870-5-104 protocol (the time-out for sending test frames in case of a long idle state), which is 180 seconds for the control center. The above calculation gives the absolute minimum for pure TCP/IP-based networks and normal operation (minimum kbit/second). For burst operation of the SCADA system under bad weather conditions, in order to guarantee 99.99% availability, a buffer of about 2/3 of the required bandwidth (BW, in kbit/second) needs to be provided. Generally, the minimum bandwidth is calculated for the system under perfect external conditions, that is, assuming the weather has no influence. In bad weather conditions there are packet retransmissions, heavier error correction on the VSAT link, a worse modulation/demodulation factor, a lower signal-to-noise ratio, and so on; all these factors mean that less data can be transmitted over the satellite connection. In conclusion, the bandwidth needs to be calculated for the worst-case scenario. As an example, we present the calculations for the satellite network (gross BW, kbit/second).
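The Part 1 / Part 2 decomposition above can be sketched as a function, with the spreadsheet cell references turned into parameters (the $B$6 × C4 product is implemented exactly as the printed formula states it):

```python
# Overhead per IEC-104 transaction, in bytes: TCP/IP headers (60 + 54),
# average telegram (19), and acknowledgment (6), as in steps 1-5 above.
OVERHEAD_BYTES = 60 + 19 + 54 + 6

def cell_c8(ips, updates_per_hour, gi_scans_per_hour):
    """Minimum bandwidth in kbit/s for one RTU group: spontaneous updates
    (Part 1) plus average general-interrogation traffic (Part 2)."""
    part1 = (ips * updates_per_hour) * OVERHEAD_BYTES / 3600 / 1024 * 8
    part2 = (gi_scans_per_hour * updates_per_hour) * OVERHEAD_BYTES / 3600 / 1024 * 8
    return part1 + part2
```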
The satellite system is not a pure IP packet distribution system: transmission is based on MF-TDMA (a technology for dynamically sharing bandwidth resources in an over-the-air, two-way communications network), and bursts are based on container slots with a capacity of approximately 184 bytes. Some stations send packages smaller than the container slot and need only one burst to transmit their data [33]. Other remotes have larger packages and need two or more burst slots to transmit their data. Because of the fragmentation and the null-packet filling of the remainder of each container, the bandwidth actually needed or used by the network is much higher. With the current packet length distribution, considering that each DVB-RCS (satellite) burst carries a 177-byte payload (the remaining bytes are for packet management) and that bursts are not delayed, most RCS bursts are sent with more than 70% of the payload empty. During a switchover between the two control centers, or after a connection break in one of them, there is a burst of data (general interrogation scan) that needs to be transmitted. In such cases the fragmentation decreases and the payload utilization rises to about 70%. Given that part of the bandwidth is also used for monitoring and management from the satellite hub, we can distinguish a gross and a net bandwidth for the satellite network [34]. The net bandwidth is the one calculated for the pure TCP/IP-based network (= the required bandwidth in kbit/second). The gross bandwidth is the real satellite bandwidth required to carry the net bandwidth. Due to the packet distribution explained above, the required gross bandwidth is almost double the net bandwidth.
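The gross-versus-net relationship described above can be illustrated with a small sketch, under the assumption (ours, not the chapter's exact model) that the gross bandwidth scales inversely with the average utilization of the 177-byte burst payload:

```python
BURST_PAYLOAD = 177  # usable payload bytes per DVB-RCS burst, per the text

def gross_bandwidth(net_kbit_s, avg_payload_utilization):
    """Gross on-air bandwidth needed for a given net TCP/IP bandwidth
    when bursts carry only a fraction of their payload capacity."""
    return net_kbit_s / avg_payload_utilization
```

At roughly 50% average payload utilization, the gross bandwidth comes out at about double the net bandwidth, matching the chapter's rule of thumb.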

TABLE 6.8 The calculation of the bandwidth based on the number of I/Os for 906 RTUs.

[The table body was not recoverable from the extraction. Its columns are: calculation based on the current I/O on SCADA; update IPs/h; general interrogations/h; test frames/h; minimum BW (bandwidth) kbit/s; required BW kbit/s; and gross BW kbit/s.]
Applications of Computational Intelligence in Multi-Disciplinary Research

In conclusion, to fulfill the requirement for reaction time, and mainly for system stability and an availability of 99.99%, a bandwidth of 1182 kbps is required according to the calculations.

First scenario:

Bad weather conditions in one of the regions with 90 VSAT connections: depending on the current satellite signal strength, the VSAT connections will drop and come back again during thunderstorms or heavy rain. The system will reconnect the affected terminals many times for the duration of the bad weather. Each reconnection of a VSAT channel causes a general interrogation scan. For the average calculation, 10% of the IPs of the whole system (the IPs of those 90 affected VSAT connections) will be interrogated on each reconnection.

Second scenario:


Bad weather conditions in the Principal region (where the control center is located): depending on the current satellite signal strength, the VSAT connection will be interrupted from time to time during thunderstorms or heavy rain. The system (including the VSAT terminal in the Principal region) will reconnect many times for the duration of the bad weather. Each reconnection of the VSAT channel of the terminals in the Principal region causes a general interrogation scan. The system will switch all terminals connected to Principal over to the control center at Secondary. All IPs of those terminals (the IPs of all switched connections) will be interrogated.

Third scenario (worst case):


Bad weather conditions in both the Principal and Secondary regions (where both control centers are located): depending on the current satellite signal strength, the VSAT connections will drop and come back again during thunderstorms or heavy rain. Because both VSAT connections to the control centers are affected, all IPs of the whole system will be interrogated on each reconnection, on each switch of the channel between Principal and Secondary, or generally on each switch between GPRS and VSAT.

6.4.4 Validation of bandwidth calculations

The graphs in Fig. 6.9, taken from the hub's internal management software, show the traffic that traveled over the satellite network during a 3-month period and confirm that the results obtained above are correct. The bandwidth required for the entire communications system is sized to operate in a "worst-case scenario" (under adverse weather conditions, or when manually disconnecting from the primary server and connecting to the secondary server), and not just under normal operating conditions. Such a "worst-case scenario" is one in which all VSAT terminals connected through the primary server must be connected to the secondary server. The information communicated by the VSAT provider (Fig. 6.9) shows the maximum and average values of the traffic generated by the SCADA system over a period of 3 months (considered normal operating months); these data illustrate that, over the monitoring period, there were daily peaks where the bandwidth actually used far exceeded the allocated 512 kbps. Therefore the increase in data traffic caused by the individual terminals required an increase in the overall bandwidth. If a bandwidth of the value estimated above is not provided, then as traffic increases, malfunctions and errors will occur:

- invalidation of a high number of metering values;
- connections with individual stations will be lost, and their reconnection will consume additional bandwidth;
- as a result of the above symptom, inflationary effects may occur, which ultimately lead to the loss of all communication links with all stations.

As the number of I/O points in the SCADA system increased, it was necessary to increase the bandwidth as well, in order to ensure the collection and transmission of data from all newly introduced points. Therefore, to cover the data traffic surplus caused by increasing the number of I/O points from 16,220 to 41,220, the bandwidth had to be increased from 512 to 1024 kbps. It is also necessary to install, configure, parameterize, and provide the necessary software for the operation of an additional 25,000 I/O points, along with an extension of two software licenses, for both the main dispatcher server and the secondary dispatcher server, to cover a data model of up to 50,000 process points. The main elements of the SCADA system are the monitoring, control, and acquisition of data through the RTUs and the remote control of the various elements. Both hardware and software elements are required to perform these functions.

Estimation of the satellite bandwidth Chapter | 6

FIGURE 6.9 Traffic peaks in the satellite network (A and C) and the average traffic in the satellite network over 3 months.




The VSAT communication system is the main one; once communication with the satellite is restored, the system automatically switches from GPRS back to satellite [35]. The redundant GPRS system does not work in parallel with the VSAT system, so it cannot partially take over some of the traffic when the data traffic is high: the GPRS and VSAT systems are alternatives. It is not possible to simulate in real life a worst-case scenario in which communication with the DCs and the local stations (RTUs) is lost at the same time. Generally, under normal operating conditions, the local stations transmit only those values that have changed since the last transmission (i.e., report by exception), so the data traffic is relatively low. But when communication with one of the local stations (RTUs) is lost, the DC sends a general interrogation to the station, and the station transmits all available data (only the current data, not the historical data), which generates a higher traffic value. If there is not sufficient bandwidth to allow the DC to receive the response in the required time, a new general interrogation will be generated (and potentially an avalanche of general interrogations), which could cause bottleneck situations.


Challenges and future work

The IEC 60870-5-104 protocol specifies the format of the application protocol data unit (APDU): a variable-length telegram containing a 6-byte Application Protocol Control Information (APCI) header and an ASDU that may contain one or more data objects (e.g., one or more measured values). The idea of grouping several measured values in a 186-byte data container specific to the satellite connection could therefore appear attractive [36,37]. This would only be suitable for determining the bandwidth below the satellite subnetwork (Fig. 6.10), without taking into account the bandwidth required to query the stations. Thus a 186-byte data container specific to the satellite connection can theoretically hold (186 − 6)/17 = 10 measured values of type M_SP_TB_1, or (186 − 6)/21 = 8 measured values of type M_ME_TF_1, without taking into account the TCP headers. Considering the most unfavorable case, grouping several messages in the same container could theoretically reduce the required bandwidth calculated above (665.5 kbit/second) by a factor of 8. Regarding the possibility of temporarily retaining some values for a later grouped transmission, it must be considered whether the measured values may be retained for a period (rather than transmitted as soon as they become available) and how long that period is. The SCADA system has a data recovery mechanism (including for measured values). The existing RTU equipment at the locations permanently and cyclically stores the process-measured and/or status values in its internal memory so that, upon a breakdown of the telecommunications system, the values are automatically retransmitted to the central point system servers after the service is restored.
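The container-packing arithmetic above can be checked with a few lines of code. The 17- and 21-byte object sizes for M_SP_TB_1 and M_ME_TF_1 and the 6-byte APCI header come from the text; the function itself is a hypothetical sketch:

```python
CONTAINER = 186   # satellite container size (bytes)
APCI = 6          # IEC 60870-5-104 APCI header (bytes)

def objects_per_container(object_size):
    """Whole ASDU objects of the given size fitting in one container."""
    return (CONTAINER - APCI) // object_size

print(objects_per_container(17))  # M_SP_TB_1 -> 10
print(objects_per_container(21))  # M_ME_TF_1 -> 8
```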

FIGURE 6.10 High-level architecture of the supervisory control and data acquisition system.



Regarding the storage capacity of the data recovery mechanism, it is limited by the physical memory available in the RTU equipment; that is, the data/values are stored cyclically. In our real use case, the maximum number of measured values per telegram is fixed and cannot be increased. IEC-104 is a so-called "report by exception" protocol: any (measured) value change above a set threshold is transmitted in real time. In line with the IEC-104 standard, the RTU does not wait to fill the telegram up to its maximum possible length before transmitting it to the central SCADA. The same applies, of course, to any other type of signal, such as binary signals or counter values. If the messages are generated from about 40,000 points and routed through local networks to about 1000 stations, then, assuming a uniform distribution of points, each station corresponds to about 40 points, and the grouping of data could be performed only at the station level. At a message rate of 1.5% of the total per second, a 40-point station would therefore have to transmit about 0.6 messages/second (roughly 1 message every 2 seconds), which fits without problems in the 186-byte satellite container. This would result in an overall system total of 186 × 1000 = 186,000 bytes per 2 seconds (approximately 726 kbit/second). The above estimations do not use exact figures; they have a qualitative rather than a quantitative value, providing an order of magnitude and showing that the grouping of ASDU objects in APDU-type containers is not feasible, the values needing to be sent one by one in order not to cause delays. Therefore, in the future, we want to generalize the algorithm described above to other modern types of satellite telecommunications that use different communication protocols.
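The order-of-magnitude figures above can be reproduced directly; this is an illustrative back-of-the-envelope check using the chapter's assumed rates, not a measurement:

```python
points = 40_000       # process points in the system
stations = 1_000      # stations the points are distributed over
rate = 0.015          # 1.5% of all points change per second

# messages per second that a single (40-point) station must send
msgs_per_station = points * rate / stations        # 0.6 messages/second

# one 186-byte container per station every ~2 seconds, in kbit/s
container_bytes = 186
total_kbit_s = stations * container_bytes * 8 / 2 / 1024
```

Running this gives 0.6 messages/second per station and roughly 726 kbit/second system-wide, matching the text.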



Often, during the execution of a SCADA project, there is a need to increase the number of I/O points from a certain value to a much larger one, which implies the purchase of additional software for the additional I/O points, but also the calculation of the bandwidth needed for data transmission. SCADA systems are usually scalable and can support extension by an additional number of I/O points following the introduction of additional RTUs. In case all the terminals are switched to one of the two command centers (e.g., from the Main Dispatcher to the Secondary), the bandwidth will increase because the system must issue general interrogations to all switched stations. In the analyzed case, the increase of the VSAT bandwidth from 512 to 1024 kbps is due to the fact that in the VSAT satellite communication system the bandwidth is shared between all the terminals included in the VSAT network. The results of the monitoring carried out over a period of 3 months, as provided by the satellite communications service provider, which include both the traffic peaks and the average traffic, validate the described algorithm. The algorithm, presented in the form of calculation tables, estimates fairly precisely the satellite bandwidth required when additional local locations are introduced into the SCADA system. It takes into account the type of connectivity and access and is of real use when the satellite services cover very large areas (national territory) in which there are no last-kilometer limitations. The reliability of satellite communications is recognized in cases of natural disasters or terrorism (the lifetime of the satellites can reach 18 years or more).

References

[1] I. Ivanković, D. Peharda, D. Novosel, K. Žubrinić-Kostović, A. Kekelj, Smart grid substation equipment maintenance management functionality based on control center SCADA data, Journal of Energy 67 (3) (2018) 30–35.
[2] R. Amoah, S. Camtepe, E. Foo, Securing DNP3 broadcast communications in SCADA systems, IEEE Transactions on Industrial Informatics 12 (4) (2016) 1474–1485.
[3] P. Bibik, S. Gradolewski, W. Zawiślak, J. Zbudniewek, Problems of detecting unauthorized satellite transmissions from the VSAT terminals, in: Military Communications and Information Technology: A Trusted Cooperation Enabler, vol. 1, 2012, pp. 431–439.
[4] F. Teschl, M. Binder, L. Luini, Ka-band signal statistics derived from a commercial VSAT, in: 11th International Symposium on Communication Systems, Networks & Digital Signal Processing, Budapest, Hungary, 2018, pp. 1–5.
[5] M.C. Popescu, M.M. Balas, Thermal consumptions: control and monitoring, in: 3rd International Workshop on Soft Computing Applications, Szeged, Hungary / Arad, Romania, July 2009, in: Proceedings, IEEE Catalog Number CFP0928D-PRT, Library of Congress 2009907136, 2009, pp. 85–91.
[6] B. Zhu, A. Joseph, S. Sastry, A taxonomy of cyber attacks on SCADA systems, in: International Conference on Internet of Things and 4th International Conference on Cyber, Physical and Social Computing, 2011, pp. 380–388.
[7] B. Qiu, H.B. Gooi, Y. Liu, E.K. Chan, Internet-based SCADA display system, IEEE Computer Applications in Power 15 (1) (2002) 14–19.
[8] M.S. Thomas, J. Douglas McDonald, Power System SCADA and Smart Grids, first ed., CRC Press, Boca Raton, 2015.



[9] A. Rezai, P. Keshavarzi, Z. Moravej, Key management issue in SCADA networks: a review, Engineering Science and Technology, an International Journal 20 (1) (2017) 354–363.
[10] A.S. Dobakhshari, S. Azizi, M. Paolone, V. Terzija, Ultra-fast linear state estimation utilizing SCADA measurements, IEEE Transactions on Power Systems 34 (4) (2019) 2622–2631.
[11] J. Buet, Ph. Durance, VSAT, les réseaux satellites d'entreprise. Applications et sécurité, Masson, Paris, 1996.
[12] L.J. Ippolito, Satellite Communications Systems Engineering, Wiley Online Library, 2017.
[13] J. Cantillo, J. Lacan, I. Buret, Cross-layer enhancement of error control techniques for adaptation layers of DVB satellites, International Journal of Satellite Communications and Networking 24 (2006) 579–590.
[14] B. Perrot, Satellite transmission, reception, and onboard processing, signaling, and switching, in: Handbook of Satellite Applications, Springer, 2017, pp. 497–510.
[15] R. Rinaldo, M.A. Vazquez-Castro, A. Morello, Capacity analysis and system optimization for the forward link of multi-beam satellite broadband systems exploiting adaptive coding and modulation, International Journal of Satellite Communications and Networking 22 (2004) 401–423.
[16] L. Stewart, D.A. Hayes, M. Welzl, A. Petlund, Multimedia-unfriendly TCP congestion control and home gateway queue management, in: Proceedings of the Second Annual ACM Conference on Multimedia Systems (MMSys'11, February), 2011, pp. 35–44.
[17] S. Ushaa, N. Sangeetha, E. Murugavalli, P.G.S. Velmurugan, M.N. Suresh, Spectrum sensing in a VSAT network using cognitive radio technology, in: 1st International Conference on Innovations in Information and Communication Technology, Chennai, India, 2019.
[18] M.C. Popescu, Utilisation des ordinateurs, Universitaria, Craiova, 2004.
[19] C. Carciofi, P. Grazioso, V. Petrini, E. Spina, D. Massimi, G. De Sipio, et al., Co-channel and adjacent-channel coexistence between LTE-TDD and VSAT DVB-S in C-band: experimental campaign on consumer VSAT receivers, in: IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications, 2018, pp. 524–539.
[20] H. Abumarshoud, H. Haas, Index Time Division Multiple Access (I-TDMA) for LiFi systems, in: IEEE 89th Vehicular Technology Conference (VTC2019-Spring), 2019.
[21] M. Popescu, A. Gabor, A. Naaji, Numerical simulation and traffic analysis in carrier voice networks, WSEAS Transactions on Communications 15 (2016) 323–336.
[22] M.C. Popescu, Report of Expertise (in Romanian), Craiova, 2019.
[23] G. Maral, M. Bousquet, Satellite Communications Systems: Systems, Techniques and Technology, fifth ed., Wiley, 2011.
[24] G. Epiphaniou, C. Maple, P. Sant, M. Reeve, Effects of queuing mechanisms on RTP traffic: comparative analysis of jitter, end-to-end delay and packet loss, in: ARES'10 International Conference on Availability, Reliability, and Security, 2010, pp. 33–40.
[25] R. Herrero, M. Cadirola, Effect of FEC mechanisms in the performance of low bit rate codecs in lossy mobile environments, in: Proceedings of the Conference on Principles, Systems and Applications of IP Telecommunications (IPTComm'14), 2014, pp. 1–6.
[26] K. Satoda, K. Nihei, H. Yoshida, Quality evaluation of voice over multiple TCP connections, in: International Conference on Computing, Networking and Communications (ICNC), 2014, pp. 141–146.
[27] Z. Jiazhi, X. Jie, W. Yue, L. Yichao, X. Neng, Service unit based network architecture, in: Proceedings of the Fourth International Conference on Parallel and Distributed Computing, Applications and Technologies, Chengdu, China, 2003, pp. 488–491.
[28] K.C. Leung, V. Li, D. Yang, An overview of packet reordering in transmission control protocol (TCP): problems, solutions, and challenges, IEEE Transactions on Parallel and Distributed Systems 18 (4) (2007) 522–535.
[29] H. Stübing, Multilayered Security and Privacy Protection in Car-to-X Networks, first ed., Springer, Wiesbaden, 2013, p. 29.
[30] M. Alves, E. Tovar, F. Vasques, Evaluating the duration of message transactions in broadcast wired/wireless fieldbus networks, in: Proceedings 27th Annual Conference of the IEEE Industrial Electronics Society (IECON'01), 2001, pp. 243–248.
[31] Z. Kong, B. Tang, L. Deng, W. Liu, Y. Han, Condition monitoring of wind turbines based on spatio-temporal fusion of SCADA data by convolutional neural networks and gated recurrent units, Renewable Energy 146 (2020) 760–768.
[32] L.O. Aghenta, M.T. Iqbal, Low-cost, open source IoT-based SCADA system design using Thinger.IO and ESP32 thing, Electronics 8 (8) (2019) 822.
[33] V. Singh, P.K. Upadhyay, M. Lin, On the performance of NOMA-assisted overlay multiuser cognitive satellite-terrestrial networks, IEEE Wireless Communications Letters 9 (5) (2020) 638–642.
[34] G. Maral, M. Bousquet, Z. Sun, Satellite Communications Systems: Systems, Techniques and Technology, John Wiley & Sons, 2020.
[35] Q.I. Xiaogang, M.A. Jiulong, W.U. Dan, L.I.U. Lifang, H.U. Shaolin, A survey of routing techniques for satellite networks, Journal of Communications and Information Networks 1 (4) (2016) 66–85.
[36] S. Siamak, M. Dehghani, Dynamic GPS spoofing attack detection, localization, and measurement correction exploiting PMU and SCADA, IEEE Systems Journal (2020) 1–10.
[37] A. Abbaspour, A. Sargolzaei, P. Forouzannezhad, K.K. Yen, A.I. Sarwat, Resilient control design for load frequency control system under false data injection attacks, IEEE Transactions on Industrial Electronics 67 (9) (2020) 7951–7962.

Chapter 7

Using artificial intelligence search in solving the camera placement problem

Altahir A. Altahir, Vijanth S. Asirvadam, Nor Hisham B. Hamid and Patrick Sebastian
Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia

Nomenclature

F – Covered area
wI – Width of the image sensor
hI – Height of the image sensor
f – Focal length
r – Effective range of the camera
α – Horizontal angle
hO – Height of the object
p – Satisfactory object height percentage
Ca – Confined angle between Pa and Va
Pa – Pitch angle
Va – Vertical angle of the field of view
rs – Radius of the shadowy sector
H – Elevation of the camera
L – Number of locations
Cl – Previous satisfactory coverage, which includes all the locations from l = 1 up to l − 1
K – Number of camera types
O – Number of orientation angles
M – Width of the map
N – Height of the map
cl,k,o,i,j – Coverage instance given l, k, o, i, j


The current advancements in sensor technologies, the pressing security needs, and the impact of optimal camera placement on the efficiency and cost of surveillance systems motivate researchers to introduce efficient methods for solving what is known as the camera placement problem. In general, the architecture of a visual surveillance system comprises six consecutive stages: video acquisition, preprocessing, segmentation, classification, tracking, and activity understanding. Video acquisition uses a video camera to acquire the image sequence [13]. The preprocessing step reduces the noise, illumination variations, and reflections of light from the surrounding surfaces [48]. The segmentation step subtracts the moving objects from the image background [9]. Segmentation is usually followed by classifying the segmented objects, tracking the objects of interest, and analyzing the activities [1,10–12]. Due to the nature of the tasks to be performed, and to maintain the performance of surveillance systems, the implementation of visual surveillance applications entails a real-time configuration. It is crucial that a proper camera arrangement (i.e., location, type, orientation, etc.) be established prior to camera deployment. Planning the camera placement increases the system's efficiency and also reduces the installation and operational cost, as it is very costly to modify an existing deployment [13].




Given some monitoring quality measures coupled with the specifications of the visual sensors at hand, the goal of a camera placement framework is to maximize the area seen by a set of visual sensors and/or reduce the cost of installing the system. The camera placement problem has been identified as a complex offline discrete optimization problem and is known to be NP-hard. The complexity of the camera placement problem stems from the limitations of the camera capabilities, the representation of the surrounding environment, the application requirements, and the nondeterministic polynomial-time hardness of locating the true optimum in a large state space [14]. A crucial and fundamental step in solving the camera placement problem is modeling the coverage of the cameras in use. Moreover, proper settings of the camera arrays improve the quality of the subsequent postinstallation activities. Optimal placement settings reduce the number of cameras required to achieve a similar or even higher level of the quality measure. This chapter refers to the activities of surveillance systems as preplacement and postplacement activities. The preplacement activities, which concern mounting the visual surveillance sensors, receive less attention from the visual surveillance research community [15]. Early attempts at modeling a basic form of preplacement activity go back to the art gallery problem, which was dedicated to localizing security guards to monitor an art gallery (i.e., an arbitrarily shaped polygon) [16–24]. However, the art gallery problem and its variations lack a feasible representation of the guards' capabilities, which makes them unrealistic for solving real-world camera placement problems [16]. The last two decades witnessed the actual emergence of the research area pertaining to the camera placement problem.
In general, the objective of optimal placement is to find, among all configurations, suitable settings that yield maximum visibility [25]. However, this goal varies slightly with application requirements, which drive the selection of camera arrangements satisfying predetermined coverage quality measures [15,26]. Many techniques have been proposed recently to locate a proper camera placement [19,24]. The modeling stage describes what the system can view, which reflects the fundamental necessity of any computer vision task [27]. The ideal case of camera placement handles the positioning and configuration of the camera in continuous space [18,19]. However, more recent research tends to represent the camera placement problem in a digitized space [28]. With the camera positions and poses represented in a discrete domain, the visual sensor model produces the camera coverage, which describes the covered area in a geometrical 2D or 3D space. The 2D representation of coverage is broadly used in modeling the placement problem, as it offers the minimum computational cost in terms of memory and time and provides high applicability compared to 3D modeling (i.e., a 2D map from CAD software or a satellite imaging application versus building a complete 3D model). Moreover, the 2D representation equips the model with simplicity of derivation and evaluation [18,24]. Many modeling and solution techniques have been proposed to address the camera placement problem, for instance, the art gallery formulation [21], location set covering [29], and integer programming [18,30]. The subsequent optimization computations rely on proper camera coverage modeling. In particular, artificial intelligence search strategies are extensively used in solving discrete optimization problems. This chapter discusses the implementation of several artificial intelligence search strategies to optimally select the proper settings of the camera network.
Most of these search formulations view the problem from a greedy perspective, and the solutions are usually attained by utilizing random-restart settings. The chapter carries out an analytical review of three main search algorithms, namely generate-and-test, uninformed search, and hill climbing. Accordingly, the placement of the surveillance cameras is articulated as a maximization problem in which the search algorithms are used to yield the settings that result in maximum coverage. The placement results obtained with these algorithms are critically compared in terms of the algorithms' efficiency as well as practical performance. Finally, the chapter highlights the strengths and weaknesses of each approach. This introductory section provides the reader with general ideas on the various visual surveillance applications and the importance of adopting efficient camera placement practices, and concludes by highlighting the implementation of artificial intelligence search strategies to solve the camera placement problem.

7.1.1 The roles of visual surveillance systems

Visual surveillance is one of the important security applications of camera networks, where the camera nodes are distributed in order to monitor a particular area of interest [31,32]. The research interest in visual surveillance is guided by the assignment of the camera as a means to discover, verify, and assist in thwarting illegal endeavors [10,33]. A visual surveillance system aims to monitor, detect, track, and recognize certain objects from a sequence of images; in general terms, this can be depicted as describing and understanding human actions from video feeds [11,34,35]. In addition to the security role of video surveillance, the advancements in camera technology and the enormous amount of surveillance equipment up and running in the streets have attracted researchers to develop new methods for automated

Using artificial intelligence search in solving the camera placement problem Chapter | 7


crowded scene analysis [36,37] and biometric recognition applications [38,39]. Thus the research on surveillance-based measurement consists of five classes, namely counting people [36,40], traffic monitoring [41–43], face detection and recognition [44–46], health monitoring [47], and detecting the occurrence of particular events based on real-time video feeds [48]. These advances in technology are now evolving on a multidimensional scale into an active research area that focuses on implementing vision-based measurement tasks under the umbrella of visual surveillance [48–51]. Recently, surveillance systems have adopted a new role in assisting in the monitoring of the imposed social distancing measures using video feeds in order to tackle the spread of the coronavirus [52]. With all these developments in video surveillance systems, researchers in the field of optimization and computational intelligence are required to introduce more sophisticated camera placement methods. Regardless of the camera network application, however, deploying the cameras is one of the fundamental aspects influencing the efficiency as well as the installation and running cost of the application [53–55]. Therefore there is a need to adopt efficient camera installation practices in order to increase the efficiency and decrease the mounting and running costs of surveillance systems [24,56–60]. Usually, the selection of CCTV cameras depends on the nature of the activity to be observed as well as the nature of the objects to be monitored [34,61].

7.1.2 The camera placement problem from an artificial intelligence perspective

Daily life is full of discrete decisions; it is rather a sequence of such decisions. Discrete decisions are encountered when choosing whether to go out or stay at home, or whether to solve the math homework or write the physics paper. Usually, choosing a decision from a finite set of mutually exclusive choices involves calculating the cost, risk, or pleasure that might follow. Informally, such a calculation procedure seeks to maximize or minimize a form of cost function and is known as optimization. Based on the objective function and parameters, problems can broadly be classified as continuous or discrete optimization problems. Artificial intelligence search techniques are among the most widely used methods for solving discrete optimization problems. These techniques attempt to locate the solution by exploring the state space of the problem (i.e., the set of available potential choices). Thus, given the definition of placement as a discrete optimization problem, the optimization method's role is to look for the combination of settings that maximizes the portion of the area of interest visible to a set of cameras [62,63]. In particular, artificial intelligence search strategies are used extensively in solving the camera placement problem due to the formulation's simplicity and computational efficiency [18,60]. Thus a more in-depth analytical investigation of the search strategies used for solving the camera placement problem will improve the quality of the produced solutions. Typical visual sensor placement arrangements handle the sensor space uniformly to achieve the optimization objective [64–66]. The solution techniques receive the coverage modeling outcomes and perform the optimization. Fig. 7.1 demonstrates the relation between the coverage table, which constructs the search space, the optimization method, and the optimization outcomes in a typical camera placement problem. As can be seen from Fig. 7.1, the

FIGURE 7.1 Obtaining the optimal camera placement outcomes based on the relation between the instances of the camera coverage and the optimization method.


Applications of Computational Intelligence in Multi-Disciplinary Research

optimization method requires forming the coverage table (i.e., all coverage instances per potential location). Then the optimization method operates on the coverage table in order to produce the optimal pose/location combinations.

7.1.3 Chapter description

This chapter illustrates in detail the modeling, problem formulation, and solution for researchers who wish to carry out research on the camera placement problem. The attempt here is to explain the various search strategies used for solving the camera placement problem. The chapter concentrates on three main approaches: generate and test, uninformed search, and hill climbing algorithms. These approaches are considered with a random restart setting. To examine these strategies, the chapter carries out an empirical study using two maps; the first map is an artificial map that describes a city complex, and the second map is a real-world map that shows part of a university campus, used as the sensor space with six and eight possible camera locations, respectively. Furthermore, the chapter models three commercial surveillance cameras to perform placement optimization. Finally, the placement results of these strategies are critically compared in terms of performance and efficiency, and the strengths and weaknesses of each approach are highlighted. In this chapter, Section 7.2 considers background topics associated with camera placement; Section 7.3 illustrates the modeling of a visual sensor in 2D space; Section 7.4 deliberates on solving the camera placement problem using artificial intelligence search, including the generate and test, uninformed search, and hill climbing algorithms; Section 7.5 further extends the discussion to the efficiency and performance of these algorithms; and Section 7.6 concludes with a summary of the topics discussed in the chapter.



Selecting proper surveillance camera installation measures is commonly cast as combinatorial optimization and is usually associated with the class of NP-hard problems [14,67,68]. That is, as the state space grows, the computation time for an exhaustive search becomes tremendously large, even for state spaces of modest size [69]. To understand this argument, consider a simple analogy of a typical discrete optimization problem. A person needs to travel from city A to city C via an intermediate city B, where three different buses are available from A to B and two buses are available from B to C. The costs of the buses from A to B are $2, $1, and $2.50, and the costs from B to C are $2 and $3, respectively (refer to Fig. 7.2). The objective is to reach city C at the minimum possible cost. For this simple case, the objective can be defined as minimizing the cost of traveling from A to C; this type of problem can be solved intuitively by exploring all the possible combinations of traveling costs from A to C and picking the minimum. Based on the number of ways to travel from A to B and from B to C, it is easy to verify that there are 3 × 2 = 6 available solutions. The set of all possible solutions forms what is known as the state space of the problem, and the intuitive solution procedure we have followed represents the optimization technique. The complexity of such a problem increases dramatically when additional constraints are introduced into the objective, such as the time to reach the destination, or when the number of available choices grows. Equipped with this basic understanding of the state space, we can formulate the camera placement problem in the same way. Consider installing a single camera in a predetermined location. The objective here is to choose the orientation angle that maximizes the camera coverage. Fig. 7.3 illustrates the case of a single camera with a varying orientation angle. Artificial intelligence search strategies are extensively used to address the camera placement problem [70]. However, before diving into the placement optimization, we need to discuss the modeling of the camera coverage and

FIGURE 7.2 A simplified analogy illustrates three cities and the generation of a set of possible choices to travel from city A to city C based on the available routes and transportation means (the generation of a search space based on three nodes).
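The exhaustive enumeration described in the bus analogy can be sketched in a few lines of Python (a toy illustration; the costs are taken from the analogy in the text):

```python
from itertools import product

# Costs of the available buses on each leg (from the analogy above).
a_to_b = [2.0, 1.0, 2.5]   # three buses from city A to city B
b_to_c = [2.0, 3.0]        # two buses from city B to city C

# The state space: every combination of one A->B bus and one B->C bus.
state_space = [(c1, c2) for c1, c2 in product(a_to_b, b_to_c)]
assert len(state_space) == 6  # 3 x 2 possible journeys

# Exhaustive search: evaluate the cost of every state and keep the minimum.
best = min(state_space, key=lambda s: s[0] + s[1])
print(best, sum(best))  # cheapest journey: the $1 bus then the $2 bus, total $3
```

The same pattern of enumerating a combinatorial state space and scoring each state is what becomes infeasible once additional legs or choices multiply the number of combinations.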

Using artificial intelligence search in solving the camera placement problem Chapter | 7


FIGURE 7.3 An illustration of the generation of a set of possible choices of camera coverage based on altering the orientation to maximize the coverage.

the sensor space. The modeling of the field of view of a visual sensor falls into two main categories [27]. The first category involves estimating the visible scene parts viewed by a camera; this is usually referred to as the camera geometric coverage model. The geometric coverage model attempts to estimate the coverage of a particular area unit in R^n space and, in some cases, to assess the coverage quality [27]. The second category is concerned with describing the relationships between the cameras and their respective fields of view; this model is usually known as the topological coverage model [27]. The visual sensor space modeling, on the other hand, is typically carried out using uniformly distributed grid points, so that the sensor space is represented in 2D or 3D [71,72]. To apply the camera model to the sensor space model, a visibility assessment routine is needed: the visibility of the grid points contained in the field of view is assessed based on the topology of the given scene as represented by the sensor space model. The modeling and visibility assessment are computationally demanding. A common method to reduce the computational requirements is to group grid points into geometrical shapes such as rectangles in 2D space [29] or polygons [73,74]. The search strategies used for camera placement are heuristic in nature and seek to establish the near-optimal placement and/or configuration of the camera network. The optimization variables include the discrete ranges of locations, orientations, and camera types; new states are generated by substituting a unique set of optimization variables [73–75]. There are various formulations for solving the camera placement problem using artificial intelligence search strategies; the variations arise from the choice of the initial conditions and the objective function [24].
For instance, [18] introduced a greedy strategy in which the algorithm maximizes the set of visible control points for each camera. Adequate coverage of a control point is achieved if the point falls in the visible portion of the field of view. The algorithm determines the near-optimal solution of the placement problem by searching for the cameras that cover the most control points. In [24], both greedy and greedy heuristic search algorithms are presented to solve two problem formulations known as the MIN and FIX problems. In both cases, the greedy algorithm proceeds with the adequate-coverage check and computes the cost. Based on the computed cost, a camera is chosen, and the set of uncovered points is updated along with the set of available cameras. In exchanging the camera variables, the algorithm looks for the maximum achievable cost, and the output is the near-optimal camera placement [24]. This chapter serves two main purposes: first, to present the search techniques commonly used in solving the camera placement problem, further explaining the various artificial intelligence search strategies applied to it; and second, to assist new researchers in the field by providing a comprehensive step-by-step analysis of these techniques. The chapter concentrates on three main approaches: generate and test, uninformed search, and hill climbing algorithms. These approaches are considered with a random restart setting. The explanation of the various search techniques is given in the form of an empirical study in which two maps are used. The first map is an artificial map that describes a city complex, and the second map is a real-world map that shows part of a university campus. Moreover, the chapter models three commercial surveillance cameras to perform placement optimization.
Finally, the placement results of these strategies are critically compared in terms of their performance and efficiency.


7.3 Modeling the visual sensors

Regardless of how the optimization method is performed, a complete camera coverage table is expected to be generated in advance based on the available cameras and the environment representation. These coverage instances are required for the coverage cost calculation (i.e., the objective function). From an artificial intelligence perspective, the complete table of



camera coverage instances represents the backbone of the search space. Thus the coverage table must be calculated and stored prior to the optimization computations [18,27]. In that context, calculating the coverage table requires determining the modeling dimension and the visibility analysis to be used. The following subsections explain the modeling of the sensor space, the modeling of the camera coverage, and the assessment of camera visibility.

7.3.1 The sensor space modeling

In the camera placement problem, the commonly used sensor space representation is a digitized 2D space. 2D planes are preferred due to the simplicity of deriving the model and the low computational requirements. The sensor locations are assumed to be a finite set of potential camera positions [18]. Considering every point of the sensor space as a potential camera position is impracticable due to the tremendous growth in computational requirements; researchers therefore select a subset of points to be used as potential camera sites. Two top-view maps are utilized in the simulation: the first is an artificial map that depicts a city complex, and the second is a real-world map that describes part of a university campus. The equivalent binary masks of these input maps are used for visibility analysis [76,77]. The input maps are digitized, with each pixel representing a grid point in the area of interest. Six and eight positions are chosen as potential camera locations for the first and second maps, respectively. Furthermore, three CCTVs are used in modeling the visual sensors' coverage; each camera has a unique focal length and vertical and horizontal angles. Table 7.1 shows the specifications of the three CCTV cameras used in the empirical study, while Fig. 7.4 shows the original maps (A) and the respective binary masks (B).
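As a concrete sketch of this representation, the snippet below digitizes a small top-view map into a binary mask (1 = free space, 0 = obstacle) and fixes a handful of candidate camera sites. The map and the site coordinates are invented here purely for illustration; in the chapter's study they come from the two input maps and the installation survey:

```python
# A toy 8x8 top-view map: '#' marks an obstacle (e.g., a building), '.' free space.
ascii_map = [
    "........",
    "..##....",
    "..##....",
    "........",
    ".....#..",
    ".....#..",
    "........",
    "........",
]

# Binary mask: each pixel becomes a grid point of the sensor space.
mask = [[0 if ch == "#" else 1 for ch in row] for row in ascii_map]

# A finite subset of grid points chosen as potential camera sites
# (hypothetical positions chosen for this sketch).
candidate_sites = [(0, 0), (0, 7), (7, 0), (7, 7)]

free_points = sum(cell for row in mask for cell in row)
print(f"{free_points} coverable grid points, "
      f"{len(candidate_sites)} candidate camera locations")
```

Restricting the candidate sites to a small subset of the 64 grid points is exactly the pruning step the text describes: treating every pixel as a potential camera position would make the state space unmanageable.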

7.3.2 The camera coverage modeling

To carry out the empirical study, the chapter assumes that the sensors are positioned in 3D space to observe a 3D floorplan [28]. The orientation of the camera is described by its yaw and pitch angles; the roll angle is neglected and assumed to be always zero due to its minimal effect on the viewed scene. The layout of the camera field is bounded in the central direction by two arcs. The camera height, orientation, and range are the parameters required to establish the boundaries of the field of view (i.e., an annular sectorial structure) [77]. A more in-depth discussion of the modeling of the geometrical coverage of the camera can be found in [73,74,77]. The ability of the camera model to reproduce an accurate field rests on (1) a precise camera vision range, obtained by applying the object-to-screen display height ratio; and (2) an accurate reproduction of the invisible zone (i.e., the sector-shaped area that falls beneath the camera). Fig. 7.5A, B outline the relation among the camera measurements, the computed camera range, and the produced layout of the field of view. The following equations provide the underlying mathematical concepts [21]:

α = 2 × arctan(wI / (2 × f))    (7.1)

where the layout of the field is represented by F; f is the sensor focal length; α is the horizontal angle; r is the field extension; wI and hI represent the image sensor's width and height, respectively; hO stands for the object height; and p represents the satisfactory object height percentage. A satisfactory human figure signature occupies a 3 × 9 pixel window.

TABLE 7.1 The specifications of the simulated cameras.

Parameter                  Camera 1     Camera 2     Camera 3
Image sensor type          1/3" CMOS    1/3" CMOS    1/3" CMOS
Sensor size width (mm)     3.30         2.90         2.30
Sensor size height (mm)    1.50         1.80         2.00
Focal length (mm)          2.80         3.60         6.00
Horizontal angle (deg)     100          78           42
Vertical angle (deg)       56           53           37
Screen resolution width    1280         1280         1305
Screen resolution height   720          720          1049
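To make the field-of-view arithmetic concrete, the snippet below evaluates Eq. (7.1) and the range expression r = (f × hO) / (hI × p) implied by the symbol definitions in the text, for the first simulated camera in Table 7.1. The 1.7 m object height and the 10% screen-height ratio are assumptions made for this sketch, not values from the chapter:

```python
import math

# Camera 1 from Table 7.1 (sensor width/height in mm, focal length in mm).
w_I, h_I, f = 3.30, 1.50, 2.80

# Eq. (7.1): horizontal angle of view from sensor width and focal length.
alpha = 2 * math.atan(w_I / (2 * f))
print(f"horizontal angle: {math.degrees(alpha):.1f} deg")

# Range estimate: the distance at which an object of height h_O still
# fills fraction p of the image height (assumed illustrative values).
h_O = 1700.0   # assumed object (person) height in mm
p = 0.10       # assumed satisfactory screen-height ratio
r = (f * h_O) / (h_I * p)
print(f"effective range: {r / 1000:.1f} m")
```

Note that the pinhole-model angle computed from the sensor geometry need not match a datasheet's quoted field of view exactly; lens design and rounding can account for the difference.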



FIGURE 7.4 Top views of the input maps and their respective binary masks. (A) Artificial top view of a city complex. (B) Real-world top view of a university campus complex. (C, D) The respective binary masks.

A ratio of the monitor display height to the height of a viewable object is used here to assess the effective range of the modeled cameras [61,77]. The invisible regions within the produced camera field of view can be computed using the following formulas:

Ca = Pa − (1/2) × Va    (7.5)

rs = h × arctan(Ca)    (7.6)

where Va refers to the vertical angle of the field of view, Pa stands for the pitch angle, Ca is the angle confined between Pa and Va, rs is the radius of the shadowy sector, and h is the elevation of the camera.

7.3.3 The analysis of camera visibility

Analyzing the visibility of the camera requires a systematic assessment of all the grid points contained in the camera field of view. Thus the field of view must be digitized and then tested with respect to the topology of the given scene. An intersection with the boundary of any obstacle in the scene renders the corresponding line in the field of view invisible. Bresenham's line drawing algorithm is one of the approaches used for digitizing lines in computer graphics, and this line discretization routine is commonly used to discretize the produced field of view following a straight-line scheme [77] (refer to Fig. 7.6). The topological features of the scene as well as the boundaries of the map define the visibility limits. The camera visibility is evaluated using the pixels' intensities, and the occurrence of an intersection marks the particular ray as invisible. Furthermore, to eliminate nondiscretized points, the line tracing result is post-processed using a morphological operation. Fig. 7.6 shows the pseudocode expressing the steps of tracing a ray of image elements via Bresenham's line drawing algorithm.
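A minimal Python sketch of this visibility test, using the integer form of Bresenham's algorithm to trace a ray through a binary mask (the small mask below is invented for illustration; the morphological post-processing step is omitted):

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line: all grid points from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    points = []
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

def visible(mask, cam, target):
    """A ray is visible only while every traversed pixel is free (mask value 1)."""
    return all(mask[y][x] for x, y in bresenham(*cam, *target))

# Toy 5x5 binary mask with one obstacle pixel blocking the diagonal.
mask = [[1] * 5 for _ in range(5)]
mask[2][2] = 0  # obstacle
print(visible(mask, (0, 0), (4, 0)))  # True: clear ray along the top row
print(visible(mask, (0, 0), (4, 4)))  # False: the diagonal hits the obstacle
```

Tracing one such ray per boundary point of the discretized field of view yields the visible portion of the camera's coverage instance.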



FIGURE 7.5 The camera range and parameters. (A) The visibility extent versus camera parameters (generated based on [21]). (B) The visibility extent versus field layout.

F = r × (wI / (2 × f))    (7.2)

r = (f × hO) / (hI × p)    (7.3)

F = (wI × hO) / (2 × (hI × p))    (7.4)

FIGURE 7.6 Bresenham’s line drawing algorithm, which is used for analyzing the visibility of the camera in modeling the camera coverage.


7.4 Solving the camera placement problem using artificial intelligence search

Artificial intelligence search methods are known to be very fast, but there is no guarantee that they find the best approximate solution. The variability of the solutions is attributed to the search methodology and to the type of cost function used to obtain the solution. In formulating a solution to the camera placement problem, the breadth of the search state space is determined by the combinations of the orientation angles and camera types. For example, assuming that the orientation angle set



FIGURE 7.7 The concept of the breadth and depth of the searching state space when formulating the camera placement problem to be solved using artificial intelligence search methods.

contains 36 angles and the camera type set contains 3 cameras, the breadth of the state space will be the product of those two factors, i.e., 36 × 3 = 108 states. On the other hand, the depth of the search state space is determined by the number of potential camera locations. Assuming that the number of potential camera locations is equal to L, the depth of the state space will also be equal to L. Hence, the total number of possible solutions will be (O × K)^L, where O and K are the numbers of elements in the orientation angle and camera type sets, respectively. Fig. 7.7 illustrates the concept of generating the search state space based on the various parameters used in the optimization (i.e., camera location, orientation, and type). The order of the camera locations has an undeniable impact on the produced solution: the best solution produced for a particular camera positioning order is not necessarily the best approximate solution available. Many search algorithms adopt a randomized restart mechanism to overcome the impact of the order of the camera locations; the concept of the randomized restart is crucial for obtaining better solutions from the state space. The chapter focuses on three main approaches:

1. Generate and test
2. Uninformed search
3. Hill climbing strategies

These strategies are evaluated under a randomized restart configuration, which results in a solution set. The final solution is then selected from the generated solution set, and the selection criterion used is the amount of covered area.
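The breadth/depth arithmetic above is easy to verify numerically (36 orientations and 3 camera types as given in the chapter; L = 6 locations as in the first map):

```python
O, K = 36, 3   # sizes of the orientation angle set and the camera type set
L = 6          # potential camera locations (first map)

breadth = O * K       # states per location (the breadth of the state space)
total = breadth ** L  # complete placements: 108**6 = 1,586,874,322,944
print(breadth, total)
```

Roughly 1.6 trillion complete placements for even this modest instance is why exhaustive search is ruled out and heuristic search is used instead.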

7.4.1 Generate and test algorithm

This algorithm generates complete candidate solutions in a depth-first search procedural scheme. The generate and test algorithm can be implemented on both a search tree and a search graph. The basic steps for performing the generate and test algorithm are as follows:

1. Establish a possible solution (establishing a point, a path, etc.). This step depends heavily on the nature of the problem to be solved.
2. Verify the obtained solution (i.e., evaluate the solution with respect to the set of satisfactory goal states).
3. If a satisfactory solution is located, quit; otherwise, go to step 1.

For the camera placement problem, the generate and test algorithm generates a complete solution in the first step. The generation of the complete solution relies on randomly selecting a sequence of integers to represent the chosen location, orientation, and camera type. The second step then evaluates the generated solution in terms of the amount of coverage. Finally, the third step determines whether the generated solution is fit to be considered a final solution; otherwise, the algorithm generates a different solution by repeating the procedure from step 1. Thus the algorithm terminates either by finding an acceptable near-optimal solution or by reaching the maximum number of iterations allowed. The acceptable solution is selected as the one with the maximum coverage in the solution set. Fig. 7.8 shows the outcomes of the solution obtained using the generate and test algorithm.
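The generate-evaluate-repeat loop can be sketched as follows. The coverage table here is a toy random lookup standing in for the precomputed coverage instances, and all sizes and names are illustrative, not the chapter's actual data:

```python
import random

random.seed(42)

LOCATIONS = range(4)      # toy problem: 4 sites, 8 orientations, 2 camera types
ORIENTATIONS = range(8)
CAMERA_TYPES = range(2)

# Stand-in for the precomputed coverage table: (l, o, k) -> covered area.
coverage = {(l, o, k): random.randint(0, 100)
            for l in LOCATIONS for o in ORIENTATIONS for k in CAMERA_TYPES}

def generate():
    """Step 1: build a complete candidate solution at random."""
    return [(l, random.choice(ORIENTATIONS), random.choice(CAMERA_TYPES))
            for l in LOCATIONS]

def evaluate(solution):
    """Step 2: score the candidate by its total coverage."""
    return sum(coverage[s] for s in solution)

# Step 3: repeat within an iteration budget and keep the best candidate found.
best = max((generate() for _ in range(500)), key=evaluate)
print(evaluate(best))
```

Because each candidate is scored as a plain sum of per-camera coverage, overlap between cameras is ignored, mirroring the locally evaluated objective the chapter attributes to this approach.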



FIGURE 7.8 The outcomes of the generate and test algorithm. (A, B) The deployed cameras’ layout. (C, D) The local coverage per camera location. (E, F) The accumulated coverage per camera location.

As can be seen in Fig. 7.8A, B, each camera type layout is shown in a different color, where light blue, brown, and green refer to the first, second, and third cameras, respectively. The obtained coverage per camera location is shown in Fig. 7.8C, D, while Fig. 7.8E, F show the obtained global coverage over the camera locations.

7.4.2 Uninformed search

The uninformed search, also called blind search, works with no information about the search space. Uninformed search is applicable when the goal is to locate a solution that maximizes a specific criterion among several potential alternatives. The uninformed search distinguishes the goal state from all the others only locally. Accordingly, the search algorithm moves between the states in the search space, and by finding local goals, candidate solutions can be found. This search algorithm is computationally efficient; however, locating the near-optimal solution is not guaranteed. In our case, we assume that the only available information is the amount of coverage of the local states.


Eq. (7.7) illustrates the formulation of the uninformed search algorithm:

C_L = Σ_{l=1}^{L} max_{k=1..K, o=1..O} [ Σ_{i=1}^{M} Σ_{j=1}^{N} c_{l,k,o,i,j} ]    (7.7)

where c is the instance of the generated field of view established at l (i.e., position), k (i.e., camera type), and o (yaw angle). The term C_l refers to the previously accumulated satisfactory coverage, which includes all the locations from l = 1 up to l − 1. Accordingly, when l = L, C_L yields the acceptable coverage established by the optimization. The basic steps for performing the blind search algorithm are as follows:

1. Select a camera position randomly to start the search.
2. Check if the selected position has been explored before; if yes, select another position; otherwise, store the selected position in the explored position set.
3. At the selected camera position, iterate over the local states drawn from the local neighborhood (i.e., camera orientation and type), typically by evaluating the amount of coverage.
4. Once the maximum coverage at that position is found, repeat steps 1 to 3.
5. Upon reaching the last position, return the solution.

For the camera placement problem, the blind search algorithm generates a complete solution in a stage-wise manner. The stages determine the depth of the search space and are characterized by the camera location set. The generation of the complete solution depends on finding the maximum amount of coverage per camera position. The same procedure is repeated for each new camera location until the last location in the location set is reached. Thus the final solution is formed from the combination of the obtained local solutions. Fig. 7.9 shows the outcomes of the solution obtained using the blind search algorithm. In Fig. 7.9A, B, each camera type layout is shown in a different color, while Fig. 7.9C, D, E, and F show the obtained local and global coverage over the camera locations. As can be seen from Fig. 7.9, the blind search utilized camera types one and two only and ignored camera type three.
The decision to ignore a particular camera type rests solely on the contribution of that camera type to the generated solution. This kind of information assists in building purchase plans for future systems.
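A self-contained Python sketch of this stage-wise blind search: at each location, visited in a randomized order, the locally best orientation/camera pair is kept. The toy coverage table and all sizes are invented for illustration:

```python
import random

random.seed(7)

LOCATIONS = list(range(4))
ORIENTATIONS = range(8)
CAMERA_TYPES = range(2)

# Stand-in coverage table: (l, o, k) -> covered area at that local state.
coverage = {(l, o, k): random.randint(0, 100)
            for l in LOCATIONS for o in ORIENTATIONS for k in CAMERA_TYPES}

def blind_search(order):
    """Visit locations in the given order; at each one, keep the locally
    best orientation/camera pair (the inner max of Eq. (7.7))."""
    solution = []
    for l in order:
        best_local = max(((l, o, k) for o in ORIENTATIONS for k in CAMERA_TYPES),
                         key=coverage.__getitem__)
        solution.append(best_local)
    return solution

order = LOCATIONS[:]
random.shuffle(order)        # random restart: a fresh visiting order per iteration
solution = blind_search(order)
print(sum(coverage[s] for s in solution))
```

Note that because each location's choice depends only on its own local coverage, shuffling the visiting order cannot change the result, which is consistent with the chapter's later observation that randomized restarts do not affect the uninformed search outcome.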

7.4.3 Hill climbing strategy

Hill climbing greedy strategies are used in solving the camera placement problem due to their simplicity and computational efficiency [18,60]. The hill climbing formulation looks at the problem in a stage-wise manner and seeks a set of local maxima that together yield a global near-optimal solution [18,24,30,78]. The problem is thus decomposed into a set of subproblems, and an iterative procedure places one camera at a time [53]. The algorithm constructs partial solutions by solving the individual subproblems, and the global solution is obtained by combining all the partial solutions; the challenge is to maximize the sum of the local states over the location set L. Here the solution obtained at any location n < L is considered an approximate near-optimal solution at that location. The following equation formulates the solution to the camera placement problem using a hill climbing algorithm:

C_L = Σ_{l=1}^{L} max_{k=1..K, o=1..O} [ c_{l,k,o,i,j} ⨂ C_{l−1} ]    (7.8)

where the produced field c is the result of substituting the location l, camera type k, and orientation angle o. The obtained coverage is referred to as C_L. At each location, the solution is obtained using an OR operator; that is, the overlap between cameras is excluded from the computation of the current solution, and the algorithm seeks only to maximize the primary coverage of the camera network. The basic steps for performing a hill climbing search algorithm are as follows:

1. Select a camera position randomly to start the search.
2. Check if the selected position has been explored before; if yes, select another position; otherwise, store the selected position in the explored position set.
3. At the selected camera position, iterate over the local states drawn from the local neighborhood (i.e., camera orientation and type), typically by evaluating the amount of global coverage gain.



FIGURE 7.9 The outcomes of the blind search algorithm. (A, B) The deployed cameras’ layout. (C, D) The local coverage per camera location. (E, F) The accumulated coverage per camera location.

4. Once the maximum coverage gain at that position is found, repeat steps 1 to 3.
5. Upon reaching the last position, return the solution.

For the camera placement problem, the hill climbing algorithm generates a complete solution in a stage-wise manner. The generation of the complete solution depends on finding the local states that maximize the global coverage gain. The same procedure is repeated for each new camera location until the last location in the location set is reached. Thus the final solution is formed from the combination of the obtained stage-wise solutions. Fig. 7.10 shows the outcomes of the solution obtained using the hill climbing algorithm. In Fig. 7.10A, B, each camera type layout is shown in a different color, where light blue, brown, and green refer to the first, second, and third cameras, respectively. Fig. 7.10C, D, E, and F show the obtained local and global coverage over the camera locations.
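A sketch of the stage-wise hill climbing step, where each coverage instance is modeled as a set of grid points so that the OR operator of Eq. (7.8) becomes a set union and only the marginal (nonoverlapping) gain counts. The coverage sets and sizes are invented toy data:

```python
import random

random.seed(3)

LOCATIONS = range(3)
STATES = range(6)   # orientation/type combinations per location (toy count)

# Stand-in coverage table: each local state covers a random set of grid points.
grid = [(x, y) for x in range(10) for y in range(10)]
cover = {(l, s): set(random.sample(grid, 25)) for l in LOCATIONS for s in STATES}

covered = set()     # C_{l-1}: coverage accumulated over earlier stages
placement = []
for l in LOCATIONS:
    # Choose the state with the largest *global* coverage after the union,
    # i.e., the largest marginal gain over what is already covered.
    best_state = max(STATES, key=lambda s: len(covered | cover[(l, s)]))
    covered |= cover[(l, best_state)]
    placement.append((l, best_state))

print(len(covered), "of", len(grid), "grid points covered")
```

The union-based objective is what distinguishes this strategy from the blind search sketch: a state whose coverage mostly overlaps the already-covered region scores poorly even if its own coverage is large.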



FIGURE 7.10 The outcomes of the hill climbing algorithm. (A, B) The deployed cameras’ layout. (C, D) The local coverage per camera location. (E, F) The accumulated coverage per camera location.


7.5 Further discussion

This chapter carries out an empirical study using two maps as floor plans, with six and eight predefined camera locations in the first and second maps, respectively. The first map is an artificial map that depicts a city complex, and the second map is a real-world map that shows part of a university campus. The use of 2D maps as input to the coverage optimization algorithms is realized by obtaining a top-view image map of the area of interest. Obtaining the input maps is followed by binarization and initialization of the camera locations. Moreover, three commercial surveillance cameras are modeled and used throughout the optimization computations. Furthermore, 36 angles form the orientation angle set during the simulation. Finally, Bresenham's line drawing algorithm is used for the camera visibility assessment.



The chapter reviews three common search algorithms used in solving the camera placement problem, namely generate and test, uninformed search, and hill climbing strategies. These strategies are evaluated under a randomized restart configuration. The random restart aims to find the best possible solution from the solution set by allowing a randomized selection of the camera locations. Thus a random order of the potential camera locations is used in each iteration to produce a different solution, and the simulation is carried out over 20 iterations (i.e., altering the initial order of the camera location set in each one). In this section, the results obtained by the various search techniques are further discussed and evaluated in terms of the efficiency and performance of each algorithm.

7.5.1 The efficiency of the algorithms

Based on the conducted empirical study, the hill climbing approach with a globally assessed objective function and a random restart setting obtains higher coverage than the generate and test and uninformed search methods. The reason is the ability of the hill climbing algorithm to construct a global objective function, as opposed to the locally evaluated objective functions used by generate and test and uninformed search. Fig. 7.11 shows the solutions obtained over the simulation iterations for the hill climbing algorithm, the uninformed search, and the generate and test search. As can be seen from Fig. 7.11, the randomized restart does not affect the outcome of the uninformed search; this is due to its focus on locating local maxima rather than using a globally computed objective function. Although the graph shows a high coverage gain obtained by the uninformed search approach, the hill climbing algorithm achieves the maximum coverage gain among the compared benchmark search techniques. On the other hand, the generate and test approach produces minimal coverage gain due to its poor objective function, which neglects the impact of both local and global goals.

FIGURE 7.11 The obtained solutions over the simulation iterations for the hill climbing algorithm, uninformed search, and generate and test search. (A) The first case study. (B) The second case study.



FIGURE 7.12 The time elapsed for the hill climbing search, generate and test search, and uninformed search. (A) Hill climbing algorithm for the first case study. (B) Generate and test search and uninformed search for the first case study. (C) Hill climbing algorithm for the second case study. (D) Generate and test search and uninformed search for the second case study.

7.5.2 The performance of the algorithms

Fig. 7.12 shows the elapsed time in seconds for the hill climbing search, the generate and test search, and the uninformed search. The hill climbing results are shown in (A, C), and the results of the generate and test search and uninformed search algorithms are shown in (B, D), for the first and second case studies, respectively. The reported poor performance of the hill climbing algorithm in terms of elapsed time is attributed to the size of the explored state space. The generate and test algorithm explores only L states to produce a single complete solution, which makes its time complexity linear in L (i.e., the number of potential locations). The uninformed search explores only L × O × K states, where L is the number of potential locations, O the size of the orientation angle set, and K the size of the camera type set; the blind search therefore runs in time linear in L × O × K. On the other hand, at each iteration, the global search explores L² × (O × K) states to locate the maximum coverage. This explains the high computational time, which is a serious drawback in terms of algorithm design. Practically, however, the high computational time has a limited impact on the robustness and reliability of the method due to the offline nature of surveillance system planning.



This chapter carries out an empirical study using two maps as floor plans, with six and eight predefined camera locations, respectively. The first map is an artificial map depicting a city complex, and the second is a real-world map showing part of a university campus.

Applications of Computational Intelligence in Multi-Disciplinary Research

The implementation of 3D maps as input to the coverage optimization algorithms is realized by obtaining a top-view image map of the area of interest, and binary masks of the input maps are used in the coverage computation and visibility analysis. Moreover, three commercial surveillance cameras are modeled and used throughout the optimization computations, and 36 angles form the orientation angle set during the simulation. Finally, Bresenham's line drawing algorithm is used for the camera visibility assessment. The chapter reviews three common search algorithms used in solving the camera placement problem, namely (1) generate and test, (2) uninformed search, and (3) hill climbing. These strategies are evaluated under a randomized restart configuration, which aims to find the best possible solution from the solution set by allowing a randomized selection of the camera locations; the chapter also illustrates the impact of the randomized restart on the obtained solutions. An analytical comparison reveals that the hill climbing algorithm produces a satisfactory near-optimal solution in a realistic time frame, owing to the type of objective function used as well as the number of states explored to find the solutions, whereas the generate and test and uninformed search strategies fail to produce competitive solutions. The hill climbing algorithm shows poor performance in terms of execution time due to the size of the explored state space; however, from a practical point of view, this high computational time has limited impact on the robustness and reliability of the method due to the offline nature of surveillance system planning.
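The visibility assessment mentioned above relies on Bresenham's line drawing over a binary mask. The following minimal sketch shows one way such a check can be realized; the function names (`bresenham`, `visible`) and the convention that 1 marks an occluding cell are assumptions for the example, not the authors' code.

```python
def bresenham(x0, y0, x1, y1):
    """Grid cells traversed between two points (Bresenham's line algorithm)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def visible(mask, cam, target):
    """Line-of-sight test on a binary mask (1 = occupied/occluding cell).

    The camera and target cells themselves are excluded from the test.
    """
    return all(mask[y][x] == 0 for (x, y) in bresenham(*cam, *target)[1:-1])

# A 4 x 3 binary mask with a wall across the middle row.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(visible(mask, (0, 0), (3, 0)))  # True: clear along the top row
print(visible(mask, (0, 2), (3, 0)))  # False: the wall blocks the sight line
```

Running this check for every mask cell within a candidate camera's field of view yields the per-configuration coverage sets that the search algorithms then optimize over.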



Chapter 8

Nanotechnology and applications Kanika Dulta, Amanpreet Kaur Virk, Parveen Chauhan, Paras Bohara and Pankaj Kumar Chauhan Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India



The transistor was certainly one of the greatest inventions of the past century. The evolution and miniaturization of semiconductor devices drove major advances in computing, enabling the development of personal computers, mobile devices, and high-performance computers. Nowadays, in various fields of science and technology, carrying out projects without the aid of computational tools is unthinkable. The design of automobiles, ships, and aircraft, for example, uses computing power for everything from the design of small parts to the simulation of the aerodynamic behavior of the entire assembly. Many more examples could be quoted here, but that is not the intent of this review. More recently, a new and emerging field of research has been gaining ground: computational nanotechnology, or computational nanoscience, which applies computational techniques and algorithms to the progress of nanoscience and nanotechnology. Nanotechnology and nanoscience have been reported to have great potential to provide benefits in several areas, such as the formulation of novel drugs; water and air remediation; improved information and communication technologies; efficient energy production; and lighter, more durable materials. It is believed that the Romans fabricated glasses containing nanoscale metal particles as early as the 4th century CE, and the presence of nanoparticles such as gold and silver produces the multitude of distinct and beautiful colors in the stained glass windows of medieval cathedrals. Nanotechnology offers a promising future for medical research, with advanced diagnostic medical sensors, improvement of the immune system with medical nanodevices, tissue regeneration, and the regulation of aging. Structures and devices at the nanometer scale have been found promising for the advancement of medicine, such as advanced biosensors, smart drugs, and immunoisolation therapies.
In nanomedicine and nanostructured materials, for example, nanoparticles tagged with quantum dot nanocrystals for use as biological markers, and smart drugs that become active only under certain conditions, are being studied [1]. However, it was only in recent decades that sophisticated tools were developed to investigate and manipulate matter at the nanoscale, allowing research to advance and leading to a significant increase in the number of discoveries and technological developments. The inventions of the scanning tunneling microscope and the atomic force microscope, in 1981 and 1986, respectively, were two milestones of these new scientific and technical developments. In addition to being able to move atoms and molecules, these tools use a nanometric tip to generate images of a surface with atomic resolution, allowing the fabrication of rudimentary nanostructures. Associated with the development of nanotechnology and nanoscience is the development of computational models of physical and chemical systems that enable scientists to simulate potential nanomaterials, devices, and applications. Furthermore, increased computing power and 3D visualization techniques have made such simulations ever faster and more accurate. More recently, mainly due to the massive use of the Internet, there has also been a large increase in the amount of scientific information available (journals, magazines, articles, websites, etc.). All of this contributes significantly to the growing importance of computational nanotechnology. Some of these support systems for nanotechnology apply computational intelligence techniques, an approach known as “intelligent computational nanotechnology.” Even so, it is possible to explore the use of computational intelligence techniques further to promote the advancement of nanotechnology and nanoscience as new areas of research are confronted.
Computational intelligence offers different techniques inspired by nature, such as genetic algorithms (GAs), artificial neural networks (ANNs), and fuzzy systems, among others, employed in the development of intelligent systems. GAs are highly parallel search and optimization methods inspired by the principles of Darwinian natural selection and genetic reproduction, which favor the fittest individuals: they live longer and are therefore more likely to reproduce. Neural networks, in turn, are inspired by the structure and behavior of the human brain and have been used on a diversity of problems. Combining these intelligent techniques with nanoscience and nanotechnology will shorten the time and expense required by experts to design effective tools, many of which are now unimaginable. However, such techniques have not yet been much investigated in support of nanoscale device projects. The expectation is that they can provide highly useful tools, thus enabling the rapid development of this new research area.

Applications of Computational Intelligence in Multi-Disciplinary Research. DOI: © 2022 Elsevier Inc. All rights reserved.


8.2 Nanoscience and nanotechnology

Nanoscience is the study and control of matter at the nanoscale, whereas nanotechnology is the application of nanoscience, building on knowledge of how atoms and molecules organize into complex structures at the nanoscale. Nanotechnology deals with structures of sizes ranging from 1 to 100 nm. This area of research is concerned with structures of matter having dimensions on the order of one billionth of a meter. Micro- and macroscopic properties of substances can be changed by altering the configuration of nanomaterials; therefore charge mobility, magnetic effects, melting point, and other features can be changed without changing the chemical composition of the material [2]. The beginning of contemporary nanoscience and nanotechnology can be traced to the lecture delivered at the American Physical Society meeting at Caltech on December 29, 1959, by physicist Richard Feynman. In this lecture, titled “There’s Plenty of Room at the Bottom,” he spoke about how one day atoms and molecules could be manipulated and eventually shaped into whatever someone desired them to be. He then addressed the possibility of developing incredibly small machines that would act as tiny tools. This idea was once considered completely radical; we now see nanotechnology as a very real and promising technology for the upcoming future [3]. A crucial impetus for the growth of this field was provided in 1981 by Binnig and Rohrer when they invented the scanning tunneling microscope. Earlier milestones include the electron microscope, developed by Knoll and Ruska in 1931, and Einstein’s 1905 estimate of the diameter of a sugar molecule as about 1 nm.
The advancement of the scanning tunneling microscope provided important tools to visualize, characterize, and manipulate nanostructures. The evolving field of nanoscience and nanotechnology enables the invention of the computational and experimental tools required to design and produce nanoscale structures such as nanowires, quantum dots, nanocatalysts, and nanosensors (Fig. 8.1).

FIGURE 8.1 Tools of nanoscience and nanotechnology.


8.3 Computational nanotechnology

Computational nanoscience and nanotechnology employ theoretical, mathematical, and computer simulation techniques to examine known nanomaterials and to discover new ones. Computational techniques also aid in the analysis, prediction, and measurement of the molecular properties and structure of nanostructures. The field comprises the application and development of algorithms and computational systems used to aid the development of nanotechnology. Due to the massive use of the Internet, there has been an increase in computing power, in visualization techniques, and in the amount of scientific information available (journals, magazines, articles, websites, etc.).

Nanotechnology and applications Chapter | 8

Computational modeling is a great platform for working around experimental limitations. Within computational research, each parameter can be handled separately, and the mechanism responsible for an experimental outcome can be identified. With a computational analysis, interactions can be simulated under various conditions that are not really possible to test in the laboratory [4]. All these development sectors contribute to computational nanotechnology. The growth of and developments in computational nanotechnology could thus play an extremely vital role in the development of nanotechnology-based novel materials, structures, devices, and applications. It is a vital tool for analyzing the physics and chemistry of nanoparticles. A hypothesis is formulated after performing a simulated experiment to describe the observed findings, and it is then tested by conducting a laboratory experiment. If the experimental results and the theoretical predictions agree, the hypothesis can be accepted. Theoretical approaches can also be used to analyze unexpected outcomes from experimental work, which in turn contributes to the emergence of new theories [3] (Fig. 8.2). Computational approaches may also be useful for tackling the potential risks associated with nanomaterials: outcomes from theoretical calculations can offer insights for researchers working in this area, and the expertise gained from computational research concerning nanoparticles’ interactions with biological structures would be helpful in developing algorithms to assess the severity of toxicity in a wide-ranging natural ecosystem. Computational experiments are also valuable in understanding the exact nature of interparticle interactions, the interface arrangement, and the processing of arrays and superstructures, which are challenging to analyze experimentally [5].

FIGURE 8.2 Subfields of computational nanotechnology.

8.3.1 Molecular modeling

According to Simpson and Weiner [6], a “model” is a simplified or idealized description of a system or process, often in mathematical terms, created to facilitate calculations and predictions. Molecular modeling therefore refers to methods and techniques that imitate the behavior of molecules or molecular systems. Computational techniques have revolutionized molecular modeling; without computers, many of the calculations would be impossible. These methods and techniques can be categorized into four distinct subgroups (Fig. 8.3). Three of the methods in Fig. 8.3, namely semiempirical, quantum, and molecular mechanics, deal with calculations on static molecules, that is, independent of time. Time-dependent modeling, called molecular dynamics (MD), is classified as a separate subgroup.

FIGURE 8.3 The molecular modeling techniques.

Molecular mechanics

Molecular mechanics applies the laws of classical physics to describe the structures and properties of molecules. Various molecular mechanics methods are available in many computer programs, such as MM3, HyperChem, Quanta, Sybyl, and Alchemy [7]. Unlike quantum methods, these methods do not deal directly with the electrons in a molecular system. Instead, the computation is performed based on the interactions between nuclei; electronic effects are implicit in the force fields through their parameters. These approximations make molecular mechanics techniques computationally cheap and allow them to be employed for large systems containing many thousands of atoms [8,9]. The energy resulting from a molecular mechanics calculation is obtained as the sum of several terms. This set of terms, together with the atom types and parameter sets used, forms the force field. As each family of molecules has different properties, the creation of force fields is an open research area, and each group can create a new force field for particular calculations.

Quantum methods

The quantum methods use the laws of quantum mechanics instead of the laws of classical physics as the basis for calculations. These methods are also known as ab initio methods. Quantum mechanics states that the energy and other properties of a molecule can be obtained by solving the Schrödinger equation. However, the exact solution of the Schrödinger equation is computationally feasible only for very simple systems. Ab initio methods provide more precise descriptions of the quantum mechanical properties of materials, although the tractable system sizes are confined to only a few hundred atoms. Quantum chemical methods are characterized by their various mathematical approximations. One of the first methods developed was the Hartree-Fock (HF) method [10]; because of its large number of approximations, this approach poses many restrictions. The literature includes a variety of strategies that have arisen to overcome the shortcomings of the HF method, known as post-HF methods. Typically, these methods give better results but are computationally more expensive; examples include CI, Full CI, MP2, MP3, and MP4 [10]. Another important quantum method is density functional theory (DFT). This method is derived from research in quantum mechanics from the 1920s, especially the Thomas-Fermi-Dirac model, and from Slater’s fundamental work in quantum chemistry in the 1950s.
The DFT method is based on the strategy of modeling electron correlation via functionals of the electron density. It originated from the work of Hohenberg and Kohn, published in 1964, which showed the existence of a unique functional that determines the ground-state energy and density exactly [11]. However, the theorem does not provide the form of this functional; subsequent work by Kohn and Sham showed how the functional could be split into parts that can be approximated.

Semiempirical

When the principles of ab initio molecular orbital calculations first appeared, it became apparent that the number of arithmetic operations necessary to analyze even simple structures was too large, so drastic further approximations were necessary. One alternative was to ignore all electrons except the valence ones. With these approaches, calculations of the orbitals became possible with the aid of parameters. In this case, parameters are required for all elements of the system, but not for bonds, angles, etc., as occurs in molecular mechanics methods. Methods of this kind are known as semiempirical. The first semiempirical method created was CNDO (complete neglect of differential overlap), whose parameters were obtained from ab initio calculations. Considering the large number of simplifications involved, the findings were quite good and could be computed quickly. A similar method, known as MINDO/3, was developed by Dewar [10,12], following a parameterization philosophy in which parameters were obtained from experimental data rather than ab initio calculations. Methods that follow this alternative include MNDO, AM1, RM1, and PM3, which can be found in a large number of commercial and academic programs. An important advantage of semiempirical methods is the speed of calculation, allowing the study of larger molecules and systems. There is another built-in advantage: because the methods are often parameterized from experimental data, some electron correlation effects are embedded in the calculations.
However, this can also be a disadvantage, because it is not known exactly which effects are included, making it difficult to improve these calculations systematically.

Molecular dynamics

Until now, the methods described here have treated molecules and molecular systems statically. However, it is often important to know how molecules interact with one another dynamically, that is, in motion. The method that

Nanotechnology and applications Chapter | 8


calculates the dynamics between molecules is known as molecular dynamics (MD) [10,12]. It generally refers to the situation in which the motions of atoms and molecules are treated through approximate finite-difference equations of Newtonian mechanics. The MD method typically implements an algorithm to find a numerical solution of a set of coupled first-order ordinary differential equations given by the Hamiltonian formulation of Newton's second law. The equations of motion are numerically integrated forward in finite time steps [2]. Classical molecular dynamics describes the atomic-scale dynamics of a system in which atoms and molecules move while simultaneously interacting with many of the other molecules and atoms in the vicinity, within a cutoff distance. It comprises methods and techniques to mimic the behavior of molecules or molecular systems; this form of molecular modeling has been greatly facilitated by the use of computers, and it applies the laws of classical physics to describe the structures and properties of molecules. In the Born-Oppenheimer approximation, the forces between atoms are determined at every successive step through mechanical calculations, while the dynamic motion of the ionic positions is governed by Newtonian or Hamiltonian mechanics. MD thus allows the time-dependent behavior of a molecular system (the physical motions of its atoms and molecules) to be assessed, and comprehensive knowledge of the fluctuations and structural changes of the molecules can be gained from MD simulations. Computer programs such as MM3, HyperChem, Sybyl, Alchemy, and Quanta are used for molecular mechanics.
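To make the finite-difference picture concrete, the sketch below integrates a single harmonic bond (a minimal one-term force field) with the velocity Verlet scheme, a standard integrator for Newton's equations of motion. The force constant, equilibrium length, and time step are illustrative values, not parameters from any published force field.

```python
import numpy as np

def harmonic_bond_force(x, k=500.0, r0=1.0):
    """Force and potential energy of one harmonic bond between two
    particles on a line: V(r) = 0.5 * k * (r - r0)**2, r = x[1] - x[0]."""
    r = x[1] - x[0]
    energy = 0.5 * k * (r - r0) ** 2
    f1 = -k * (r - r0)                   # force on particle 1 along +x
    return np.array([-f1, f1]), energy   # Newton's third law on particle 0

def velocity_verlet(x, v, m, dt, n_steps):
    """Integrate Newton's equations with the velocity Verlet scheme."""
    f, _ = harmonic_bond_force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / m) * dt ** 2   # position update
        f_new, _ = harmonic_bond_force(x)
        v = v + 0.5 * ((f + f_new) / m) * dt       # velocity update
        f = f_new
    return x, v

# Two unit-mass particles, bond stretched 10% beyond equilibrium;
# the total energy should be conserved by the integrator.
x, v = velocity_verlet(np.array([0.0, 1.1]), np.zeros(2),
                       m=1.0, dt=0.001, n_steps=5000)
```

A practical check on such an integrator is that potential plus kinetic energy stays close to its initial value over many oscillation periods.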

8.3.2 Nanodevice simulation

Besides the abovementioned molecular modeling methods, various simulation methods have also been developed exclusively for specific nanostructures and nanodevices. In this case, the researcher is concerned only with the behavior of the device under study, and a set of mathematical equations can provide the desired answers. Simulation techniques have become predictive in nature, and many new ideas and prototypes have first been suggested by modeling and simulation, followed by their fabrication, realization, or testing in experiments. Computer simulations can be useful for analyzing surface morphology and recognizing even the slightest change in atom location near edges, corners, surface steps, and irregularities. The interatomic dynamics of the interactions between nanoparticles and biological molecules are not well known; understanding the mechanisms of these interactions can assist the development and use of stable nanomaterials. Gusso et al. [13] used a set of equations to model charge transport in multilayer organic LEDs, an example of nanostructure simulation. The outcomes obtained by the developed simulator were confirmed by previously conducted experiments. The quantum-dot cellular automata (QCA) simulator developed at the University of Calgary is another example of nanodevice simulation [14]. Theoretical studies of light emission in core-shell quantum dots and of pressure and temperature effects on the optical properties of a narrow-bandgap quantum dot have also been reported [15,16]. Another example is the software developed by Johnson for the simulation of photonic crystals [17]. The program calculates the band structures and electromagnetic modes of periodic dielectric structures and can also be used to simulate resonant systems and waveguides. First-principles calculations are also used to examine the transport properties of nanostructures [18].
In addition to the above examples, a large number of other nanostructure and nanodevice simulators have been reported in the literature. One website provides a set of simulators of different types [19]; some of them can be used online, while others can be downloaded and run on personal computers. Other examples are Nextnano [20], ADEPT [21], and SimOLED.

8.3.3 Nanoinformatics

Technological advances over the past decades have led to the generation, storage, and distribution of large amounts of data. Several research areas have sought efficient alternatives to handle this large wave of information, in order to filter and work with the data that are really essential. As previously noted, the new technological revolution provided by nanoscience and nanotechnology already relies on the fundamental advancement of computing. In addition to the increased processing capacity of computers, researchers are able to model and simulate possible nanomaterials, nanodevices, and applications by designing computational models of chemical, physical, and biological systems. Beyond simulations and nanostructure modeling, computational support for nanoscience and nanotechnology can also be provided by an infrastructure for the storage, interpretation, manipulation, exchange, and dissemination of data and research information. This is a new field of research within computational nanotechnology and has been called "nanoinformatics." Nanoinformatics is an emerging area of computational nanotechnology with great potential for assessing the efficacy of nanomaterials. It draws on nanotechnology, bioinformatics, and computational chemistry, standing at the junction of classical bioinformatics and computational chemistry techniques to archive, optimize, analyze, and visualize nanotechnological data. It includes the formation of databases to coordinate


Applications of Computational Intelligence in Multi-Disciplinary Research

nanotechnology-related data. Such databases include modeled nanomaterial configurations and molecular information, such as biomolecular structures relating to the effect of nanomaterials on biological systems [22]. Nanoinformatics is needed for the comparative characterization and intelligent development of nanomaterials; the production and use of engineered nanodevices and nanosystems; the development of advanced manufacturing and instrumentation processes; and the safety and wellbeing of those involved in the field. Nanoscience and nanotechnology is a research area that relies heavily on experimentation, computation, simulation, and network communication. Moreover, it is an area where, despite the large amount of data being generated by research laboratories in academia and industry, there is a lack of reliable data that are standardized and can be effectively shared [23]. This deficiency can be addressed through the coordinated development of nanoinformatics tools and methods, which allow members of the community to verify data and use programs developed by others in nanotechnology research and development, always respecting confidentiality terms. These data can guide the design of new products and the integration of nanotechnology into large industrial-scale processes, in addition to the analysis of the environmental, health, and safety impacts of the nanomaterials produced. To address these issues of collection, handling, and sharing of information, new approaches to the development of, analysis of, and access to databases should be explored and established. The tools and experience in information science that have been actively used to manage data from other domains will be extremely useful in guiding the progress of nanoinformatics. However, it is important to note that the area of nanoscience and nanotechnology has its own peculiarities, the most important of which is its strongly multidisciplinary character.
Therefore the development of tools that serve actors with distinct vocabularies is one of the major challenges in developing nanoinformatics [24]. Nanoinformatics is necessary to develop and compare nanomaterials intelligently, to design and use optimized nanodevices and nanosystems, to develop advanced instruments and manufacturing processes, and to ensure environmental protection and the health of those involved. Nanoinformatics also fosters efficient scientific discovery through the techniques of data mining, machine learning, and optimization, and it involves the use of communication network tools for the efficient exchange and provision of important information. In the last two decades, the exploration of large-scale data began to combine data-generating experiments with computer science, using massive networks of computers, information science tools, and even social networking technologies. These projects preceded the beginning of the era of e-Science [25]. Popular examples of e-Science are the Human Genome Project [26] and the Sloan Digital Sky Survey [27]. These projects demonstrate how sophisticated computer systems and the coordination of domain experts can combine to solve large scientific challenges. The application of computing accelerates nanoscience and nanotechnology and enables the development of innovative nanostructures and nanodevices both in basic research and in industrial production. In the scientific and academic arena, nanoinformatics allows researchers to exploit the discoveries of others more efficiently in support of their own investigations, and to expand the impact of their research. The research lifecycle spans pre- and postpublication stages, encompassing the collection, preparation, analysis, and dissemination of findings. However, the information is scattered across different places rather than held in integrated platforms.
This fact is even more pronounced in multidisciplinary research fields such as nanoscience and nanotechnology. For example, experiments may be conducted with high-performance computational tools that can capture patterns and information from a data set. Using mapping, visualization, and advanced analysis tools, a researcher can discover important information that can point the search in new directions. These results, made possible by computer systems, allow complex systems that can hardly be understood by traditional science alone to be explored and exploited. Communication tools and networked collaboration interconnect all components of the experimental lifecycle. Industry can also benefit greatly from nanoinformatics. Data-guided activities in industry can accelerate product design; improve performance; and enhance confidence, manufacturing, logistics, quality control, safety, and the business model. Access to datasets more comprehensive than those available in the limited scientific publications can enable feasibility studies and design activities in industry (Fig. 8.4). Nanoinformatics tools can streamline workflows and shorten testing time and time to market. Moreover, these tools may support efforts to ensure sustainability and minimize risks to health and the environment.

FIGURE 8.4 Tools of nanoinformatics.



These changes envisioned for the research, production, and sustainability of nanoscience and nanotechnology will have a great impact but cannot be fully implemented immediately. The nanoinformatics tools existing today are still immature, and the cultural changes involving their primary users, that is, experimental scientists, need time to mature.

8.3.4 High-performance computing

The use of computational tools in the development of nanoscience and nanotechnology is only possible thanks to the large computational progress achieved in recent years. Many of the simulations, especially those involving molecular modeling, require high processing power. One approach is the use of field-programmable gate arrays (FPGAs), in which the desired calculation is programmed into dedicated hardware, significantly increasing performance. A more traditional alternative for expensive computations is the use of computer clusters: groups of computers that work together and can be viewed as a single powerful computer. The components of a cluster are commonly connected to each other by a fast network, and clusters are designed to deliver performance beyond that of individual computers. A grid of computers is an alternative similar to the cluster described above. The main difference is that a grid gathers computers that are not fully interconnected and may be geographically dispersed, so they are not seen as a single computer. Grids can therefore be formed from more heterogeneous computers than those used in clusters. In the last decade, the demand for high-performance graphics cards for games drove the development of new hardware architectures in graphics processing units (GPUs), such as those found in the Sony PlayStation 3 and the Nvidia GeForce 8800 GTX. Recently, [28] demonstrated the use of GPUs for the calculation of two-electron repulsion integrals, obtaining an acceleration of more than 130 times compared to the calculations made on a traditional AMD Opteron CPU. The Coulomb matrix for a DNA molecule was calculated in 19.8 seconds, against the 1600 seconds required by the AMD Opteron CPU.
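For the Coulomb-matrix timing quoted above, the implied end-to-end speedup works out to roughly 81 times (the more-than-130-times figure refers to the repulsion-integral kernels themselves):

```python
# Timings quoted in the text for the DNA Coulomb-matrix calculation
cpu_seconds = 1600.0   # traditional AMD Opteron CPU
gpu_seconds = 19.8     # GPU implementation
speedup = cpu_seconds / gpu_seconds   # roughly 81x
```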
In addition to the technologies already employed, the advancement of nanotechnology will make it possible to create new processor architectures much faster than those currently in use. In turn, this advancement will aid the development of nanostructure simulators, enabling great technological progress. Among the new architectures and computing paradigms enabled by nanotechnology, we can highlight carbon nanotube field-effect transistors [29] and QCA [30].

8.3.5 Computational intelligence

Computational intelligence is a branch of computer science that uses algorithms and techniques simulating certain cognitive abilities, such as recognition, learning, and development, to create intelligent programs. It offers various nature-inspired techniques, such as genetic algorithms (GAs), artificial neural networks (ANNs), and fuzzy systems, among others, employed in the development of intelligent systems.

Genetic algorithms

A GA is a search and optimization method inspired by the principles of Darwinian natural selection and genetic reproduction, which favor the fittest individuals, who live longer and are therefore more likely to reproduce. GAs employ a highly simplified interpretation of evolutionary mechanisms to develop solutions to individual problems. Every GA works on populations of artificial chromosomes: strings over a finite (generally binary) alphabet. Each chromosome represents a solution to a problem and has a fitness, a real number measuring how good a solution it is. Beginning from a randomly produced population of chromosomes, a GA carries out a cycle of fitness-based selection and recombination to create a successor population, the next generation. Parental chromosomes are chosen for recombination, and their genetic material is recombined to generate offspring chromosomes, which pass into the successor population. As this process is iterated, a sequence of successive generations evolves, and the average fitness of the chromosomes tends to rise until some stopping criterion is met. In this way, a GA "evolves" the best solution to a given problem [31]. Essentially, GAs are highly parallel search and optimization methods inspired by the genetic processes of biological organisms.
To do so, a GA proceeds as follows: each potential solution to a problem is encoded in a structure called a "chromosome," which consists of a string of bits or symbols. These chromosomes represent individuals that are evolved over several generations, much like living beings, according to the principles of natural selection and survival of the fittest, as described by Charles Darwin in "On the Origin of Species." By simulating these processes, GAs are able to "evolve" solutions to real-world problems. The evolution process starts with the creation of random individuals (solutions) that



will form the initial population. Through a selection process based on the fitness of each individual, individuals are chosen for the reproduction phase, which creates new solutions using a set of genetic operators (essentially crossover and mutation). These new solutions are then evaluated, and their fitness determines their probability of surviving into subsequent generations. The stopping condition of the algorithm can be determined in several ways: the number of generations; the number of individuals created; reaching a given evaluation value, that is, an optimum; the processing time; or the degree of similarity among individuals in the population (indicating convergence) (Fig. 8.5).

Artificial neural networks

An ANN is a computational system designed to model how the human brain analyzes and processes information. It is inspired by the structure and behavior of the human brain and has been used in a diversity of problems. It is a biologically based computational model, created from hundreds of individual units, artificial neurons, linked by coefficients (weights) that constitute the neural structure. Because they process data, the units are also classified as processing elements (PEs). Each PE has weighted inputs, a transfer function, and one output; a PE is in effect an algorithm that combines inputs into an output. The artificial neuron is the ANN's building element, modeled to mimic the activity of the biological neuron. The incoming signals, known as inputs, are first combined, multiplied by the connection weights, and then processed via a transfer function to generate the output of that neuron. The way neurons are connected to one another greatly affects the operation of the ANN. ANNs are computational models inspired by the nonlinear structure of the interconnected neurons in the human brain and are able to perform the following operations: learning, association, generalization, and abstraction.
Neural networks consist of many highly interconnected PEs (artificial neurons) that perform simple operations and transmit their results to neighboring processors. The capability of neural networks to perform nonlinear mappings between inputs and outputs has made them successful in pattern recognition and in the modeling of complex systems. Owing to their structure, neural networks are quite effective at learning patterns from nonlinear, incomplete, or noisy data. The literature contains many types of neural networks with different architectures and learning algorithms. An ANN consists of interconnected neurons that can be arranged in topologically distinct ways; the common layered arrangement is known as the multilayer perceptron (MLP). An ANN with only one hidden layer is able to approximate any continuous function to a specified precision. The basis of this statement is that any continuous function restricted to an interval can be approximated as a linear superposition of sines, which the hidden-layer neurons can map when the activation function is the log-sigmoid. To approximate more general nonlinear functions, two hidden layers may be needed. To achieve a good input/output relationship, a training algorithm adjusts the ANN's weights by minimizing the error between the network output and the target. Many such input/target samples, known as patterns, are needed to train a network. These data are categorized into three sets: training, validation, and test. The training set contains the patterns presented to the ANN in order to optimize the network's weights by minimizing the error. The validation set gives the model generalization ability, avoiding overfitting. Finally, the test set, a database never before presented to the ANN, is used to measure the effective quality of the data fit. The key benefit of an ANN over other interpolation methods is its ability to model systems with very strongly nonlinear behavior [31] (Fig. 8.6).
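As a minimal sketch of the training process just described, the NumPy code below fits a one-hidden-layer network with log-sigmoid hidden units and a linear output neuron to samples of sin(x) by gradient-descent error minimization. The hidden-layer size, learning rate, and iteration count are arbitrary illustrative choices, and the validation and test sets discussed above are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input/target patterns: approximate y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 40).reshape(-1, 1)
T = np.sin(X)

n_hidden = 8                                   # illustrative size
W1 = rng.normal(size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1)); b2 = np.zeros(1)

def forward(X):
    H = sigmoid(X @ W1 + b1)                   # log-sigmoid hidden layer
    return H, H @ W2 + b2                      # linear output neuron

lr = 0.01
_, Y = forward(X)
mse_start = np.mean((Y - T) ** 2)
for _ in range(5000):
    H, Y = forward(X)
    dY = 2.0 * (Y - T) / len(X)                # gradient of MSE at output
    dH = (dY @ W2.T) * H * (1.0 - H)           # backpropagate through sigmoid
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
_, Y = forward(X)
mse_end = np.mean((Y - T) ** 2)
```

After training, the mean squared error on the training patterns should be lower than it was at initialization, which is the error-minimization behavior described in the text.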
Fuzzy system

A fuzzy system is a reasoning method that mimics human cognition: it takes incomplete, ambiguous, distorted, or inaccurate (fuzzy) inputs and generates a precise output. Fuzzy system modeling is concerned with complicated, unclearly defined, and unpredictable systems for which traditional mathematical models may fail to give adequate results. Well-known fuzzy systems are built on fuzzy sets and fuzzy logic and are highly beneficial for systems where ambiguous ideas

FIGURE 8.5 Flowchart of genetic algorithms.



FIGURE 8.6 An artificial neural network.

FIGURE 8.7 Architecture of fuzzy logic.

are present. In most fuzzy system models, fuzzy rule bases are used to relate input and output parameters, together with membership functions for fuzzy linguistic terms. Within such models, the relationships between inputs and outputs are described by IF... THEN rules. Fuzzy rule bases use a set of fuzzy operators to fuzzify the inputs, aggregate them, and map the input membership values to the output domain, followed by the aggregation and defuzzification of the output variables' membership values. This involves identifying the fuzzy sets, the variables, the form and number of membership functions, parameters, etc. [32]. Combining these intelligent technologies with nanotechnology will reduce the time and expense needed to create effective tools. Intelligent computational nanotechnology promises to be a foundational computational approach for the creation of modern nanodesign tools, alongside the computational processes used in current engineering systems projects (e.g., cars, ships, planes, and microdevice-integrated circuits). Set theory and probability theory are the most common theories for tackling inaccuracy and uncertainty. These theories, although useful, are not always able to capture the wealth of information provided by humans. Humans are capable of handling very complex processes based on inaccurate or approximate information, and the strategies adopted by human operators are likewise imprecise and generally expressible in linguistic terms. The theory of fuzzy sets and the concepts of fuzzy logic can be used to translate mathematically the imprecise information expressed by a set of linguistic rules. The result is a system based on inference rules, in which fuzzy set theory and fuzzy logic provide the mathematical tools for dealing with such linguistic rules [33].
Fuzzy set theory was first introduced in 1965 by Zadeh, who noted the impossibility of modeling systems with poorly defined boundaries through the rigid and precise mathematical approaches of classical methods. Fuzzy set theory provides a mathematical framework for working with the vague and uncertain information provided by humans. The theory is increasingly used in systems that draw on human knowledge, and it has shown good results in different applications [34-36]. Fuzzy set theory, when used together with concepts of logic, results in so-called fuzzy inference systems; when used for arithmetic operations, fuzzy sets are known as fuzzy numbers. A fuzzy set is a bridge connecting an imprecise concept with its numerical modeling, assigning each element in the universe a value between 0 and 1 that represents the element's degree of membership in the fuzzy set (Fig. 8.7).
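A small sketch of how linguistic rules become numbers: triangular membership functions assign each input a degree between 0 and 1, two IF... THEN rules fire to those degrees, and a weighted average defuzzifies the result. The temperature/fan-speed rule base and all membership parameters here are invented purely for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1.0 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Two-rule fuzzy controller:
       IF temperature is warm THEN fan speed is low;
       IF temperature is hot  THEN fan speed is high.
    Defuzzified by a weighted average of the rule outputs."""
    mu_warm = tri(temp_c, 15.0, 25.0, 35.0)   # degree of "warm"
    mu_hot = tri(temp_c, 25.0, 40.0, 55.0)    # degree of "hot"
    low, high = 30.0, 90.0                    # crisp rule-output centers (%)
    fired = mu_warm + mu_hot
    if fired == 0.0:
        return 0.0                            # no rule fires: fan off
    return (mu_warm * low + mu_hot * high) / fired
```

For example, an input of 32.5 degrees is partly "warm" (0.25) and partly "hot" (0.5), and the defuzzified output is a fan speed of 70%, between the two crisp rule outputs.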


8.4 Applications of computational nanotechnology

Nanotechnology is truly a multidisciplinary field, spanning many core physical sciences and engineering areas. The role of computational nanotechnology, that is, the simulation and modeling of nanomaterials, devices, and applications



grounded in physics and chemistry, has been substantial in advancing those frontiers. Computational nanotechnology has played a significant part in explaining some recent experimental observations and in predicting structures (or properties) that were later fabricated (or measured) in experiments.

8.4.1 Nanotube-based sensors and actuators

Carbon nanotubes have different electronic properties depending on their chirality vector, ranging from metals to semiconductors (with bandgaps of about 1 eV). Recent theoretical and experimental works have shown that single-walled nanotubes (SWNTs) are extremely sensitive to gas molecules. In experiments, a semiconducting nanotube's conductivity changes as the nanotube is exposed to a minuscule amount of certain gas molecules. In ab initio simulations, gas molecules induce a charge transfer, which causes doping effects in semiconducting nanotubes. The gas molecules are adsorbed onto the nanotube's surface, and each molecule induces a small amount (about 0.1 e) of electron transfer, so that the nanotube becomes a p-type doped semiconductor. Experiments have indicated that a nanotube sensor at room temperature can detect ppm-level gas concentrations, which opens the possibility of developing a nanotube biosensor operating at physiological temperatures. An ab initio analysis of water adsorbed on an SWNT shows a purely repulsive interaction without any charge transfer; an SWNT can therefore be fully immersed in water and maintain its intrinsic electronic properties [37].
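As a back-of-the-envelope companion to the doping picture above, the sketch below estimates the total charge donated to a nanotube if each adsorbed molecule transfers about 0.1 e, the figure quoted in the text. The linear-accumulation assumption and the molecule count are illustrative simplifications, not a transport calculation.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, in coulombs

def transferred_charge(n_molecules, e_per_molecule=0.1):
    """Total charge (C) donated to the nanotube, assuming each adsorbed
    gas molecule transfers ~0.1 e, and that transfers add linearly."""
    return n_molecules * e_per_molecule * E_CHARGE

# One million adsorbed molecules donate about 1e5 elementary charges,
# shifting the carrier density and hence the measured conductance.
q = transferred_charge(1e6)
```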

8.4.2 Nanoinformatics for drugs

Nanoinformatics techniques have many benefits over experimental approaches for drug and future therapeutic development. There are several ethical issues in using animal models because of the immense pain, distress, and death caused by experiments and research activities on these animals. Animals are subjected to investigation or modified using techniques such as transgenic approaches to induce a diseased state, either to understand disease mechanisms or to assess the efficacy of therapeutic molecules against the disease. Computational methods offer a chance to precisely tailor nanomaterials with the desired characteristics. The nanoinformatics approach also provides the advantage of studying specific functions based on the interactions of nanomaterials with individual proteins and receptors. There are several computational tools for modeling the structure of nanomaterials. Nanoinformatics techniques draw on various simulation technologies and analytical methods, such as ab initio methods, to analyze not only the electronic properties and characteristics of nanomaterials but also their interactions with biomolecules and their influence on biological processes, including therapeutic applications. Statistical thermodynamics methods can be used to model relatively large molecular assemblies such as nanomaterial clusters, enzymes, and nanomaterial-protein complexes. The use of these theoretical models will further improve our understanding of the particular processes through which nanomaterials exhibit various behaviors in biological systems.

8.4.3 Molecular docking

Molecular docking is a vital approach in structural molecular biology and computer-assisted drug design. The purpose of ligand-protein docking is to determine the ligand's prevailing binding mode(s) with a protein of known 3D structure. Molecular docking has a number of uses in drug development, such as structure-activity research, lead optimization, identifying potential leads by virtual screening, offering binding hypotheses to support predictions for mutagenesis research, assisting x-ray crystallography in fitting substrates and inhibitors to electron density, chemical mechanism studies, and combinatorial library design. Molecular docking research is broadly applicable; the benefit of computer models is that they help to eliminate obviously inappropriate compounds and to build fast, energy-based methodologies for nanoformulation structures based on ionic interactions [38].
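Underlying pose ranking in docking is an energy-like scoring function evaluated over ligand-protein atom pairs. The sketch below scores a single pose with a simple Lennard-Jones plus Coulomb sum; the parameters, charges, and coordinates are placeholders for illustration, not any production docking score.

```python
import numpy as np

def pose_score(lig_xyz, prot_xyz, lig_q, prot_q,
               epsilon=0.2, sigma=3.4, coulomb_k=332.06):
    """Score one ligand pose: sum of 12-6 Lennard-Jones and Coulomb
    terms over all ligand-protein atom pairs. Lower scores indicate
    more favorable (better ranked) poses."""
    score = 0.0
    for i, ri in enumerate(lig_xyz):
        for j, rj in enumerate(prot_xyz):
            r = np.linalg.norm(ri - rj)
            lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
            coul = coulomb_k * lig_q[i] * prot_q[j] / r
            score += lj + coul
    return score

# A one-atom "ligand" near two oppositely charged "protein" atoms
lig = np.array([[0.0, 0.0, 0.0]])
prot = np.array([[3.8, 0.0, 0.0], [0.0, 4.1, 0.0]])
score = pose_score(lig, prot, lig_q=[0.2], prot_q=[-0.3, -0.1])
```

A pose that places atoms in steric clash (inside the Lennard-Jones repulsive wall) scores much worse than a well-separated, charge-complementary pose, which is how such functions discard obviously inappropriate configurations.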

8.4.4 Nanotoxicology

Nanotoxicology analyzes nanomaterial toxicity and has been widely used in biomedical research to investigate toxicity in different biological systems. Studying biological systems using in vivo and in vitro approaches is costly and time consuming, so computational toxicology, a multidisciplinary approach, uses computing resources and algorithms to analyze the toxicology of biological systems. Computational toxicology builds models and algorithms that use mathematics and computers to simulate adverse effects and to provide a deeper understanding of how a particular chemical triggers damage. It aims to provide tools for investigating chemical toxicity levels. Based on chemical configurations and experimental observations, a number of in silico models can be proposed to estimate the absorption and metabolism of chemicals in the human body, approximate in vitro and in vivo responses, and estimate human risk [39].



MD simulations of biomolecules such as proteins and DNA are common in computational toxicology for analyzing the relationships between biological systems and chemicals. MD simulations are used to analyze the physicochemical properties of nanomaterials, including the melting of aluminum nanoparticles [40]; to analyze the nucleation and melting behaviors of different metallic nanoparticles; to determine the structural and physicochemical properties of metal oxide and semiconductor nanoparticles; and to analyze the interactions between nanoparticles and the biological materials around them. Moreover, MD simulations can be used as a method complementary to experimental techniques such as nuclear magnetic resonance (NMR) spectroscopy and x-ray crystallography in structural biology, since MD simulations provide atomic-level detail consistent with experimental values as well as kinetic and thermodynamic parameters of interest to biomedical systems [41]. Along with computational toxicology and nanotoxicology, MD simulations are also applied quite promisingly in drug design, protein analysis, and structural biology. This overview has summarized MD simulations in computational toxicology and nanotoxicology, procedures for running MD simulations, widely used software packages, and applications of MD simulations.

8.4.5 Other applications

The main simulation methods discussed in this chapter can be summarized as follows.

Molecular mechanics: Atomic forces are evaluated through a prescribed interatomic potential (called a force field in chemistry). In principle, the potential has several variables that are derived by fitting data from observation or experiment, such as bond lengths, atomic charges and radii, dihedrals, and angles. Molecular mechanics (MM) cannot form or break bonds and is therefore inappropriate for studying chemical reactions.

Ab initio: At the quantum level, a system's total energy surface is determined. Using the time-independent Schrödinger equation, the coordinates of the nuclei and the related system electrons are revised. Since the computational work is immense, this approach is only suitable for small systems. Conventional quantum mechanics (QM) can analyze chemical reactions; however, its computational expense is too high to scale up to large biomolecules. Hybrid QM/MM techniques combine the ability of QM to analyze reactions with the speed of MM and have become popular.


This chapter has provided an analysis of computational intelligence in the development of nanotechnology and its applications. Such support, and the development of intelligent algorithms and tools, is here called "computational nanotechnology." Several works have been described in order to examine the benefits of computational intelligence in the characterization and design of nanostructures, nanodevices, and nanosystems.

References

[1] D. Ranjan, A.K. Tripathi, Computational nanotechnology: an assessment, Digest Journal of Nanomaterials & Biostructures (DJNB) 4 (1) (2009).
[2] D. Srivastava, S.N. Atluri, Computational nanotechnology: a current perspective, Computer Modeling in Engineering and Sciences 3 (5) (2002) 531–538.
[3] L.S. Chaudhary, P.R. Ghatmale, S.S. Chavan, Review on application of nanotechnology in computer science, International Journal of Science and Research (IJSR) 5 (2) (2013) 1542–1545.
[4] A. Fabara, S. Cuesta, F. Pilaquinga, L. Meneses, Computational modeling of the interaction of silver nanoparticles with the lipid layer of the skin, Journal of Nanotechnology 2018 (2018).
[5] A. Gajewicz, B. Rasulev, T.C. Dinadayalane, P. Urbaszek, T. Puzyn, D. Leszczynska, et al., Advancing risk assessment of engineered nanomaterials: application of computational approaches, Advanced Drug Delivery Reviews 64 (15) (2012) 1663–1693.
[6] J.A. Simpson, E.S. Weiner, The Oxford English Dictionary, Clarendon Press, 1989.
[7] J. Foresman, E. Frish, Exploring Chemistry, Gaussian Inc., Pittsburgh, 1996.
[8] K. Kadau, T.C. Germann, P.S. Lomdahl, Molecular dynamics comes of age: 320 billion atom simulation on BlueGene/L, International Journal of Modern Physics C 17 (12) (2006) 1755–1761.
[9] A. Arkhipov, P.L. Freddolino, K. Schulten, Stability and dynamics of virus capsids described by coarse-grained modeling, Structure 14 (12) (2006) 1767–1777.
[10] A.R. Leach, Molecular Modelling: Principles and Applications, Pearson Education, 2001.


Applications of Computational Intelligence in Multi-Disciplinary Research

[11] P. Hohenberg, W. Kohn, Physical Review 136 (1964) B864.
[12] J.M. Goodman, R.S. Paton, Enantioselectivity in the boron aldol reactions of methyl ketones, Chemical Communications 21 (2007) 2124–2126.
[13] A. Gusso, D. Ma, I.A. Hümmelgen, M.G.E. Da Luz, Modeling of organic light-emitting diodes with graded concentration in the emissive multilayer, Journal of Applied Physics 95 (4) (2004) 2056–2062.
[14] K. Walus, T.J. Dysart, G.A. Jullien, R.A. Budiman, QCADesigner: a rapid design and simulation tool for quantum-dot cellular automata, IEEE Transactions on Nanotechnology 3 (1) (2004) 26–31.
[15] A. Elsayed, S. Wageh, A.A. El-Azm, Theoretical study of hybrid light emitting device based on core–shell quantum dots as an emission layer, Quantum Matter 2 (2) (2013) 109–115.
[16] M. Narayanan, A.J. Peter, Pressure and temperature induced non-linear optical properties in a narrow band gap quantum dot, Quantum Matter 1 (1) (2012) 53–58.
[17] S.G. Johnson, J.D. Joannopoulos, The MIT Photonic-Bands Package, 2013.
[18] T. Ono, Y. Fujimoto, S. Tsukamoto, First-principles calculation methods for obtaining scattering waves to investigate transport properties of nanostructures, Quantum Matter 1 (1) (2012) 4–19.
[19] S. Goasguen, K. Madhavan, M. McLennan, M. Lundstrom, G. Klimeck, Workshop on Science Gateways: Common Community Interfaces to Grid Resources, GGF, Chicago, 2005, p. 14.
[20] NextNANO, 2011.
[21] J.L. Gray, Photovoltaic Specialists Conference, Conference Record of the Twenty-Second IEEE, Las Vegas, 1991, p. 436.
[22] R.N. Krishnaraj, D. Samanta, R.K. Sani, Computational nanotechnology: a tool for screening therapeutic nanomaterials against Alzheimer's disease, Computational Modeling of Drugs Against Alzheimer's Disease, Humana Press, New York, NY, 2018, pp. 613–635.
[23] I. Linkov, F.K. Satterstrom, J.C. Monica Jr, S. Foss, Nano risk governance: current developments and future perspectives, Nanotechnology Law & Business 6 (2009) 203.
[24] D. de la Iglesia, S. Harper, M.D. Hoover, F. Klaessig, P. Lippell, B. Maddux, et al., Nanoinformatics 2020 roadmap, 2011.
[25] A.J.G. Hey, S. Tansley, K.M. Tolle, in: A.J. Hey (Ed.), The Fourth Paradigm: Data-Intensive Scientific Discovery, 1, Microsoft Research, Redmond, WA, 2009.
[26] , .
[27] , .
[28] I.S. Ufimtsev, T.J. Martinez, Quantum chemistry on graphical processing units. 1. Strategies for two-electron integral evaluation, Journal of Chemical Theory and Computation 4 (2) (2008) 222–231.
[29] B.C. Paul, S. Fujita, M. Okajima, T. Lee, Modeling and analysis of circuit performance of ballistic CNFET, in: Proceedings of the 43rd Annual Design Automation Conference, July 2006, pp. 717–722.
[30] C.S. Lent, P.D. Tougaw, A device architecture for computing with quantum dots, Proceedings of the IEEE 85 (4) (1997) 541–557.
[31] J. McCall, Genetic algorithms for modelling and optimisation, Journal of Computational and Applied Mathematics 184 (1) (2005) 205–222.
[32] S. Agatonovic-Kustrin, R. Beresford, Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research, Journal of Pharmaceutical and Biomedical Analysis 22 (5) (2000) 717–727.
[33] A. Celikyilmaz, I.B. Turksen, Introduction, in: Studies in Fuzziness and Soft Computing, Springer, 2009, pp. 1–10.
[34] W. Pedrycz, F. Gomide, An Introduction to Fuzzy Sets: Analysis and Design, MIT Press, 1998.
[35] L.A. Zadeh, G.J. Klir, B. Yuan, Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers, 6, World Scientific, 1996.
[36] H.J. Zimmermann, Fuzzy Set Theory—and Its Applications, Springer Science & Business Media, 2011.
[37] G. Bojadziev, M. Bojadziev, Fuzzy Logic for Business, Finance, and Management, 23, World Scientific, 2007.
[38] D. Srivastava, M. Menon, K. Cho, Computational nanotechnology with carbon nanotubes and fullerenes, Computing in Science & Engineering 3 (4) (2001) 42–55.
[39] P. Yadav, A. Bandyopadhyay, A. Chakraborty, K. Sarkar, Enhancement of anticancer activity and drug delivery of chitosan–curcumin nanoparticle via molecular docking and simulation analysis, Carbohydrate Polymers 182 (2018) 188–198.
[40] C. Selvaraj, S. Sakkiah, W. Tong, H. Hong, Molecular dynamics simulations and applications in computational toxicology and nanotoxicology, Food and Chemical Toxicology 112 (2018) 495–506.
[41] S. Alavi, D.L. Thompson, Molecular dynamics simulations of the melting of aluminum nanoparticles, The Journal of Physical Chemistry A 110 (4) (2006) 1518–1523.
[42] J.D. Durrant, J.A. McCammon, Molecular dynamics simulations and drug discovery, BMC Biology 9 (1) (2011) 1–9.
[43] B.J. Alder, T.E. Wainwright, Studies in molecular dynamics. I. General method, The Journal of Chemical Physics 31 (2) (1959) 459–466.
[44] M.P. Allen, D. Frenkel, J. Talbot, Molecular dynamics simulation using hard particles, Computer Physics Reports 9 (6) (1989) 301–353.
[45] D. Vlachakis, E. Bencurova, N. Papangelopoulos, S. Kossida, Current state-of-the-art molecular dynamics methods and applications, Advances in Protein Chemistry and Structural Biology 94 (2014) 269–313.
[46] G. Kresse, J. Hafner, Ab initio molecular dynamics for liquid metals, Physical Review B 47 (1) (1993) 558.
[47] D. Marx, J. Hutter, Ab initio molecular dynamics: theory and implementation, Modern Methods and Algorithms of Quantum Chemistry 1 (2000) 301–449.
[48] J.S. Tse, Ab initio molecular dynamics with density functional theory, Annual Review of Physical Chemistry 53 (1) (2002) 249–290.

Nanotechnology and applications Chapter | 8


[49] A. Warshel, M. Levitt, Theoretical studies of enzymic reactions: dielectric, electrostatic and steric stabilization of the carbonium ion in the reaction of lysozyme, Journal of Molecular Biology 103 (2) (1976) 227–249.
[50] R.A. Friesner, V. Guallar, Ab initio quantum chemical and mixed quantum mechanics/molecular mechanics (QM/MM) methods for studying enzymatic catalysis, Annual Review of Physical Chemistry 56 (2005) 389–427.
[51] R. Nussinov, The significance of the 2013 Nobel Prize in Chemistry and the challenges ahead, PLoS Computational Biology 10 (1) (2014) e1003423.


Chapter 9

Advances of nanotechnology in plant development and crop protection

Rokeya Akter1, Md. Habibur Rahman2, Md. Arifur Rahman Chowdhury1,3, Manirujjaman Manirujjaman4 and Shimaa E. Elshenawy5

1Department of Pharmacy, Jagannath University, Dhaka, Bangladesh; 2Department of Pharmacy, Southeast University, Dhaka, Bangladesh; 3Department of Bioactive Materials Science, Jeonbuk National University, Jeonju, South Korea; 4Institute of Health and Biomedical Innovation (IHBI), School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia; 5Center of Stem Cell and Regenerative Medicine, Zewail City for Science, Zewail, Egypt



9.1 Introduction

Over the last few years, significant technological advancements and developments in agriculture have been made to tackle the growing demands of sustainable development and food security [1,2]. These ongoing farm developments are vital to satisfying the world population's increasing demand for food through the utilization of natural or synthetic capital. Nanotechnology, in particular, can offer useful solutions to the different problems relevant to agriculture. Nanoparticles (NPs) are of considerable scientific importance because they bridge the gap between bulk materials and atomic or molecular structures. A significant number of studies have been conducted on nanotechnology over the last two decades, and its multiple uses in the agricultural sector have been emphasized [3–6]. The usage of fertilizers plays a vital role in the rise in farm output; however, the improper use of fertilizers changes the chemical environment of soils irreversibly and thus limits the area open to crop development. Sustainable cultivation means limiting the usage of agrochemicals so as to preserve the ecosystem and avoid the extinction of different organisms. In particular, nanomaterials increase crop profitability by increasing the efficacy of agricultural inputs to promote a regulated, site-specific supply of nutrients, ensuring minimal use of agri-inputs. The aid provided by nanotechnology to plant defense products has grown exponentially, thereby ensuring an improvement in crop production. Therefore the key issue for agricultural development is to allow rapid adaptation of plants to progressive climatic change factors, including severe temperatures, water scarcity, salinity, and alkalinity, without disrupting already-fragile habitats with the use of toxic metals [5].
Moreover, to quantify and track cultivation production, soil conditions, diseases, and the usage and penetration of agrochemicals and environmental pollutants, developments in nanosensing have significantly enhanced the human regulation of soil and plant safety, quality control, and protection monitoring, all of which contribute greatly to sustainable farming and environmental systems [4]. Nanomaterials engineering is the leading track of work promoting high-tech farming by providing a wider specific surface area, which is key to the sustainable growth of agriculture systems [7,8]. Nanotechnology will not only minimize complexity but can also coordinate agricultural growth management approaches as an alternative to traditional technologies. In certain situations, the challenges encountered in new, commercial agriculture can be solved in a techno-fix manner within a short term. This study describes nanotechnology applications in agriculture that could ensure the sustainability of agriculture and the ecosystem.


9.2 Agriculture's nanofarming: a modern frontier

Engineered NPs are one of the new technological advances that display special, high-strength characteristics. Norio Taniguchi, a researcher at the Tokyo University of Science, first coined the word "nanotechnology" in 1974. The idea that NPs may be of interest in agricultural production was adopted long ago in multiple disciplines, but the technology represents a recent breakthrough and is experiencing progressive growth. Recent advances in the manufacturing of nanomaterials in




different sizes and shapes have culminated in a wide variety of applications in medicine, environmental research, cultivation, and food processing. Agriculture has also been the source of such developments throughout history [4]. In light of the many challenges to crop yields raised by biotic or abiotic pressures, including nutrient deficiency and ecosystem pollution, agriculture has also produced exciting applications for precision agriculture; the applications of nanotechnology in agriculture are summarized in Fig. 9.1. In recent years, the word "precision" has been established for the production, monitoring, and regulation of agricultural practices through wireless networking and the miniaturization of sensors. In particular, on-site crop management is connected with a broad variety of agricultural facets, from pre- and postproduction crops to field crops [9]. The delivery of clustered regularly interspaced short palindromic repeats (CRISPR)/Cas (CRISPR-associated protein) mRNA and sgRNA for genetic engineering in crop species is a prominent scientific achievement. Recent developments in tissue- and engineered-nanomaterial-dependent targeted delivery are described in Refs. [10–12]. Nanotechnology provides outstanding approaches to the increasing range of environmental challenges. The production of nanosensors, for example, offers comprehensive opportunities for tracking environmental stress and increasing plants' capacity to fight disease [13,14]. There are therefore considerable opportunities for large social and equal benefits from these continuous advances in nanotechnology, with a specific emphasis on problem detection and collective solutions for sustainable agricultural development.


9.3 Synthesis of green nanoparticles and their sources

NPs are biological, inorganic, or synthetic materials with at least one dimension in the nanoscale range (1–100 nm). Photochemical reactions, volcanic eruptions, forest fires, deforestation, plants and animals, or even microorganisms

FIGURE 9.1 Applications of nanotechnology in agriculture.



that live in the natural environment may create NPs [15,16]. Plant- and microorganism-derived NP products have emerged as an effective biological source of green NPs and, because of their environmentally friendly nature and their simplicity of manufacture in comparison to other routes, have recently drawn particular attention from scientists [6,17–19]. A variety of plant species and microorganisms, including bacteria, algae, and fungi, are currently being used for NP synthesis in green nanotechnology. Gold NPs, for example, are formulated using Medicago sativa and plant species of Sesbania. Inorganic nanomaterials consisting of silver, nickel, cobalt, zinc, and copper can also be synthesized within live plants such as Brassica juncea, M. sativa, and Helianthus annuus [20–22]. Microorganisms such as Clostridium sp., Klebsiella aerogenes, diatoms, Pseudomonas stutzeri, and Desulfovibrio desulfuricans NCIMB 8307 are used to synthesize NPs of carbon, iron, zinc sulfide, and cadmium sulfide. Although several microorganisms are used in the synthesis of green NPs, fungi, particularly Verticillium sp., Aspergillus flavus, Aspergillus fumigatus, Phanerochaete chrysosporium, and Fusarium oxysporum, are considered to be the most effective systems for the biosynthesis of metal and metal sulfide NPs [22]. NPs are classified by dimensionality: zero-dimensional NPs have all three dimensions at the nanoscale; one-dimensional NPs have two nanoscale and one macroscale dimension (nanowires, nanorods); two-dimensional NPs have one nanoscale and two macroscale dimensions (nanofilms, nanosheets); and three-dimensional NPs have no dimension confined to the nanoscale (nanoballs, nanoflowers, and other assemblies of nanoscale building blocks) [23,24]. Numerous physical and chemical methods have been developed, for example, to synthesize zero-dimensional NPs with well-defined dimensions [25].
Zero-dimensional NPs, including quantum dots, are widely applicable in light-emitting diodes, solar cells, single-electron transistors, and lasers [26–28]. A main field of nanoengineering work has been the synthesis of two-dimensional NPs, including junctions (continuous islands), nanoprisms, nanoplates, nanosheets, and nanodisks [29]. Research on new applications in sensors, photocatalysts, nanocontainers, and nanoreactors has greatly expanded interest in these geometric NP structures [30]. Compared with 3D NPs, their large surface area and other superior properties, such as adsorption sites for molecules confined in a limited space, which contribute to better transport of the molecules, have recently attracted considerable research attention [31–34]. The advancement and growth of modern NP processing technologies and their future implementations therefore take on great significance, in particular in the creation of sustainable agricultural and environmental systems [35,36].
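The 0D–3D convention described above can be restated as a short sketch (the 100-nm threshold follows the 1–100 nm nanoscale range cited earlier; the function and its names are illustrative assumptions, not from the chapter):

```python
NANOSCALE_NM = 100  # upper bound of the 1-100 nm nanoscale range

def classify_np(dims_nm):
    """Classify a structure by how many of its three spatial dimensions
    are confined to the nanoscale, following the convention in the text:
    3 nanoscale dimensions -> 0D (quantum dots), 2 -> 1D (nanowires),
    1 -> 2D (nanofilms, nanosheets), 0 -> 3D (nanoballs, nanoflowers,
    and other bulk-like assemblies of nanoscale building blocks)."""
    if len(dims_nm) != 3:
        raise ValueError("expected three dimensions (nm)")
    n_nano = sum(1 for d in dims_nm if d <= NANOSCALE_NM)
    return {3: "0D", 2: "1D", 1: "2D", 0: "3D"}[n_nano]

print(classify_np([5, 5, 5]))          # quantum dot -> 0D
print(classify_np([10, 10, 5000]))     # nanowire    -> 1D
print(classify_np([20, 5000, 5000]))   # thin film   -> 2D
```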

9.4 Good distribution possibilities allowed by nanoparticles: a modern sustainable agriculture portal

Nanotechnology is recognized as an essential technology of the 21st century, promising to improve conventional farm practices and deliver sustainable development through better management and conservation tactics with reduced input waste [37]. Important aspects of productive agriculture and the precision delivery of agrochemicals and organic molecules, like the movement of DNA molecules or oligonucleotides into plant cells, are given in Ref. [38]. In traditional processes, agrochemicals are usually sprayed and/or diffused on crops. As a consequence, very few agrochemicals hit the target sites of plants, which falls far below the lowest effective amount needed for productive plant production. The loss is due to the exposure of the released chemicals to photolytic degradation, hydrolysis, and microbial degradation [39,40]. For instance, the bioavailability of nutrients, limited by soil chelation, microorganism depletion, evaporation, overapplication, hydrolysis, and runoff problems, should be given greater importance throughout the application of fertilizers [41]. In the case of pesticide application, improved control of spray drift is essential [7]. In order to maintain organic farming activities, careful attention has thus been paid to recent developments in nanotechnology-based formulations for slow or regulated release of fertilizers, pesticides, and herbicides. Nanotechnology has increasingly moved from laboratory-based scientific research to practical implementation. Controlled-delivery technologies are meant to release calculated amounts of the required agrochemicals over time and to achieve full bioavailability, minimizing losses and damage [42]. Because of their small size, easy attachment, and fast mass transfer, NPs offer the advantages of effective agrochemical delivery [43].
For these purposes, active ingredients are loaded into nano- or submicron particles of agrochemical materials by several processes, such as encapsulation, adsorption, surface ionic or weak bond attachment, or entrapment in the nano matrix [44,45]. For example, encapsulation of potassium nitrate with graphene oxide films greatly prolongs the fertilizer release cycle, and this formulation appears to be feasible for large-scale production at relatively low prices [46]. Nanomaterials improve agrochemical durability, defend against deterioration and premature release into the atmosphere, and ultimately maximize productivity while reducing the amounts of agrochemicals applied. In addition to agricultural uses, the integration of nanotechnology with biotechnology provides modern molecular transporter methods for altering genes and also for creating new species (Fig. 9.2) [47]. For instance, NPs, nanocapsules,



FIGURE 9.2 An overview of nanobiotechnology.

and nanofibers are used in nanobiotechnology to carry foreign DNA and chemicals that make it possible to alter the target genes. Viral gene delivery vectors face many challenges during genetic material transmission, such as restricted host selection, the small size of the injected genetic material, transport through cell membranes, and nucleus trafficking [21]. The new developments in nanobiotechnology, though, give researchers further chances to substitute one organism's genetic material with another [48]. In genetic manipulation, DNA fragments/sequences have been delivered into target organisms, for example, tobacco and maize plants, using silicon dioxide NPs without adverse side effects [49,50]. Insect-resistant new crop varieties may be produced utilizing an NP-assisted delivery method. For example, in gene-gun technology, DNA-coated NPs are used to bombard cells or tissues in the form of bullets to transfer the required genes to the target plants [51,52]. Recent advances in small interfering RNA (siRNA)-loaded NPs have contributed to a new approach that improves insect control, given that chitosan has essential RNA-binding capabilities and the potential to penetrate cell membranes [53]. CRISPR/Cas9 single guide RNA (sgRNA) has started a new age of genetic manipulation with contemporary nano-specific advances. The CRISPR/Cas9 system, a framework composed of the CRISPR spacer and the Cas proteins, is a prokaryotic RNA-directed defense mechanism and has been used successfully in plant genome editing [54]. Low delivery efficiency, however, remains a major obstacle to its application. Furthermore, the off-target effects of CRISPR/Cas structures may be decreased to a minimum by nanomaterials.
For example, the delivery of Cas9En (E-tag)-ribonucleoprotein (RNP)-sgRNA complexes by cationic arginine gold nanoparticles (ArgNPs) provides approximately 30% cytoplasmic/nuclear delivery and gene-editing efficiency in cultured cell lines, thereby greatly facilitating future research into crop production [55].


9.5 Nanofertilizers: a good food supply for crops

Generally speaking, the introduction of vital nutrients is necessary to increase crop production and soil fertility (element fertilization) [56]. Nevertheless, the exact management of fertilizer is considered one of the key preconditions for sustainable farm production [57,58]. Nutrition is, indeed, a fundamental human right. The worldwide problem of global food security is severe. Partly because of the limitation of accessible natural capital, food security is



endangered. The present world population (7 billion) is estimated to rise to nearly 9 billion by 2050. About 60%–100% more food will be required to feed the growing population [59]. Intensive cultivation is being conducted to satisfy rising food demand, gradually contributing to a vicious cycle of land productivity degradation and diminishing agricultural returns. Approximately 40% of the world's farmland has been badly depleted, and soil productivity has been significantly diminished by these intense farming activities [61]. A large amount of fertilizer is thus used to improve soil fertility and crop productivity [62,63]. It is also clearly observed that fertilizer is responsible for one-third of agricultural production, with the remainder relying on other agricultural inputs. However, the nutrient efficiency of traditional fertilizers may not exceed 30%–40% [64]. For example, the efficiency of nutrients used in traditional fertilizers, such as nitrogen (N) with 30%–35% efficiency, phosphorus (P) with 18%–20% efficiency, and potassium (K) with 35%–40% efficiency, has remained constant for many decades [65]. Furthermore, the nutrient-use effectiveness of conventional fertilizers applied directly to the soil or sprayed on the foliage depends in large part on the final concentration of fertilizer reaching the targeted areas [66]. Owing to scattering, drainage, hydrolysis, evaporation, and even microbial degradation, only a very small amount enters the desired region, and thus in reality a concentration far below the minimum desired concentration is taken up [67]. The improper use of chemical fertilizers also adversely influences the soil's normal mineral composition. As a consequence, water supplies are becoming significantly polluted by the leakage of toxic substances into rivers and wetlands [66].
It has been recorded that at the beginning of the 1970s, just 27 kg NPK/ha was needed to generate 1 ton of grain, whereas in 2008 the requirement for the same level of production rose to 109 kg NPK/ha. World fertilizer use has increased dramatically, and global consumption was expected to hit 192.8 Mt by 2016–17 according to the International Fertilizer Industry Association (IFIA) [68]. When massive quantities of synthetic fertilizers are applied, a big part of the chemicals lingers in the soil or invades other environmental compartments, creating significant contamination that can impact the normal growth of flora and fauna. In sustainable agriculture, the usage of modified nanomaterials has shown a radically different path for food production, addressing the complexity of the crop field with minimal available capital [69]. As nanofertilizers raised expectations of meeting the forecasts of global food production and sustainable agriculture, the development of green nanotechnology shifted the world's agricultural focus radically. Nanofertilizers may be the best option to minimize macro- and micronutrient deficiency through improved nutrient-use productivity and by resolving the chronic eutrophication issue [70]. Nanofertilizers synthesized specifically to monitor the release of nutrients according to crop requirements have tremendous potential and, at the same time, reduced losses. Conventional nitrogen fertilizers, for example, are seen to suffer major soil losses of up to 50%–70% attributable to leaching, evaporation, or even depletion, thereby lowering fertilizer efficiency and increasing production costs [71,72]. Nanoformulations of nitrogenous fertilizers, on the other hand, synchronize the release of fertilizer-N with crop demand. Nanoformulation thus prevents undesirable nutrient losses caused by direct contact with land, water, air, and microorganisms, avoiding premature contact between the nutrients and their surroundings [73].
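The efficiency figures quoted above translate into simple arithmetic. The following sketch (illustrative only; it uses midpoints of the quoted ranges) estimates how much of an applied nutrient actually reaches the crop, and the efficiency decline implied by the 27 to 109 kg NPK/ha figures:

```python
# Nutrient-use efficiencies of conventional fertilizers quoted in the text:
# N 30%-35%, P 18%-20%, K 35%-40% (midpoints used here for illustration).
EFFICIENCY = {"N": 0.325, "P": 0.19, "K": 0.375}

def uptake_kg_per_ha(applied):
    """Estimate the nutrient mass actually taken up by the crop."""
    return {nut: round(kg * EFFICIENCY[nut], 1) for nut, kg in applied.items()}

# Of 100 kg/ha of each nutrient applied, only a fraction reaches the crop.
print(uptake_kg_per_ha({"N": 100.0, "P": 100.0, "K": 100.0}))
# {'N': 32.5, 'P': 19.0, 'K': 37.5}

# NPK demand per ton of grain rose from 27 kg/ha (early 1970s) to
# 109 kg/ha (2008): roughly a fourfold drop in apparent efficiency.
decline = 109 / 27
print(round(decline, 2))  # 4.04
```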
For example, the use of porous nanomaterials such as zeolite, clay, and chitosan decreases nitrogen loss dramatically by making release demand-driven and improving plant uptake processes [74,75]. Ammonium-charged zeolite can enhance the solubility of phosphate and thus increase the supply and use of phosphorus by crops [73]. Graphene oxide films, a carbon-based nanomaterial, can prolong the release cycle of potassium nitrate, reducing losses due to leaching and increasing the duration of activity [76]. Sabir et al. [67] also made an excellent effort to reveal the potential of nanomaterials in crop production over traditional fertilizers. They demonstrated that the application of nanocalcite (CaCO3, 40%) together with nano-SiO2 (4%), MgO (1%), and Fe2O3 (1%) increases the absorption of Ca, Mg, and Fe, while also greatly increasing the uptake of P and of the micronutrients Zn and Mn. Nanofertilizers have several common types. Nanofertilizers may be categorized by their action into controlled-release or slow-release fertilizers, controlled-loss fertilizers, magnetic fertilizers, and nanocomposite fertilizers that combine a broad variety of macro- and micronutrients in suitable proportions [77]. Generally, nanofertilizers are created by encapsulating nutrients in nanomaterials. Nanomaterials are generated by physical (top-down) and chemical (bottom-up) approaches, and the nutrients are afterwards encapsulated in nanoporous materials, coated, or distributed as nano-sized particles or emulsions, for cationic nutrients (NH4+, K+, Ca2+, Mg2+) directly or, after surface modification, for anionic nutrients (NO3−, PO43−) [78]. Through effective fertilizer use, irrigation, and the use of better crops, farm productivity can be improved by 35%–40%. Nanoformulated fertilizers have been strongly observed to improve crop production with considerable potential.
For example, in combination with fertilizers, the usage of NPs can raise the crop yields of rice (10.29%), spring maize (10.93%), soybean (16.74%), winter wheat (28.81%), and vegetables (12.34%–19.76%) [79]. Abdel-Aziz et al. [74] have shown that the application of chitosan–NPK fertilizer greatly raises the harvest index, grain index, and mobilization index of defined wheat yield variables as contrasted with the control. Nanoscale pores are the key nutrient portal for plants, and nanomaterials stimulate a variety of essential aspects



of plant biology [22,72]. The usage of nanofertilizers thus may increase the plant's nutrient absorption through these pores or by generating new pores, by utilizing endocytosis through ion channels, and by promoting complexation with molecular transporters or root exudates [80]. It has also been established by a large number of studies that, owing to the nanoscale of the formulations, which raises the surface-to-mass ratio of the particles, nutrient ions can be adsorbed and then consumed gradually and continuously over a prolonged period [81,82]. Thus fertilizer nanoformulation maintains healthy crop nutrition during the growth process, which consequently increases agricultural production. It is worth noting that the improved quality of the commodity could enhance the productivity of farmers. The goal is to promote human health through scientific advances. Furthermore, plant sciences intend to reestablish the natural genomic diversity of various plants and to develop technologies for reducing the use of fertilizer without sacrificing crop production and environmental conditions [83]. Along this line, a new term has been coined for productive cultivation: "controlled-loss fertilizer." These fertilizers have been engineered to minimize nonpoint emissions of agricultural inputs by self-assembling into a nanonetwork on contact with soil water. The trapped fertilizer nutrients are held in the network by hydrogen bonds, surface tension, molecular forces, or viscous forces. Their spatial extent is thereby increased, so that filtration through the soil is conveniently blocked and the fertilizer is fixed around the crop roots, enabling the plants' nutrient absorption to meet demand during the growth process. With this method, for example, nitrogen losses to the atmosphere have been reduced effectively [84]. Liu et al.
[85] have also found that the usage of controlled-loss fertilizers not only decreases nitrogen runoff and leaching losses by 21.6% and 24.5%, respectively, but also raises soil residual mineral nitrogen by 9.8% and raises output over conventional fertilizers by 5.5%. While many research papers on this subject have been written, there is still inadequate knowledge and analysis of the broader scope. Therefore work on modern and innovative methods should be pursued that can monitor the movement of both macro- and micronutrients and of contaminants within the soil micromatrix.

9.6 Germination, field production, and efficiency enhancement by seed nanomaterials

Nanoscience is a new scientific innovation platform that develops approaches for a variety of low-cost nanotech applications for improved seed germination, plant growth, development, and environmental acclimation. Seed germination is a critical process of the plant cycle that determines seed production, survival, and population dynamics. However, specific parameters, including environmental conditions, genetic traits, moisture availability, and soil productivity, influence seed germination [86]. In this regard, various studies have shown the positive results of the application of nanomaterials on germination and on plant growth and production. In several crop species, such as tomato, corn, soybean, wheat, peanut, and garlic, the application of multiwall carbon nanotubes (MWCNTs) has a beneficial effect on seed germination [87,88]. Similarly, the usage of nano-SiO2, nano-TiO2, and zeolite favorably promotes seed germination in crop plants [89]. Disfani et al. [90] have discovered the important potential of Fe/SiO2 nanomaterials for enhancing the seed germination of barley and maize. Despite extensive work on the beneficial impact of nanomaterials on germination, the fundamental mechanisms for inducing germination are still unknown. A few studies have shown that nanomaterials can penetrate the seed coat and increase the absorption and utilization of water, which activates the enzyme system and eventually improves germination and planting production [90,91]. Nevertheless, the process of nanomaterial-mediated water absorption in the crop remains unclear to a large degree. Besides germination, improvements in the growth, production, and quality of several crop species, including peanut, soybean, mung bean, wheat, onion, spinach, tomato, and mustard, are recorded with nanomaterials such as ZnO, TiO2, MWCNTs, FeO, ZnFe2O4/CuO, and hydroxyfullerenes [41].
For instance, carbon nanomaterials such as OH-functionalized fullerenes (fullerols) often have positive effects on plant development. Gao et al. [92] demonstrated that fullerol supplementation improved hypocotyl development by inducing cell division in Arabidopsis. Furthermore, seed treatment with fullerol has been found not only to improve fruit number, fruit size, and final yield by up to 128% but also to raise bioactive compounds, including cucurbitacin B, lycopene, charantin, and insulin, in bitter melon fruits (Momordica charantia) [93]. Yousefzadeh and Sabaghnia found that the application of nano-iron fertilizer improved not only the agronomic traits of Dracocephalum moldavica but also its essential oil quality [94]. Similarly, the foliar application of nano-zinc-boron fertilizers has been shown to boost fruit yield and quality, including a rise of 4.4%–7.6% in total soluble solids (TSS), a decrease of 9.5%–29.1% in titratable acidity, a decrease of 20.6%–46.1% in the maturity index, and a rise of 0.28–0.62 pH units in pomegranate juice pH, without any impact on physical fruit characteristics [58]. These results demonstrated the potential for increasing crop

Advances of nanotechnology in plant development and crop protection Chapter | 9


yield and product quality with nanomaterials. Although the precise process fostering plant growth and enriched content is not obvious, it can be at least partially attributed to the capacity of nanomaterials to promote the uptake of more nutrients and water, which in turn strengthens root systems and increases enzyme activity [95,96]. In addition, nutrient studies carried out in water and soil on the slow/controlled release or loss regulation of nanofertilizers have verified that the long-term availability of the doped plant nutrients throughout the cultivation period is crucial in promoting germination, development, flowering, and fruiting [77].
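The slow/controlled-release behavior credited here with long-term nutrient availability can be illustrated with a minimal first-order release model. This is an illustrative sketch only; the rate constants are hypothetical and are not taken from the chapter or its cited studies:

```python
import math

def cumulative_release(t_days: float, k: float) -> float:
    """Fraction of encapsulated nutrient released by day t under
    first-order kinetics: f(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k * t_days)

# Hypothetical rate constants: a conventional fertilizer dissolving
# quickly vs. a nanocomposite carrier releasing slowly.
k_conventional = 0.30  # per day (assumed)
k_controlled = 0.02    # per day (assumed)

for day in (7, 30, 90):
    conv = cumulative_release(day, k_conventional)
    ctrl = cumulative_release(day, k_controlled)
    print(f"day {day:3d}: conventional {conv:.0%} released, controlled {ctrl:.0%}")
```

Under these assumed constants the conventional formulation is nearly exhausted within the first week, while the controlled-release carrier still holds most of its nutrient after a month, which is the mechanism the text credits for sustained availability over the cultivation period.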

9.7 Plant sensory systems and responses to radical climate change: influence of nanomaterials

Food security has become a problem for the growing population because of the scarce supplies available in the world as a consequence of radical climate change. Climate change manifests over time as shifts in the climate equilibrium, such as extreme weather, water shortage, wind, salinity, alkalinity, and hazardous metal emissions. The key issue is therefore ensuring that plants adjust readily to environmental pressures without affecting already-fragile habitats [5]. A multipronged approach is required: for instance, triggering the enzyme networks of plants, controlling hormones, expressing stress genes, controlling the absorption of toxic metals, and avoiding stress from water deficiency or flooding by shortening the plant life cycle. Several researchers have made efforts to establish sustainable agricultural technologies and practices that avoid adverse environmental consequences [97]. Developments in nanomaterials engineering point to the use of nanofertilizers that increase cultivation in adverse environments. Salinity stress significantly restricts crop production in around 23% of the world's agricultural lands [98]. The application of nano-SiO2 to tomato and squash plants under NaCl stress has, however, been reported to boost seed germination and increase plant fresh weight, dry weight, and chlorophyll content [99,100]. Also, Torabian et al. [101] reported a favorable response to salinity stress in sunflower cultivars given foliar sprays of iron sulfate (FeSO4) NPs.
They noted that nano-FeSO4 applications not only increased leaf area, shoot dry weight, net carbon dioxide (CO2) assimilation rate, and substomatal CO2 concentration, but also significantly decreased the amount of sodium (Na) in leaves, while increasing chlorophyll density, maximum photosystem II efficiency (Fv/Fm), and iron (Fe) content. It has also recently been shown that silicon NPs (SiNPs) can effectively ease UV-B stress in wheat [102]. A major attempt to investigate the use of nanomaterials was made by Abdel-Aziz et al. [74]. The life cycle of wheat plants receiving nanofertilizer was seen to be 23.5% shorter (130 days vs 170 days) than that of traditional fertilizer-applied plants, counting from the sowing date to yield development. The use of nanofertilizers demonstrates that such acceleration of plant growth and productivity may be an effective resource for agricultural practices in areas especially susceptible to drought, or even in areas prone to rapid flooding, in which early crop maturity is an essential factor for sustainable crop development. Furthermore, the detoxification or remediation of toxic contaminants such as heavy metals has been demonstrated with nanomaterials. For instance, Wang et al. [103] have shown that foliar application of nano-Si at 2.5 mM dramatically enhances cadmium (Cd) stress resistance in rice plants by controlling Cd accumulation. The same group demonstrated in another analysis that nano-Si also acts against Pb, Cu, and Zn in addition to Cd. Nano-Si fertilizers thus appear likely to have an advantage over conventional fertilizers in limiting heavy metal accumulation [104]. Despite the various experiments on nanomaterial-induced promotion of plant growth and stress resistance, the fundamental pathways remain largely unknown. The effect of nanomaterials on crop growth under unfavorable conditions can be explained, at least in part, by the enhanced activity of the enzyme system [105].
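The reported 23.5% shortening of the wheat life cycle is internally consistent with the quoted durations (130 days vs 170 days), as a quick check shows:

```python
# Durations reported for wheat by Abdel-Aziz et al. [74]
conventional_days = 170
nanofertilizer_days = 130

# Relative shortening of the life cycle
shortening = (conventional_days - nanofertilizer_days) / conventional_days
print(f"life cycle shortened by {shortening:.1%}")  # prints "life cycle shortened by 23.5%"
```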
For example, the use of nanomaterials such as nano-SiO2 or nano-ZnO improves the accumulation of free proline and amino acids, carbohydrate and water concentrations, and the activity of antioxidant enzymes such as superoxide dismutase, catalase, peroxidase, nitrate reductase, and glutathione reductase [106]. Nanomaterials can also control the expression of stress genes. Microarray research, for example, has shown that some genes are either upregulated or downregulated upon the application of AgNPs in Arabidopsis [107]. Genes responding to metals and oxidative stress (cation exchanger, cytochrome P450-dependent oxidase, superoxide dismutase, and peroxidase) feature prominently among the upregulated genes. The downregulated genes, however, are linked to pathogen and hormonal stimuli, including systemic tolerance, ethylene signaling, and auxin-mediated genes involved in growth and organ size [107]. These reactions to nanomaterials are of particular interest for plant stress defense [98]. The plants' response to nanofertilizers depends on the plant type, its developmental stage, and the quality of the nanomaterials used [74]. Therefore more research is required to establish the signaling cascades and genes controlled by nanomaterials in different plants before the technology reaches the farm gate.



Applications of Computational Intelligence in Multi-Disciplinary Research

9.8 Nanosensors and nanomaterials: perturbation detection and control

Nanomaterials engineering is a defining area of work on sustainable agricultural production. The use of nanomaterials in precision agriculture lowers costs and improves productivity and environmental safety. The production of nanosensors for measuring and tracking crop growth and soil condition, nutrient deficit, contamination, diseases, and the environmental entry of agrochemicals will lead to better consistency and overall health of the land, plant health, products, and environmentally friendly agriculture [9]. Biological species sense and respond to the current condition of their environment; integrating biology with nanomaterials in sensors therefore provides a wider prospect for raising the specificity, responsiveness, and speed of deficiency detection [36]. For example, nanosensors linked to the global positioning system (GPS) are used during the growing season to track fields in real time. These wireless nanosensor networks track the regulated release process of nanoscale carriers using wireless signals across the cultivated fields. This ensures that crop growth is tracked in real time and gives high-quality results, which supports best management practices by avoiding unnecessary doses of agricultural inputs [108]. The automation of irrigation systems with sensor technologies will optimize the performance of water use. Nanosensors measure soil water tension in real time, coupled with autonomous irrigation regulation in water-restricted situations [109]. Likewise, fast and precise insect or pathogen identification will aid in protecting the crop from infestation through the early application of pesticides or fertilizers. In this connection, Afsharinejad et al. [14] built wireless nanosensors to detect insect attacks. These sensors differentiate the organic compounds released by different insect types across a variety of host plant species. Singh et al.
[110] have shown the efficacy of a nanogold-based immunosensor in the identification of Karnal bunt disease in wheat plants. In addition, the production of bionic plants in nanobiotechnology, by injecting NPs into the cells and chloroplasts of living plants to sense or image objects in their environment, to communicate as infrared instruments, or even to act as self-powered light sources, has tremendous potential for precision farming [106,111]. For example, Giraldo et al. [112] recorded the in vivo incorporation of single-walled carbon nanotubes (SWCNTs), which was found to increase the electron transfer rate of light-adapted chloroplasts by 49%. They also showed that SWCNTs contributed light-harvesting capability in the near-infrared fluorescence range and suppressed the production of reactive oxygen species in the chloroplast, which influences the plant-sensing cycle. Advances in nanobionic crop enhancement methods and environmental control thus open a new window in the study of functional plant-nanomaterial hybrids [112,113]. Fang et al. [114] have recently established a simple, label-free, dual-functional method based on upconversion NPs for the fluorescence detection of acetylcholinesterase (AChE) activity and Cd2+ contamination in real water samples. Substantial progress has already been made in tracking and measuring toxins, e.g., pesticide contaminants. For example, photosystem II-based biosensors can bind many pesticide groups and track these chemicals. Such nanosensors provide a simple and low-cost technology to identify different pesticides, along with a range of organic contaminants, before their release into the agricultural environment [22]. Certainly, the smart use of nanosensors in agriculture is an important way to ensure sustainable production by tracking crop and soil quality.
However, given the vast research record in this area, convincing applications of nanosensors, especially in field trials, remain surprisingly scarce.
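The coupling of real-time soil-water-tension sensing with autonomous irrigation regulation described above can be sketched as a simple threshold controller. Everything here is a hypothetical illustration (the class, thresholds, and units are assumptions, not from the chapter or any cited sensor system); real set-points depend on crop and soil type:

```python
from dataclasses import dataclass

# Hypothetical soil-water-tension thresholds in centibars (cb).
TENSION_START_IRRIGATION = 60  # soil drying out -> open valve
TENSION_STOP_IRRIGATION = 20   # soil wet enough -> close valve

@dataclass
class IrrigationController:
    """Hysteresis controller acting on a (hypothetical) nanosensor reading."""
    valve_open: bool = False

    def update(self, tension_cb: float) -> bool:
        # Open only when the soil is clearly dry, close only when clearly
        # wet; readings between the thresholds leave the valve unchanged.
        if tension_cb >= TENSION_START_IRRIGATION:
            self.valve_open = True
        elif tension_cb <= TENSION_STOP_IRRIGATION:
            self.valve_open = False
        return self.valve_open

ctrl = IrrigationController()
for reading in (15, 45, 65, 40, 18):
    print(f"tension {reading} cb -> valve open: {ctrl.update(reading)}")
```

The two-threshold (hysteresis) design is a standard control choice: it prevents the valve from chattering open and closed when readings hover near a single set-point.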


9.9 Pesticide-based plant protection nanomaterials

Nanotechnology assistance in plant protection products has grown dramatically as a way to improve crop yield. Standard plant protection methods, by contrast, rely on broad and excessive application of fungicides, herbicides, and insecticides. More than 90% of distributed pesticides are lost to the atmosphere or fail to reach the destinations required for efficient pest management [115]. This not only raises food production costs but also degrades natural resources. To guarantee improved protection of crops from pest invasion and eventual crop destruction, it must be ensured that the active material is present at the minimum effective concentration of the formulation at the target sites. The creation of new plant protection formulations has accordingly been a very active area of agricultural research. Among these technologies are the nanoformulation and/or nanoencapsulation of pesticides, which have revolutionized the plant protection field. A pesticide nanoformulation contains particles in the nanometer size range, or other engineered nanostructures, that retain the valuable pesticidal properties [116]. Pesticide nanoencapsulation is the coating of pesticides with certain materials of varying scales in nano-sized regions, in which the encapsulated material is referred to as the internal phase or core substance (the pesticide) and the capsule material is referred to as the external phase, that is, the coating nanomaterial [115]. In addition, minimizing the use of agrochemicals to prevent soil degradation and harm to nontarget biodiversity is a basic criterion of sustainability in agriculture; reduced application of chemicals also reduces the risks associated with agriculture. The global annual agricultural crop losses from plant diseases, insect pests, and weeds are projected at $2 trillion, and



FIGURE 9.3 Uses of nanoparticles in plant protection.

fungicide formulations alone cost more than $600 million in the United States in terms of pathogen control efforts [74,110]. The use of NPs is documented to be an effective alternative in these circumstances for directly eradicating pathogenic infection and disease, which contributes to increased crop growth and yields [96]. Halloysites, for example, are a form of clay nanotube used as low-cost pesticide carriers in farming. Such nanotubes not only display extended active ingredient (AI) release times but also provide a stronger interaction with minimal environmental effect. The possible use of engineered nanomaterials for disease and weed control in agriculture is established (Fig. 9.3). Inorganic NPs, including Ag, Cu, SiO2, and TiO2, perform important roles in defending against microbial and bacterial disease in different plant health arenas [117]. For example, ZnO NPs have recently been shown to provide effective growth control of Fusarium graminearum, Penicillium expansum, Alternaria alternata, F. oxysporum, Rhizopus stolonifer, Mucor plumbeus, and A. flavus, as well as the pathogenic bacterium Pseudomonas aeruginosa [118]. Compared to currently available nonnano Cu formulations, nano-Cu treatment was found to be more successful against Phytophthora infestans in tomatoes [119]. In addition, SiO2 and TiO2 were identified as promising for specifically suppressing crop diseases through their antimicrobial activity. NPs inhibit the growth of fungal conidia and conidiophores, eventually causing fungal hyphae to die. Weeds are often seen as a significant challenge to the world's farming industry because they compete with crops for nutrients, water, and light. The application of engineered nanomaterials loaded with herbicides, however, offers an environmentally safe approach.
Sharifi-Rad et al. [120], for example, demonstrated a substantial reduction in germination, root and shoot lengths, fresh and dry weights, photosynthetic pigments, and total protein in weeds exposed to SiO2 NPs. Similarly, Kumar et al. [121] have shown that herbicide-loaded pectin (polysaccharide) NPs are toxic to target plants both in the laboratory and in the field, and that only very small amounts of AI are needed in comparison with the commercial herbicide. Commercial herbicides typically control or destroy the top sections of weeds without damaging the bottom sections, such as rhizomes or tubers. As a consequence, the weeds grow back, whereas nanoherbicides prevent weeds from regenerating. There is thus immense scope for nanomaterial-based pesticides, fungicides, and herbicides in sustainable farming.


9.10 Nanotechnology in pesticides and fertilizers

Sustainable agriculture is needed today; it may be understood as an approach that is good for the ecosystem in the long run. Practices that can cause long-term damage to soil include excessive tilling of the soil, which leads to erosion, and



irrigation without needed drainage. All of this must be balanced against satisfying human needs for food, animal feed, and fiber. Recent developments in agriculture cover the application of NPs for more effective and safer use of chemicals on plants. Several researchers have reported the effects of different NPs on plant growth and phytotoxicity: magnetite (Fe3O4) NPs on plant growth [111]; alumina, zinc, and zinc oxide on seed germination and root growth of five higher plant species (radish, rape, lettuce, corn, and cucumber); silver NPs on seedling growth in wheat [122]; sulfur NPs on tomato [123]; zinc oxide on mung bean; and NPs of AlO, CuO, FeO, MnO, NiO2, and ZnO [121]. Crop yield and product quality can be affected by Zn deficiency. The development of insecticide resistance in pest insects has been an increasing problem for agriculture and public health. Accordingly, various techniques and routes for the synthesis of MgO NPs have been reported [124]. Magnesium hydroxide NPs have been synthesized by green methods using nontoxic neem leaf extract [125], citrus lemon leaf extract, and acacia gum [126].


9.11 Control of plant pests

Disease can be reduced to some extent with the use of resistant cultivars and chemicals. However, the occurrence and development of new pathogenic races is a continuing problem, and the use of chemicals is expensive and not always effective. In recent years, the use of nanomaterials has been considered as an alternative solution for controlling plant pathogens. Ghidan et al. [127] synthesized NPs of magnesium oxide (MgO) and tested the effect of different concentrations on the green peach aphid under greenhouse conditions [128–131]. The synthesis of nanomaterials of copper oxide (CuO), zinc oxide (ZnO), magnesium hydroxide (Mg(OH)2), and magnesium oxide (MgO) has been carried out successfully by using aqueous extracts of Punica granatum peels, Olea europaea leaves, and Chamaemelum nobile flowers [132]. Nanomaterials prepared by eco-friendly, green methods may increase the potential of agriculture by improving fertilization, plant growth, and pesticides. In addition, this technology minimizes the quantity of harmful chemicals released to pollute the environment [133]. Moreover, control of such pests is becoming increasingly difficult because of the development of resistance in aphid individuals to chemical insecticides such as carbamates, organophosphates, and pyrethroids [134]. Nanomaterials such as copper oxide (CuONPs), zinc oxide (ZnONPs), magnesium hydroxide (MgOHNPs), and magnesium oxide (MgONPs) have been synthesized using different physical and chemical methods [135]. CuONPs are synthesized through different methods [136] such as precipitation [137] and chemical reduction [138]. Many plant aqueous extracts have also been used, such as citrus lemon juice [139] and carob leaves [140]. Applications have led many researchers to develop different ways to synthesize ZnONPs, including the chemical route [141], the precipitation method [142], hydrolysis in polar organic solvents [143], and microwave synthesis [144].


9.12 Concluding remarks

Nanotechnology has been used in agriculture to improve agricultural productivity by enhancing the efficiency of agricultural processes, as seen in Fig. 9.4. In the context of sustainable agriculture, the advent of engineered nanomaterials and their applications has radically transformed world agriculture, which must expand enormously to satisfy the growing demand for food. Environmental protection from emissions is the main priority in sustainable agriculture, and nanomaterials help ensure that plant production resources are properly managed and conserved. The capabilities of nanomaterials are fostering a modern and ecological transition. Our understanding of the absorption, acceptable limits, and ecotoxicity of various nanomaterials, however, is still very limited [111]. More work is therefore urgently needed to uncover the behavior, fate, and interaction of modified agricultural inputs with biomacromolecules in living systems and the environment. Nanotechnology applications are currently being researched, tested, and, in some cases, already applied across the entire spectrum of food technology, from agriculture to food processing, packaging, and food supplements. Meanwhile, the development of insecticide resistance in pest insects remains an increasing problem for agriculture and public health, and agricultural practices usually include the systematic application of a wide array of active compounds at variable dosages and frequencies.

Consent for publication Not applicable.



FIGURE 9.4 Simplified overview of potential applications of nanomaterials in sustainable agriculture production.

Conflict of interest The authors declare no conflict of interest, financial or otherwise.

References
[1] R. Prasad, A. Bhattacharyya, Q.D. Nguyen, Nanotechnology in sustainable agriculture: recent developments, challenges, and perspectives, Frontiers in Microbiology (2017).
[2] M. Lv, et al., Engineering nanomaterials-based biosensors for food safety detection, Biosensors and Bioelectronics (2018).
[3] L. Lipper, et al., Climate-smart agriculture for food security, Nature Climate Change (2014).
[4] Y.W. Chen, H.V. Lee, J.C. Juan, S.M. Phang, Production of new cellulose nanomaterial from red algae marine biomass Gelidium elegans, Carbohydrate Polymers (2016).
[5] S.J. Vermeulen, et al., Options for support to agriculture and food security under climate change, Environmental Science and Policy (2012).
[6] M. Kitching, M. Ramani, E. Marsili, Fungal biosynthesis of gold nanoparticles: mechanism and scale up, Microbial Biotechnology (2015).
[7] A. Gogos, K. Knauer, T.D. Bucheli, Nanomaterials in plant protection and fertilization: current state, foreseen applications, and research priorities, Journal of Agricultural and Food Chemistry (2012).
[8] D.H. Kim, J. Gopal, I. Sivanesan, Nanomaterials in plant tissue culture: the disclosed and undisclosed, RSC Advances (2017).
[9] J.S. Duhan, et al., Nanotechnology: the new perspective in precision agriculture, Biotechnology Reports (2017).
[10] W. Zhang, S. Ronca, E. Mele, Electrospun nanofibres containing antimicrobial plant extracts, Nanomaterials (2017).
[11] Y. Ran, Z. Liang, C. Gao, Current and future editing reagent delivery systems for plant genome editing, Science China Life Sciences (2017).
[12] J.B. Miller, et al., Non-viral CRISPR/Cas gene editing in vitro and in vivo enabled by synthetic nanoparticle co-delivery of Cas9 mRNA and sgRNA, Angewandte Chemie - International Edition (2017).
[13] S.Y. Kwak, M.H. Wong, et al., Nanosensor technology applied to living plant systems, Annual Review of Analytical Chemistry (2017).
[14] A. Afsharinejad, A. Davy, B. Jennings, C. Brennan, Performance analysis of plant monitoring nanosensor networks at THz frequencies, IEEE Internet of Things Journal (2016).


[15] J.C. Love, et al., Self-assembled monolayers of thiolates on metals as a form of nanotechnology, Chemical Reviews (2005).
[16] K.B. Narayanan, N. Sakthivel, Biological synthesis of metal nanoparticles by microbes, Advances in Colloid and Interface Science (2010).
[17] T.J. Park, K.G. Lee, S.Y. Lee, Advances in microbial biosynthesis of metal nanoparticles, Applied Microbiology and Biotechnology (2016).
[18] S. Iravani, Green synthesis of metal nanoparticles using plants, Green Chemistry (2011).
[19] A.O. Adesemoye, W.K. Joseph, Plant-microbes interactions in enhanced fertilizer-use efficiency, Applied Microbiology and Biotechnology (2009).
[20] V.V. Makarov, et al., "Green" nanotechnologies: synthesis of metal nanoparticles using plants, Acta Naturae (2014).
[21] V. Ghormade, V.D. Mukund, M.P. Kishore, Perspectives for nano-biotechnology enabled protection and nutrition of plants, Biotechnology Advances (2011).
[22] D.P. Singh, H.B. Singh, R. Prabha, Microbial Inoculants in Sustainable Agricultural Productivity: vol. 2, Springer, New Delhi, 2016.
[23] R. Sanghi, et al., Advances and applications through fungal nanobiotechnology, Applied Microbiology and Biotechnology (2016).
[24] J.N. Tiwari, N.T. Rajanish, K.S. Kim, Zero-dimensional, one-dimensional, two-dimensional and three-dimensional nanostructured materials for advanced electrochemical energy devices, Progress in Materials Science (2012).
[25] B. Mendoza-Sánchez, Y. Gogotsi, Synthesis of two-dimensional materials for capacitive energy storage, Advanced Materials (2016).
[26] P.V. Kamat, Quantum dot solar cells. Semiconductor nanocrystals as light harvesters, Journal of Physical Chemistry C (2008).
[27] R.A.J. Janssen, W.S. Jan, Red, green, and blue quantum dot LEDs with solution processable ZnO nanocrystal electron injection layers, Journal of Materials Chemistry (2008).
[28] W. Lee, et al., TiO2 nanotubes with a ZnO thin energy barrier for improved current efficiency of CdSe quantum-dot-sensitized solar cells, Nanotechnology (2009).
[29] Y. Xia, et al., One-dimensional nanostructures: synthesis, characterization, and applications, Advanced Materials (2003).
[30] R.S. Devan, R.A. Patil, J.H. Lin, Y.R. Ma, One-dimensional metal-oxide nanostructures: recent developments in synthesis, characterization, and applications, Advanced Functional Materials (2012).
[31] M. Ge, et al., A review of one-dimensional TiO2 nanostructured materials for environmental and energy applications, Journal of Materials Chemistry A (2016).
[32] D. Pradhan, S. Sindhwani, K.T. Leung, Parametric study on dimensional control of ZnO nanowalls and nanowires by electrochemical deposition, Nanoscale Research Letters (2010).
[33] Q. Wei, et al., Porous one-dimensional nanomaterials: design, fabrication and applications in electrochemical energy storage, Advanced Materials (2017).
[34] J.J. Lv, et al., A simple one-pot strategy to platinum-palladium@palladium core-shell nanostructures with high electrocatalytic activity, Journal of Power Sources (2014).
[35] N. Sozer, L.K. Jozef, Nanotechnology and its applications in the food sector, Trends in Biotechnology (2009).
[36] A. Dubey, R.M. Damodhara, Nanofertilisers, Nanopesticides, Nanosensors of Pest and Nanotoxicity in Agriculture, 2016.
[37] J. Jampílek, K. Kráľová, Application of nanotechnology in agriculture and food industry, its prospects and risks, Ecological Chemistry and Engineering S (2015).
[38] A. Narayanan, P. Sharma, B.M. Moudgil, Applications of engineered particulate systems in agriculture and food industry, KONA Powder and Particle Journal (2012).
[39] H. Yang, et al., Effects of ditch-buried straw return on water percolation, nitrogen leaching and crop yields in a rice-wheat rotation system, Journal of the Science of Food and Agriculture (2016).
[40] R. Nair, et al., Nanoparticulate material delivery to plants, Plant Science (2010).
[41] S. Sabir, M. Arshad, S.K. Chaudhari, Zinc oxide nanoparticles for revolutionizing agriculture: synthesis and applications, Scientific World Journal (2014).
[42] T.R. Shojaei, et al., Applications of nanotechnology and carbon nanoparticles in agriculture, Synthesis, Technology and Applications of Carbon Nanomaterials (2018).
[43] J. Costa, C. Novillo, Regulatory approvals of GM plants (insect resistant) in European agriculture: perspectives from industry, in: Arthropod-Plant Interactions: Novel Insights and Approaches for IPM, 2012.
[44] G. Pandey, Challenges and future prospects of agri-nanotechnology for sustainable agriculture in India, Environmental Technology and Innovation (2018).
[45] M. Nuruzzaman, M.M. Rahman, Y. Liu, R. Naidu, Nanoencapsulation, nano-guard for pesticides: a new window for safe application, Journal of Agricultural and Food Chemistry (2016).
[46] M. Zhang, et al., Slow-release fertilizer encapsulated by graphene oxide films, Chemical Engineering Journal (2014).
[47] R. Singh, K.P. Singh, S.P. Singh, Nanotechnology and its applications in agriculture, in: Engineering Practices for Agricultural Production and Water Conservation: An Interdisciplinary Approach, 2017.
[48] F. Torney, G.T. Brian, V.S.Y. Lin, K. Wang, Mesoporous silica nanoparticles deliver DNA and chemicals into plants, Nature Nanotechnology (2007).
[49] A.B. Britt, Molecular genetics of DNA repair in higher plants, Trends in Plant Science (1999).
[50] L. Klerkx, E. Jakku, P. Labarthe, A review of social science on digital agriculture, smart farming and agriculture 4.0: new contributions and a future research agenda, NJAS—Wageningen Journal of Life Sciences (2019).
[51] P.S. Vijayakumar, O.U. Abhilash, B.M. Khan, B.L.V. Prasad, Nanogold-loaded sharp-edged carbon bullets as plant-gene carriers, Advanced Functional Materials (2010).



[52] S.C. Bhatia, Nanotechnology in agriculture and food industry, in: Food Biotechnology (2019).
[53] X. Zhang, J. Zhang, K.Y. Zhu, Chitosan/double-stranded RNA nanoparticle-mediated RNA interference to silence chitin synthase genes through larval feeding in the African malaria mosquito (Anopheles gambiae), Insect Molecular Biology (2010).
[54] S. Agarwal, U. Lenka, An exploratory study on the development of women entrepreneurs: Indian cases, Journal of Research in Marketing and Entrepreneurship (2016).
[55] R. Mout, et al., Direct cytosolic delivery of CRISPR/Cas9-ribonucleoprotein for efficient gene editing, ACS Nano (2017).
[56] C. Li, Y. Li, Y. Li, G. Fu, Cultivation techniques and nutrient management strategies to improve productivity of rain-fed maize in semi-arid regions, Agricultural Water Management (2018).
[57] M. Huang, et al., Soil testing at harvest to enhance productivity and reduce nitrate residues in dryland wheat production, Field Crops Research (2017).
[58] S. Davarpanah, et al., Effects of foliar applications of zinc and boron nano-fertilizers on pomegranate (Punica granatum cv. Ardestani) fruit yield and quality, Scientia Horticulturae (2016).
[59] R.V. Mathew, N. Panchanatham, An exploratory study on the development of women entrepreneurs: Indian cases, Journal of Research in Marketing and Entrepreneurship 18 (2) (2016) 232–247.
[60] M.D. Nuruzzaman, M.M. Rahman, Y. Liu, R. Naidu, et al., Nanoencapsulation, nano-guard for pesticides: a new window for safe application, Journal of Agricultural and Food Chemistry 64 (7) (2016) 1447–1483.
[61] P.K. Archana, S.N. Gawade, Studies on nanoparticle induced nutrient use efficiency of fertilizer and crop productivity, Green Chemistry & Technology Letters (2016).
[62] S.X. Li, Z.H. Wang, Y.F. Miao, S.Q. Li, Soil organic nitrogen and its contribution to crop production, Journal of Integrative Agriculture (2014).
[63] M.I. Rashid, et al., Bacteria and fungi can contribute to nutrients bioavailability and aggregate formation in degraded soils, Microbiological Research (2016).
[64] M.G. Bublitz, et al., Food access for all: empowering innovative local infrastructure, Journal of Business Research (2019).
[65] K.S. Subramanian, A. Manikandan, M. Thirunavukkarasu, C.S. Rahale, Nano-fertilizers for balanced crop nutrition, in: Nanotechnologies in Food and Agriculture (2015).
[66] M. Rai, C. Ribeiro, L. Mattoso, N. Duran, Nanotechnologies in Food and Agriculture (2015).
[67] A. Sabir, et al., Vine growth, yield, berry quality attributes and leaf nutrient content of grapevines as influenced by seaweed extract (Ascophyllum nodosum) and nanosize fertilizer pulverizations, Scientia Horticulturae (2014).
[68] Fertilizer markets respond to agricultural demand growth, International Bulk Journal (1997).
[69] H.C.J. Godfray, et al., Food security: the challenge of feeding 9 billion people, Science (2010).
[70] P. Shukla, et al., Nanotechnology in sustainable agriculture: studies from seed priming to post-harvest management, Nanotechnology for Environmental Engineering (2019).
[71] Z.H. Wang, Y.F. Miao, S.X. Li, Effect of ammonium and nitrate nitrogen fertilizers on wheat yield in relation to accumulated nitrate at different depths of soil in drylands of China, Field Crops Research (2015).
[72] Y.F. Miao, Z.H. Wang, S.X. Li, Relation of nitrate N accumulation in dryland soil with wheat response to N fertilizer, Field Crops Research (2015).
[73] M. Eswaran, A. Kotwal, The role of agriculture in development, in: Understanding Poverty, 2006.
[74] H.M.M. Abdel-Aziz, M.N.A. Hasaneen, A.M. Omer, Nano chitosan-NPK fertilizer enhances the growth and productivity of wheat plants grown in sandy soil, Spanish Journal of Agricultural Research (2016).
[75] G. Millán, et al., Use of clinoptilolite as a carrier for nitrogen fertilizers in soils of the pampean regions of Argentina, Ciencia e Investigacion Agraria (2008).
[76] M. Saghir Khan, A. Zaidi, P.A. Wani, Role of phosphate solubilizing microorganisms in sustainable agriculture, in: Microbes in Sustainable Agriculture, 2009.
[77] A. Lateef, et al., Synthesis and characterization of zeolite based nano-composite: an environment friendly slow release fertilizer, Microporous and Mesoporous Materials (2016).
[78] V.D. Fageria, Nutrient interactions in crop plants, Journal of Plant Nutrition (2001).
[79] A. Šalić, A. Tušek, B. Zelić, Application of microreactors in medicine and biomedicine, Journal of Applied Biomedicine (2012).
[80] E. Mastronardi, et al., Strategic role of nanotechnology in fertilizers: potential and limitations, in: Nanotechnologies in Food and Agriculture, 2015.
[81] C.M. Monreal, et al., Nanotechnologies for increasing the crop use efficiency of fertilizer-micronutrients, Biology and Fertility of Soils (2016).
[82] J.F. Angus, A.F. Van Herwaarden, Increasing water use and water use efficiency in dryland wheat, Agronomy Journal (2001).
[83] C.K. Das, et al., Nano-iron pyrite seed dressing: a sustainable intervention to reduce fertilizer consumption in vegetable (beetroot, carrot), spice (fenugreek), fodder (alfalfa), and oilseed (mustard, sesamum) crops, Nanotechnology for Environmental Engineering (2016).
[84] D. Cai, et al., Controlling nitrogen migration through micro-nano networks, Scientific Reports (2015).
[85] A. Joshi, et al., Multi-walled carbon nanotubes applied through seed-priming influence early germination, root hair, growth and yield of bread wheat (Triticum aestivum L.), Journal of the Science of Food and Agriculture (2018).
[86] K.M. Manjaiah, et al., Clay minerals and zeolites for environmentally sustainable agriculture, in: Modified Clay and Zeolite Nanocomposite Materials, 2019.


Applications of Computational Intelligence in Multi-Disciplinary Research


Advances of nanotechnology in plant development and crop protection Chapter | 9




Chapter 10

A methodology for designing knowledge-based systems and applications

Hien D. Nguyen¹,², Nhon V. Do³ and Vuong T. Pham⁴

¹University of Information Technology, Ho Chi Minh City, Vietnam; ²Vietnam National University, Ho Chi Minh City, Vietnam; ³Hong Bang International University, Ho Chi Minh City, Vietnam; ⁴Sai Gon University, Ho Chi Minh City, Vietnam



10.1 Introduction

Knowledge is the human understanding of different areas of expertise, whether general or specialized: knowledge of mathematics in general or of specialty subjects such as plane geometry, solid geometry, and analytic geometry; knowledge of general physics or of specialty subjects such as alternating/direct current and mechanics; and knowledge about sicknesses and diseases in medicine. Knowledge often includes abstract components together with diverse connections and complicated relations between those components [1,2]. The most commonly seen components of knowledge are concepts, relations, operators, functions, facts, and rules; the conceptual knowledge components and the relations between them are the foundation of a body of knowledge [3,4]. Knowledge also includes experiences and regulations that support the process of reasoning and of solving problems related to the understanding of a certain subject or an area of that subject [5,6].

Knowledge of solid geometry in high school [7] gives a detailed example of knowledge with complicated components. First, there are three basic concepts that form the foundation: point, straight line, and plane. These basic concepts are then used to identify or construct the other objects of geometry, including rays, straight segments, angles, triangles and specialized triangles, quadrilaterals and specialized quadrilaterals, circumferences and arcs, pyramids, cylinders, cubes, and spheres. Between the concepts, there is one special relationship called "is a," also known as a hierarchy on the set of concepts. Besides the hierarchy of concepts, there are other relations between concepts, including the equality relation, comparative relations on values, the parallel relation, the perpendicular relation, and the similarity relation. Almost all of these relations are binary relations, and each relation has different properties, such as symmetry, asymmetry, and transitivity.
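As a rough sketch (all names and data here are illustrative, not taken from the chapter's actual knowledge base), the "is a" hierarchy and a transitive binary relation such as parallelism can be modeled directly:

```python
# Sketch: an "is a" concept hierarchy and a transitive binary relation
# (parallelism), as in the solid-geometry example. Illustrative only.

IS_A = {
    "equilateral triangle": "triangle",
    "triangle": "polygon",
    "square": "quadrilateral",
    "quadrilateral": "polygon",
}

def is_a(concept, ancestor):
    """Walk the hierarchy upward to test the 'is a' relation."""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == ancestor:
            return True
    return False

def transitive_closure(pairs):
    """Close a binary relation (e.g. 'parallel') under transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

parallel = transitive_closure({("d1", "d2"), ("d2", "d3")})
print(is_a("equilateral triangle", "polygon"))  # True
print(("d1", "d3") in parallel)                 # True
```

A symmetric relation would be closed analogously by adding `(b, a)` for each `(a, b)`; the point is that relation properties become computable facts rather than prose.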
In solid geometry, there are functional knowledge components, including functions such as the (perpendicular) projection of a point on a line or a plane, the intersection of two lines or of two planes, the distance between a point and a line or a plane, and the distance between two parallel lines or between two parallel planes. There are operators for vectors, such as the addition of two vectors, the scalar product of two vectors, and the multiplication of a vector by a number. Based on the above knowledge components of solid geometry, there are many rules (theorems, properties, formulas, etc.), such as theorems on the elements of a triangle or a quadrilateral, the Pythagorean theorem, the theorem of equal triangles, and the theorem of three perpendicular lines.

A knowledge-based system (KBS) is a system that uses techniques of artificial intelligence in problem-solving processes to support human action, decision-making, and learning [1,5,6]. Such a system helps in managing knowledge of different knowledge domains on the computer and in solving knowledge-related problems; more advanced systems have the capability of reasoning or inference for dealing with complex and abstract problems [3,8]. The system usually has an organization that stores knowledge for solving application requirements or problems; this knowledge organization is called the knowledge base (KB) of the system [1,2,9]. KBSs can be classified by aspects with certain specific standards, such as openness, knowledge representation, application objectives, architecture and operation of the system, or the application domain [1,9,10]. A closed KBS has a KB built with some initial "field knowledge," and it is only able to use that knowledge during its operation or lifetime. An open KBS is capable of adding knowledge during operation, often with a knowledge discovery




module based on machine learning methods. Combined KBSs have features that result from a combination of closed and open systems, a combination of a KB and a database, a combination of one KB with another, etc.

This chapter is designed to help users develop an appreciation for KBSs and their architecture. The proposed method covers a broad variety of knowledge-based techniques for supporting decision-making and planning. The chapter presents a KB design process and an inference engine (IE) for building intelligent systems used in education and engineering. To illustrate the effectiveness of this design method, it is used to build an intelligent support system that solves high-school solid geometry problems [11] and a consultancy system for housing architecture based on the regulations of architecture and construction [12].

The next section presents some related work on building intelligent systems based on a KB. Section 10.3 describes the general process for constructing a KBS. Section 10.4 proposes the detailed process for designing the components of the KB and the IE of a KBS, along with some basic knowledge manipulations used in those processes. Section 10.5 presents the application of the proposed method to build two supporting systems in the domains of learning and construction: an intelligent problem solver for high-school solid geometry and a consultancy system for designing housing architecture based on the regulations of architecture. The last section summarizes the main results of this chapter and presents some future work to improve those results.


10.2 Related work

At present, intelligent systems applied in practical fields require a KB design that fully represents the knowledge of the field and supports solving both common problems and problems that require the intensive application of knowledge [3,13]. Therefore a method for designing and organizing the KB is essential to the process of building intelligent systems [14]. Besides, to solve knowledge problems, the system also needs a strong IE that can grasp and solve the problems [15].

The configuration of a KBS includes a set of concepts (objects) in the knowledge domain and their properties, a set of relations between the domain concepts, and a task specification describing the demands that a created configuration has to accomplish [16]. There are many studies on building KBs for intelligent systems. Some methods that have been used for KB configuration are rule-based systems [17], concept hierarchies [18,19], structure- and constraint-based approaches [20], and model-based systems [15,21,22]. However, those methods have not yet been applied in practical systems.

The designing of a KBS involves three stages: (1) determining the design task, (2) constructing and initializing the design process model, and (3) executing the design process model [9,23]. This method can create an intelligent system with a certain KB. Nevertheless, it is too general; it is not really effective for constructing a system applied in a practical knowledge domain.

Yoshizumi et al. [10] have studied some methods to construct dynamic KBSs. Human activities have dynamic aspects corresponding to the working environment, and each method is used to design for a type of human activity. The knowledge in this study has an element called "tacit knowledge." Tacit knowledge, also called implicit knowledge, is the kind of knowledge that is expressed when solving problems of the knowledge domain [24]; it is also a kind of experiential (heuristic) knowledge. However, those results did not give a general process for building a usable KBS.

Bobadilla et al. [25] have proposed some methods for designing a kind of intelligent system known as the recommendation system. Memory-based methods [26,27] use similarity metrics to evaluate the distance between two objects and rate items through the k-nearest neighbors (kNN) approach [28,29]. Model-based methods use the information to create a model that generates the results of the system [22,30,31]. However, those results did not present a full process for building a complete intelligent system, from collecting data or knowledge to constructing and testing a practical program; in particular, they did not present how to organize the KB and IE of those systems.

A method for designing a KBS has to include solutions for organizing its KB and designing its IE. Those constructed components need to satisfy some criteria for application [13,32], such as the KB's completeness and consistency and an inference process that simulates the human method of problem-solving [15,33]. Besides, the method must also ensure the usability of the created system: it can solve practical problems within the applied scope.
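To make the memory-based idea concrete, here is a minimal sketch (the ratings data and function names are invented for illustration, not taken from the cited works) that rates an item for a user by averaging its k nearest neighbors under cosine similarity:

```python
import math

# Sketch of a memory-based (kNN) recommender: predict a rating for a
# target user from the k most similar users. Data is illustrative only.

ratings = {                       # user -> {item: rating}
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "c": 5},
    "u3": {"a": 1, "b": 5, "c": 2},
}

def cosine(r1, r2):
    """Cosine similarity over the items both users rated."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = math.sqrt(sum(r1[i] ** 2 for i in common))
    n2 = math.sqrt(sum(r2[i] ** 2 for i in common))
    return dot / (n1 * n2)

def predict(user, item, k=2):
    """Predict user's rating of item from the k nearest neighbors."""
    neighbors = sorted(
        (u for u in ratings if u != user and item in ratings[u]),
        key=lambda u: cosine(ratings[user], ratings[u]),
        reverse=True,
    )[:k]
    return sum(ratings[u][item] for u in neighbors) / len(neighbors)

print(predict("u1", "a"))  # 2.5
```

A model-based method would instead fit a compact model (e.g., a factorization) offline and query it, rather than scanning the ratings memory at prediction time.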


10.3 Design the knowledge-based system

10.3.1 The architecture of a knowledge-based system

Architecturally, according to [32,33], a KBS can include components such as the KB, the IE, the interface, the explanation module, the knowledge manager module, and the working memory; the system serves two types of users, who mainly are



normal users or knowledge engineers. Fig. 10.1 is a common architectural diagram of a KBS, such as an expert system or an intelligent problem-solving system.

Among the system's components in the above diagram, the KB and the IE are the two core components. The KB contains the knowledge of the applied knowledge domain as the foundation for computational inference to solve the problems or requirements posed to the system. The knowledge needs to be represented and stored in a readable, appropriate form and used by an IE to perform automatic inference in order to find solutions to problems, especially problems in general forms. The KB stores many knowledge components, such as concepts and entities, relations, functions, operators, facts, and inference rules. The design of the KB requires the designer to have a firm grasp of the methods of knowledge representation in order to apply, modify, and research and develop those representational methods.

The IE implements problem-solving or inference strategies and finds answers to the questions, based on the knowledge in the KB. The IE plays the role of demonstrating the system's ability to "think," simulating the human ability to think and solve problems. Technically, at each step the IE must find the appropriate rules and apply them to generate new facts (in the forward-chaining strategy) or new goals (in the backward-chaining strategy).

There must be relative independence between the KB and the IE, meaning they are not bound organically: with the adjustment of either or both components, the system can still operate normally. The separation of knowledge from control mechanisms makes it easier to add new knowledge while developing a program. In this respect, the IE of a KBS resembles the human brain (processing control), which is consistent and does not change even though the individual's behavior changes according to newly acquired experience and knowledge.

Suppose an expert uses traditional programs to assist with day-to-day tasks; changing the program's behavior requires them to know how to modify the program. In other words, the expert must be a professional programmer. This limitation is solved when experts use a KBS. In KBSs, knowledge is expressed explicitly, not in a hidden form as in traditional programs. Therefore it is possible to change the KB, and the IE will work on the updated knowledge to fulfill the new requests of the experts [13]. The separation between the KB and the IE is an important criterion for the following reasons:


- The separation of the problem-solving knowledge from the IE helps the representation of knowledge operate in a more natural way, closer to human conception. The designers of intelligent systems can focus on capturing and organizing the KB rather than going into the details of its installation on a computer.

FIGURE 10.1 Architectural diagram of a knowledge-based system.





- This separation enhances the modularization of the KB, the IE, and the knowledge-updating system. The addition or removal of a part of the knowledge will not have side effects on the other components of the system.
- The same control and communication strategy can be used for many different systems.
- The separation of the KB and the IE also makes it possible to test a variety of control strategies on the same KB.

The basic problem-solving operation of the system is as follows. The problem is entered into the system through the interface, and the IE analyzes the problem and records the facts (hypothesis and goals) into the working memory. Following that, the IE performs the inference strategies for finding the answer or solution to the problem. Throughout the reasoning process, the IE looks for rules in the KB to apply in finding the solution. For a system that solves plane geometry problems, for example, the system has a KB with the concepts of plane geometry together with their relations, properties, theorems, and formulas. The user enters the problem into the system, including the hypothesis and the conclusions (or requirements). To find a solution, the IE performs an inference process that looks for theorems, formulas, etc. to apply in the search for the answer. The solution of the problem is then exported to the user.

Learning-aid applications also need instruction systems that solve problems, such as a problem-solving instruction system for plane geometry. In this system, in addition to the KB and the IE, there is also a module that guides how to solve exercises. These modules simulate the activities of the teacher when teaching students to solve problems; they operate based on constructive algorithms, search algorithms, and solution-optimization algorithms. The act of teaching students how to solve math problems demonstrates intelligent characteristics such as pedagogy and interactivity, which help to create an interest in learning and to stimulate the creative thinking of students. An application instructing the solving of plane geometry problems needs to meet these basic requirements:



- The KB of the system stores all the knowledge and information of the knowledge domain of plane geometry, including an intellectual repository and exercises that are categorized and linked with the KB [34].
- There are features to support solving geometry exercises that ensure a number of requirements, such as pedagogical psychology, interactivity, scientific properties, use of a KB suitable for high-school plane geometry, and a friendly interface [13,34].
- There is a search function to look up the knowledge and exercises.
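The problem-solving cycle described above — the IE records the hypothesis facts into working memory, then repeatedly looks for applicable rules in the KB until the goal is derived — can be sketched as a tiny forward-chaining engine. The rules below are invented stand-ins for a real geometry KB; note that the KB is plain data, kept separate from the engine, so either can change independently:

```python
# Minimal forward-chaining inference engine. The KB is just data
# (a list of rules), separate from the control mechanism, so rules
# can be updated without touching the engine. Rules are stand-ins.

RULES = [  # (premises, conclusion)
    ({"AB parallel CD", "CD parallel EF"}, "AB parallel EF"),
    ({"AB parallel EF", "g perpendicular AB"}, "g perpendicular EF"),
]

def forward_chain(facts, goal, rules=RULES):
    """Derive new facts until the goal appears or no rule fires."""
    facts = set(facts)          # working memory
    steps = []                  # record of applied rules (the "solution")
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                steps.append((premises, conclusion))
                changed = True
    return (goal in facts), steps

found, steps = forward_chain(
    {"AB parallel CD", "CD parallel EF", "g perpendicular AB"},
    "g perpendicular EF",
)
print(found, len(steps))  # True 2
```

A backward-chaining variant would instead start from the goal and generate subgoals from rule conclusions; the separation of KB and engine stays the same.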

10.3.2 The process for designing the knowledge-based system

Based on the architecture of the KBS, the process of designing a KBS includes the following stages and technical problems:


Stage 1: Collect the knowledge and practical problems.

- Determine the scope of the knowledge domain with the demanded requirements.
- Determine the sources of the collected knowledge. The collection sources are regular documents and books and the knowledge of experts. For selecting the experts, it is necessary to consider selection criteria in accordance with the actual conditions. The collected knowledge usually includes concepts, objects, relationships, facts, inference rules, etc. The knowledge can be preliminarily classified as the foundation for the design of the KB of the system.
- Collect the practical problems, requirements, and questions. From these, the problems will be classified in order to design the IE.

Stage 2: Design the KB. This stage uses the results collected in Stage 1 to build the KB of the system. It has the following steps:



- Represent the collected knowledge that needs to be organized in the KB. This requires applying knowledge models and studying methods of knowledge representation.
- Determine the organization of the KB based on the built knowledge model.
- Design the basic knowledge manipulations of the KB and some basic tasks of the IE.

Stage 3: Design the IE.

This stage is implemented based on the KB built in Stage 2 and the problems collected in Stage 1. The steps for designing an IE are as follows:

- Classify the collected problems to determine some general kinds of problems. Models for the problems are proposed based on the kinds of practical problems.





Select the inference strategies corresponding to the kind of problems, and design algorithms to the corresponding kind. This requires the application of deduction methods and studying to improve those methods suitably. Test, evaluate, and increase the inference ability for finding the solutions effectively. Stage 4: Design the user interface of the KBS. There are two kinds of users for a KBS: knowledge engineers and normal users.




Design the user interface for normal users to input the problems and requirements. It has a specification language similar to natural language for commuting to achieve human interactions easily. Design the user interface for the knowledge engineer managing the KB by using the module of knowledge management. Besides the input interface, it also has an interface showing the proof of problems. The solutions of the proofs are natural, similar to human expression.


10.4 Knowledge base and inference engine of a knowledge-based system

10.4.1 Design the knowledge base

Based on the collection of knowledge and of the practical problems to be solved by the KBS, the collected knowledge is represented so that it can be stored in the KB of the system. This work requires knowledge engineers to use the methods of knowledge representation smoothly, flexibly, and creatively. Using the knowledge model, the engineers organize the KB of the system. In addition to organizing the KB, some basic knowledge manipulations also have to be designed, such as updating the knowledge and unifying facts, as the foundation for designing the IE in the next step.

Organize the knowledge base

Knowledge representation is the most important work in designing a KB. In this work, the knowledge model must be studied to conform to the collected knowledge. Representing the knowledge accurately and fully equips the KBS with a complete KB, so that the system can understand the specified knowledge and use it to solve practical problems in the knowledge domain [1,2]. Knowledge representation is the building of a knowledge model for the knowledge domain. This model establishes how the knowledge is stored on the computer so that it can serve the knowledge processing as well as the design of the IE for problem-solving [3,13,15] (Fig. 10.2). After the collection, the knowledge is described in the form of natural language. Thus some techniques have to be used to specify this knowledge, such as visual diagrams (graphs, images), data structures (basic, advanced, and abstract), mathematical structures, and a specification language [13,33]. The methods for knowledge representation are constructed through the following steps [13]:



Step 1: Analyze the collected knowledge to determine the required components of the knowledge. In this step, the knowledge can be classified into those components, which makes the selection of methods for knowledge representation easier.
Step 2: Determine the relations between the components in Step 1.
Step 3: Select a knowledge model conforming to the knowledge domain. The model can be improved from another model. The basic components of a knowledge model are concepts, relations between concepts, and inference rules.
Step 4: Based on the above steps, construct a complete knowledge model for all the collected knowledge by assembling, integrating, and combining the knowledge components.

FIGURE 10.2 Knowledge representation.



Applications of Computational Intelligence in Multi-Disciplinary Research

After the knowledge representation, the knowledge must be stored on the computer so that it becomes the KB of the KBS. This KB has to satisfy some criteria [13,15]:

- Completeness: The knowledge is represented and organized on the computer sufficiently.
- Consistency: The organized KB ensures consistency; it does not conflict with other internal knowledge of the system.

There are some techniques to organize the KB based on the knowledge model:



- Method 1: Store the KB using a system of structured text files based on keywords and some basic grammar rules.
- Method 2: Store the KB using relational databases (MySQL, SQL Server, MS Access, etc.).
- Method 3: Use a specification language, such as XML, RDF, or OWL, and some supporting tools for ontologies.

Basic knowledge manipulations

This section presents some basic knowledge manipulations for the organized KB. These manipulations are the foundation for designing the IE of the KBS. Some basic manipulations are usually used in real-world KBSs:

Updating the knowledge base

In the real world, knowledge can be changed and updated, so the KB design must include a function for updating the stored KB. When updating, the system has to determine whether the new knowledge conforms to the knowledge model, whether the update affects the working of the system, and whether it contradicts the current knowledge.
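Method 1 above and the update function can be sketched together: rules live in a keyword-structured text file, and an update is accepted only if it does not duplicate an existing rule. The file layout (begin_rule/end_rule blocks, "if:"/"then:" keywords) and the function names are illustrative assumptions, not the chapter's actual format.

```python
from io import StringIO

# Hypothetical keyword-based layout for Method 1: one rule per
# begin_rule/end_rule block, with "if:" and "then:" keywords.
SAMPLE = """\
begin_rule
if: d known
then: r known
end_rule
begin_rule
if: r known
then: S known
end_rule
"""

def load_rules(stream):
    """Parse begin_rule/end_rule blocks into (premises, conclusions) pairs."""
    rules, premises, conclusions = [], [], []
    for line in stream:
        line = line.strip()
        if line == "begin_rule":
            premises, conclusions = [], []
        elif line.startswith("if:"):
            premises.append(line[3:].strip())
        elif line.startswith("then:"):
            conclusions.append(line[5:].strip())
        elif line == "end_rule":
            rules.append((premises, conclusions))
    return rules

def update_rule(rules, premises, conclusions):
    """Add a rule only if it is not already present (a minimal conformance check)."""
    if (premises, conclusions) in rules:
        return False                      # duplicate: reject the update
    rules.append((premises, conclusions))
    return True

rules = load_rules(StringIO(SAMPLE))
added = update_rule(rules, ["r known"], ["P known"])
```

A real system would extend `update_rule` with deeper checks (contradiction with current knowledge, effect on the working of the system), as described above.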

Checking the consistency of the knowledge base

The knowledge must be consistent with the other internal knowledge. This means the components of knowledge must conform to the structure of each component and the relations between them. Besides, the KB must ensure that the knowledge and the IE do not create conflicting knowledge. Especially when new concepts, relations, facts, or rules are updated, the consistency of the KB needs to be checked at design time.

Unification of facts

Facts are features used to build the inference rules of the knowledge domain as well as to design the IE. The unification of facts is a basic task when designing algorithms for solving problems. The unification function is also used in other procedures, such as checking the applicability of rules and checking the consistency of the KB. This manipulation plays an important role in the design of each knowledge component and of the whole KB. The process of unifying facts involves the following steps:

- Determine the kind of the facts.
- Check the equivalence of the facts based on their kinds. The checking process depends on the knowledge domain.

In the inference process, the complexity of the unification of facts affects the complexity of the whole process. Thus the unification of facts must be studied and solved efficiently.
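The two steps above can be sketched as follows, assuming facts are represented as (kind, arguments) pairs; this representation and the set of symmetric kinds are illustrative assumptions, not the chapter's encoding.

```python
# Facts as (kind, args): e.g. ("belong", ("M", "AB")) or ("parallel", ("a", "b")).
# Step 1: determine the kind; Step 2: check equivalence by kind-specific logic.

SYMMETRIC_KINDS = {"parallel", "perpendicular", "intersect"}  # order-free relations

def unify(fact1, fact2):
    """Return True if the two facts are equivalent."""
    kind1, args1 = fact1
    kind2, args2 = fact2
    if kind1 != kind2:                 # facts of different kinds never unify
        return False
    if kind1 in SYMMETRIC_KINDS:       # "a parallel b" unifies with "b parallel a"
        return set(args1) == set(args2)
    return args1 == args2              # ordered relations require an exact match
```

For example, `unify(("parallel", ("a", "b")), ("parallel", ("b", "a")))` holds, while the ordered relation "belong" does not unify with reversed arguments.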

There are many traditional and modern methods for knowledge representation [35-37]. In fact, each method has strong and weak points, and no single method is fully adequate for a real-world knowledge domain. Thus designers select a method, or integrate it with other methods, based on the collected knowledge. Moreover, they also study the improvement and development of current methods so that they conform to practical knowledge domains. A full and accurate knowledge representation supports the design of a KB that conforms to the requirements of the application, and a complete KB is also a good basis for designing an effective IE.

10.4.2 Design the inference engine

When designing a KBS, besides the organization of the KB, the IE is also very important. It acts as the reasoning process that simulates human thinking when searching for solutions to problems. This engine helps the system solve problems in the represented knowledge domain.

A methodology for designing knowledge-based systems and applications Chapter | 10


The principles of an inference engine

The reasoning process of humans is a form of creative thinking in which new knowledge is extracted from existing knowledge. Thus the inference process must comply with certain principles [2,8,37]. Likewise, the IE of a KBS is designed to simulate this process, so the engine must be designed based on the following principles:


- Logical reasoning: The inference process of a KBS must satisfy the rules of logic. It also must not lead to a contradiction in the knowledge domain.
- Consistency with the represented knowledge: The IE is designed based on the context of the specified knowledge, and the inference process must correspond to the KB of the system.

Criteria of an inference engine

There are two criteria for designing an IE:


- The IE must be relatively independent of the KB: Besides conforming to the specified KB, the inference process must be separate from the KB; that is, for the same goal of solving the same kind of problems, the deductive method of the system can be designed to work with many different KBs.
- The inference process simulates human problem-solving: The IE of a KBS shows the intelligence of the system. The problem-solving method of the system has to simulate the human reasoning process. This makes the system more intelligent and highly interactive, like a human.

The process for designing an inference engine

An IE is constructed based on the represented KB of the KBS for a certain knowledge domain. The process to achieve this design involves the following steps:





Step 1: Collect practical problems. Before designing an IE, practical problems of the knowledge domain need to be collected. For example, for a diagnostic system, medical records are collected in detail; for an intelligent problem solver in education, the common exercises of the course need to be collected fully.
Step 2: Classify the collected problems and model them. After collecting practical problems, they have to be classified. The classified kinds are the foundation for determining the general problems of the knowledge domain. Following that, study models for those general problems and design the corresponding algorithms. The models of general problems are frames or other models conforming to the KB.
Step 3: Design inference algorithms. The algorithms for solving general problems are designed based on a combination of basic deductive strategies and heuristic rules. These design methods realize the intelligent problem-solving of the system. They use the KB as the core of the reasoning processes.
Step 4: Implement and test. The next step is implementing the designed algorithms using programming tools, such as Maple, C#, and Python. After that, the program has to be tested on the collected practical problems and on other problems. Thereby, the algorithms are evaluated on accuracy, complexity, problem-solving ability, etc.
Step 5: Increase the effectiveness of the inference engine.

The IE has to be updated regularly to increase its effectiveness. The improvement is performed by studying heuristic rules [6] or combining the A* algorithm [5] when searching for solutions. These enhancement methods have to be used flexibly to best meet the requirements of problem-solving.

The reasoning methods

The deductive methods are designed based on the reasoning process of humans. There are three kinds of human reasoning:



- Deductive reasoning: The process of reasoning from one or more statements to reach a logically certain conclusion based on the rules of the knowledge domain [5,6].
- Inductive reasoning: This reasoning method uses premises that supply some evidence but do not fully ensure the correctness of the conclusion [5,6].
- Analogical reasoning: The process of reasoning based on the idea that because two or more things are similar in some respects, they are probably also similar in some further respects [38].



Based on those kinds of reasoning, some inference processes for the KBS are proposed:



- Forward chaining: From the set of determined facts, the inference rules of the knowledge domain are used to derive new facts. This process is iterated until the goals of the corresponding problem are achieved.
- Backward chaining: From the goals of the problem, the inference rules are applied backward to arrive at the hypothesis of the current problem.
- Case-based reasoning: This method finds a solution by revising the solutions of similar problems.
- Reasoning based on heuristic rules, knowledge of sample problems, and patterns: The inference process combines the use of heuristic rules. Besides, the study of patterns and sample problems of the knowledge domain simulates the human reasoning process and reaches the target facts quickly.

Forward chaining

Forward chaining simulates the method of deductive reasoning. The main idea behind this chaining is that it begins with a set of determined facts and extracts new facts by applying rules whose hypotheses match the known facts. This processing is iterated until the goals of the current problem are achieved. The process of forward chaining is shown in the following diagram:

F0 --r1--> F1 --r2--> F2 --> ... --> Fm

in which ri is an inference rule of the knowledge domain; F0 is the initial set of facts (the hypothesis of the problem); and Fi := F(i-1) ∪ g(ri), where g(ri) is the set of facts deduced from rule ri (1 ≤ i ≤ m).

Example 1: In the case of a knowledge domain pertaining to circles, there are some rules:

r1: d = 2r
r2: x = 2r·sin(α/2)
r3: P = 2πr
r4: S = πr²

where d and r are the diameter and radius of the circle, respectively; α is a central angle of the circle; x is a chord of the circle; and P and S are the perimeter and area of the circle, respectively. From the facts α and d, the inference process to compute the area S and the length of the chord x is as follows:

F0 --r1--> F1 --r2--> F2 --r4--> F3

with F0 = {α, d}, F1 = {α, d, r}, F2 = {α, d, r, x}, F3 = {α, d, r, x, S}.
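The forward-chaining process of Example 1 can be sketched in Python. The rule encoding (required facts plus a deriving function) and all names are illustrative assumptions; each rule fires once its inputs are known, mirroring Fi := F(i-1) ∪ g(ri).

```python
import math

# Each rule: (name, facts it needs, function deriving new facts).
# Facts are a symbol -> value mapping; a rule fires when its inputs are known.
RULES = [
    ("r1: d = 2r",            {"d"},          lambda f: {"r": f["d"] / 2}),
    ("r2: x = 2r sin(a/2)",   {"r", "alpha"}, lambda f: {"x": 2 * f["r"] * math.sin(f["alpha"] / 2)}),
    ("r3: P = 2*pi*r",        {"r"},          lambda f: {"P": 2 * math.pi * f["r"]}),
    ("r4: S = pi*r^2",        {"r"},          lambda f: {"S": math.pi * f["r"] ** 2}),
]

def forward_chain(facts, goals):
    """Apply rules until all goal symbols are known or no rule fires."""
    facts = dict(facts)
    trace = []
    changed = True
    while changed and not goals <= facts.keys():
        changed = False
        for name, needs, derive in RULES:
            new = derive(facts) if needs <= facts.keys() else {}
            fresh = {k: v for k, v in new.items() if k not in facts}
            if fresh:                 # record only rules that added something
                facts.update(fresh)
                trace.append(name)
                changed = True
    return facts, trace

# Hypothesis F0 = {alpha, d}; goals = {x, S}, as in Example 1.
facts, trace = forward_chain({"alpha": math.pi / 3, "d": 10.0}, {"x", "S"})
```

Running it with α = π/3 and d = 10 derives r, then x, P, and S; note that, as discussed for forward chaining, r3 fires as well even though P is not a goal.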

Backward chaining

This method is performed by tracing the goal back to the hypothesis of the problem by applying rules in the KB. This backward inference generates a tree with a backtracking mechanism. The solution is found when all the goals at the leaf nodes belong to the given facts. Backward chaining corresponds to a depth-first search on the graph representing the set of rules. Compared to backward chaining, forward chaining is simpler. However, forward chaining has to use all inference rules without considering which rules are relevant to the goals of the problem. Therefore this algorithm leads to a combinatorial explosion when the KB has a large number of inference rules. In contrast, backward chaining is more complex, but it only uses rules related to the goals.
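A minimal backward-chaining sketch over the circle rules of Example 1, reduced to (conclusion, premises) dependency pairs; this representation is an illustrative simplification and ignores backtracking over alternative variable bindings.

```python
# Circle rules of Example 1 as (conclusion, premises) pairs.
GOAL_RULES = [
    ("r", ["d"]),            # r1: d = 2r lets us derive r from d
    ("x", ["r", "alpha"]),   # r2: x = 2r*sin(alpha/2)
    ("P", ["r"]),            # r3: P = 2*pi*r
    ("S", ["r"]),            # r4: S = pi*r^2
]

def prove(goal, known):
    """Trace a goal back to the given facts; True if it is derivable."""
    if goal in known:
        return True
    # Only rules whose conclusion matches the goal are explored,
    # which is why backward chaining avoids irrelevant rules.
    return any(concl == goal and all(prove(p, known) for p in premises)
               for concl, premises in GOAL_RULES)
```

`prove("S", {"alpha", "d"})` succeeds by tracing S back through r to the given fact d, while a goal whose premises cannot be traced to the given facts fails.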

A methodology for designing knowledge-based systems and applications Chapter | 10


Reasoning with pattern problems and sample problems

A pattern problem is a problem that has a simple solution and high frequency in the knowledge domain. It is a small problem, and its solution is not complex, but it usually appears when solving practical problems in the knowledge domain [39]. A sample problem is a high-frequency problem that is time consuming to solve using the general inference method, so its solution is recorded. When a practical problem is similar to a sample problem, the sample problem's recorded solution is used to solve the current problem. If the current problem differs only slightly from a sample problem, its solution can be found more quickly [40].

Example 2: In the knowledge domain of plane geometry, there is a sample problem P: "In a circle with center I and radius r, CD is a diameter and ND is a chord of the circle (I)." The stored solution of P is:

s1: {CD diameter Circle(I)} -> {I midpoint CD, C belong Circle(I)}
s2: {ND chord Circle(I)} -> {N belong Circle(I), D belong Circle(I)}
s3: {N belong Circle(I), CD diameter Circle(I)} -> {ΔNCD: right triangle}
s4: {ΔNCD: right triangle, I midpoint CD} -> {NI = CD/2}

If the inference process encounters problem P, it adds the four steps above into the processing. If the goals are not yet obtained, the process continues to deduce until they are achieved. The system also updates new solution steps into the stored problems. Fig. 10.3 represents the reasoning method that uses pattern and sample problems related to the current problem to solve it [40,41]. This method simulates human reasoning and has higher accuracy in problem-solving; it obtains better results than the other methods [7]. To solve an input problem, this reasoning uses sample and pattern problems to

FIGURE 10.3 The reasoning that uses pattern and sample problems.



update the set of facts of the current problem. The sample problems are searched based on the unification of the kinds of facts in the sample and the facts in the current problem. After that, this reasoning uses a combination of forward and backward chaining by searching for inference rules that can be applied to the current set of facts. It also uses heuristic rules to select inference rules more effectively. The deductive process stops when it achieves the goals of the problem or when it cannot derive new facts.
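The search for applicable sample problems by unifying fact kinds can be sketched as follows; the data layout, and matching on fact kinds only, are simplifying assumptions rather than the chapter's implementation.

```python
# Each sample problem stores the kinds of facts it requires and its
# recorded solution steps (here, the steps of Example 2).
SAMPLE_PROBLEMS = [
    {
        "needs": {"diameter", "chord"},   # fact kinds that must appear
        "steps": [
            "{CD diameter Circle(I)} -> {I midpoint CD, C belong Circle(I)}",
            "{ND chord Circle(I)} -> {N belong Circle(I), D belong Circle(I)}",
            "{N belong Circle(I), CD diameter Circle(I)} -> {Triangle NCD: right}",
            "{Triangle NCD: right, I midpoint CD} -> {NI = CD/2}",
        ],
    },
]

def apply_samples(fact_kinds):
    """Splice in the recorded steps of every sample whose fact kinds all appear."""
    steps = []
    for sample in SAMPLE_PROBLEMS:
        if sample["needs"] <= fact_kinds:   # unification by fact kinds
            steps.extend(sample["steps"])   # add the recorded solution steps
    return steps
```

A current problem whose facts include a diameter and a chord immediately inherits the four recorded steps; the general inference then continues from the enlarged fact set.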



10.5.1 Design an intelligent problem solver for solving solid geometry at high school

Collect the knowledge domain

The knowledge domain pertaining to high school solid geometry is collected from books [7]. The collected knowledge is classified as follows:


- Conceptual knowledge: In this application, some concepts of the domain are considered, such as point, segment, line, plane, triangle, quadrilateral, parallelogram, square, triangular pyramid, and quadrilateral pyramid. Each concept has its own structure and behaviors.
- Relational knowledge between concepts: Some relations in this knowledge domain are shown in Table 10.1.
- Functional knowledge: Some functions of this knowledge domain are shown in Table 10.2.
- Rules of the knowledge domain: Some of the collected rules of the knowledge domain are given as follows:

Rule 1: If a line is perpendicular to a plane, then that line is perpendicular to all lines on that plane.

{a, b: Line; a ⊥ Plane(P); b ⊂ Plane(P)} ⇒ a ⊥ b

TABLE 10.1 Relations between concepts of solid geometry.

Order  Relation
1      Point belongs Segment
2      Point belongs Line
3      Point belongs Plane
4      Line belongs Plane
5      Line cuts Line
6      Line cuts Plane
7      Two lines are cross
8      Three points are straight
9      Line parallel Line
10     Line parallel Plane
11     Plane parallel Plane
12     Line perpendicular Line
13     Line perpendicular Plane
14     Point is the center of Parallelogram
15     Point is the center of Square
16     Point is the centroid of Triangle



TABLE 10.2 Some functions of solid geometry.

Order  Function
1      Line IntersectionLine(P::Plane, Q::Plane)
2      Point IntersectionPoint(d::Line, P::Plane)
3      Point IntersectionPoint(a::Line, b::Line)
4      Point Midpoint(A::Point, B::Point)
5      Point Projection(A::Point, a::Line)
6      Point Projection(A::Point, P::Plane)
7      Line Projection(d::Line, P::Plane)
8      Line CommonPerpendicularLine(d::Line, d1::Line)
9      Real Distance(A::Point, d::Line)
10     Real Distance(A::Point, P::Plane)
11     Real Distance(d::Line, d1::Line)

Rule 2: If a line a does not belong to a plane (P) and is parallel to another line in (P), then line a is parallel to the plane (P).

{a, b: Line; a ⊄ Plane(P); b ⊂ Plane(P); a // b} ⇒ a // Plane(P)

Rule 3: If a point belongs to two planes, then it belongs to the intersection line of those planes.

{K: Point; K ∈ Plane(P); K ∈ Plane(Q)} ⇒ K ∈ Plane(P) ∩ Plane(Q)

Rules for generating new objects: Some of the collected rules that generate new objects are given as follows:

Rule 4: Properties of the intersection line between two planes.

{M ∈ Plane(P) ∩ Plane(Q); a ⊂ Plane(P); b ⊂ Plane(Q); a // b} ⇒ Let d = IntersectLine(P, Q): d // a, d // b

Rule 5: In a triangle ABC, a point M belongs to AB and a point N belongs to BC. A new point P is generated with P = IntersectPoint(CM, AN), which means P belongs to CM and P belongs to AN.

{Triangle(ABC); M, N: Point; M ∈ AB; N ∈ BC} ⇒ Let P: Point, P = IntersectPoint(CM, AN): P ∈ CM, P ∈ AN

Some kinds of exercises from high school solid geometry in Ref. [7] are considered. Those kinds are:

- Prove a relation between two objects.
- Compute the value of a function related to objects:
  - functions that return a value;
  - functions that return an object.


Build the knowledge model

The knowledge model of solid geometry includes 5 components (C, H, R, Funcs, Rules) and 12 kinds of facts [3,32]. The details of each component of this knowledge model are as follows:

1. C—set of concepts. Set C is the set of concepts of the knowledge domain of solid geometry. Each concept has the structure of a computational object [7] and behaviors for solving problems on itself. There are some types of computational objects for this knowledge domain:

- Basic object: Point.
- First-order objects: Segment, line, angle, plane.
- Second-order objects: Triangle and its kinds, quadrilateral and its kinds.
- Third-order objects: Triangular pyramid, quadrilateral pyramid.

2. H—set of hierarchical relations.

The hierarchical relations between the concepts in C can be represented by a Hasse diagram. Fig. 10.4 shows the Hasse diagram of the hierarchical relations between the quadrilateral and its kinds.

3. R—set of relations between concepts. Set R is the set of binary relations between concepts in C. These relations are not hierarchical relations.

Example 1: The relation "perpendicular" between two planes, two lines, or a line and a plane. The relation "belong" between a point and a line, or a point and a plane. The relation "the height" between a segment and a triangle, or a segment and a pyramid.

4. Funcs—set of functions. Set Funcs is the set of functions on computational objects.

Example 2: The IntersectionPoint functions between two lines, or between a line and a plane, return a point. The IntersectionLine function between two planes returns a line. The Projection function between a line and a plane returns a line, and the Projection function between a point and a line returns a point.

Specification of a function:

function-def ::= FUNCTION name;
  ARGUMENT: argument-def+
  RETURN: return-def;
  [constraint]
  [facts]
ENDFUNCTION;

FIGURE 10.4 Hasse diagram for the quadrilateral and its kinds.



Example 3: The specification of the IntersectPoint function between a line and a plane:

FUNCTION IntersectPoint;
ARGUMENT: d: Line, P: Plane
RETURN: M: Point;
Constraint: Not(d belong P)
Facts: M = IntersectionPoint(d, P); M belong d; M belong P
ENDFUNCTION;

5. Rules—set of inference rules. The set Rules is the set of deductive rules. Each rule has the form "if <facts> then <facts>." There are four kinds of rules in solid geometry:


- Kind 1: Natural rules. The rules that can be deduced by a human without proving or explaining.
- Kind 2: Rules for determining an object. The rules that determine the existence of an object based on facts or other objects.
- Kind 3: Rules for generating new objects. The rules that generate new objects, such as a point or a line.
- Kind 4: Other rules.
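One possible way to carry the five components (C, H, R, Funcs, Rules) in code is sketched below; all class and field names are illustrative assumptions, not the chapter's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:                 # an element of C: a computational object type
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Rule:                    # an element of Rules: "if <facts> then <facts>"
    hypothesis: list
    conclusion: list
    kind: int = 4              # kinds 1-4 as classified above

@dataclass
class KnowledgeModel:
    concepts: dict = field(default_factory=dict)    # C: name -> Concept
    hierarchy: list = field(default_factory=list)   # H: (sub, super) pairs
    relations: list = field(default_factory=list)   # R: (kind, concept, concept)
    functions: dict = field(default_factory=dict)   # Funcs: name -> signature
    rules: list = field(default_factory=list)       # Rules

model = KnowledgeModel()
model.concepts["Point"] = Concept("Point")
model.concepts["Square"] = Concept("Square")
model.hierarchy.append(("Square", "Quadrilateral"))          # Square is a Quadrilateral
model.relations.append(("belongs", "Point", "Line"))
model.rules.append(Rule(["a perpendicular Plane(P)", "b subset Plane(P)"],
                        ["a perpendicular b"], kind=1))
```

The Hasse diagram of H is implicit in the (sub, super) pairs; a real system would add the computational-object behaviors to each Concept.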

Example 4:

(Kind 2) Rule 1: Three points, which are not straight, determine a plane.

{A, B, C: Point; not(A, B, C straight)} ⇒ {Determine Plane(ABC)}

(Kind 3) Rule 2: If two coplanar lines intersect, there exists a point that is their intersection point.

{d1, d2: Line; P: Plane; d1 ∈ P; d2 ∈ P; d1 intersect d2} ⇒ ∃M: Point, M = IntersectPoint(d1, d2)

(Kind 4) Rule 3: If a line has a point that does not belong to a plane, then the line does not belong to the plane.

{d: Line; A: Point; P: Plane; A ∉ P; A ∈ d} ⇒ {d ⊄ P}

Organize the knowledge base

The KB of this knowledge domain is represented by the knowledge model described in Section 10.1.2. It is organized as a system of structured text files with a specification language. These text files are:

- The file Object_Kinds.txt, which stores the names of the concepts of the knowledge domain;
- The files named "<Name_of_concept>.txt", which store the structure of the corresponding concept;
- The file Relations.txt, which stores the relations between concepts;
- The file Hierarchy.txt, which stores the Hasse diagrams for the hierarchical relations between concepts;
- The file Functions.txt, which stores the definitions of the functions in the knowledge domain;
- The file Rules.txt, which stores the inference rules of the knowledge domain.

Example 5: The structure of the file Quadrilateral_Pyramid.txt describing the knowledge of a quadrilateral pyramid:

File Quadrilateral_Pyramid.txt
begin_concept: QuadrilateralPyramid[S,A,B,C,D];
S, A, B, C, D: Point
begin_variables
p_SAB: Plane[S, A, B]; p_SBC: Plane[S, B, C];



p_SDC: Plane[S, D, C]; p_SAD: Plane[S, A, D]; p_ABCD: Plane[A, B, C, D];
SA: Line[S, A]; SB: Line[S, B]; SC: Line[S, C]; SD: Line[S, D];
AB: Line[A, B]; BC: Line[B, C]; AC: Line[A, C]; CD: Line[C, D]; DA: Line[D, A];
SA_: Segment[S, A]; SB_: Segment[S, B]; SC_: Segment[S, C]; SD_: Segment[S, D];
AB_: Segment[A, B]; BC_: Segment[B, C]; AC_: Segment[A, C]; CD_: Segment[C, D]; DA_: Segment[D, A];
end_variables
begin_construction_properties
QuadrilateralPyramid[S, A, B, C, D] = QuadrilateralPyramid[S, B, C, D, A]
QuadrilateralPyramid[S, A, B, C, D] = QuadrilateralPyramid[S, C, D, A, B]
QuadrilateralPyramid[S, A, B, C, D] = QuadrilateralPyramid[S, D, A, B, C]
end_construction_properties
begin_properties
["belong", S, "Object"] ["belong", A, "Object"] ["belong", B, "Object"] ["belong", C, "Object"] ["belong", D, "Object"]
["co-plane", A, B, C, D]
["cross", SA, BC] ["cross", SA, CD] ["cross", SB, DA] ["cross", SB, CD]
["cross", SC, DA] ["cross", SC, AB] ["cross", SD, AB] ["cross", SD, BC]
Not["belong", S, p_ABCD]
Not["belong", A, p_SBC] Not["belong", A, p_SDC]
Not["belong", B, p_SAD] Not["belong", B, p_SDC]
Not["belong", C, p_SAD] Not["belong", C, p_SAB]
Not["belong", D, p_SAB] Not["belong", D, p_SBC]
end_properties
end_concept

Model of problems



The model of problems in the knowledge domain of high school solid geometry has the form (O, F) -> G, where O is the set of objects of the problem, F is the set of facts, and G is the set of goals. Each goal is one of:

- Proving a relation between two objects.
- Determining the value of a function related to objects:
  - functions that return a value;
  - functions that return an object.

Example 6: Given a triangular pyramid S.ABC. Let M, N, and P be the midpoints of segments AB, BC, and CD, respectively. Determine the intersection point Q between the line AD and the plane (MNP).

Model of this problem:

O := {TriangularPyramid[S, A, B, C]; M, N, P: Point}
F := {M midpoint AB; N midpoint BC; P midpoint CD}
G := {Determine: IntersectPoint(line(AD), plane(MNP))}
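The (O, F) -> G model can be carried by a small record type; the string encoding of objects, facts, and goals below is an illustrative assumption, not the system's internal representation.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    objects: list   # O: the objects of the problem
    facts: list     # F: the given facts (the hypothesis)
    goals: list     # G: the goals to prove or determine

# Example 6 encoded in this form:
example6 = Problem(
    objects=["TriangularPyramid[S, A, B, C]", "M: Point", "N: Point", "P: Point"],
    facts=["M midpoint AB", "N midpoint BC", "P midpoint CD"],
    goals=["Determine: IntersectPoint(line(AD), plane(MNP))"],
)
```

An inference engine would take such a record, expand the facts by forward chaining, and stop once every entry of `goals` is satisfied.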

Design the inference engine

The algorithm for solving problems in this knowledge domain is designed based on a combination of forward and backward chaining, in which objects have the ability to reason by themselves. The inference process also integrates heuristic rules to increase its effectiveness; these heuristic rules speed up the process. Some of the heuristic rules use sample problems that are similar to the current problem [40], and there are rules for limiting the generation of new objects. Fig. 10.5 illustrates the working of the algorithm for solving problems in the knowledge domain of solid geometry. At first, it collects the required information from the hypothesis of the problem and initializes the solving method. When solving the current problem, the algorithm uses geometric objects as agents that solve problems by themselves to determine new facts. After that, the IE uses heuristic rules to select rules that can be applied to the current status of the problem being solved. Using the selected rules, the algorithm continues to deduce more new facts until the goals are reached or no rule can be applied. Finally, the problem is solvable if all its goals can be obtained; otherwise, it is unsolvable.

Testing

The intelligent problem solver for high school solid geometry has three components: the user interface, the KB, and the module for solving problems. The user interface is implemented using HTML and JavaScript. The module for solving problems is set up using Maple, and C# is used to connect the user interface and the solving module. The KB is organized as a system of structured text files. The user interface of this system is shown in Fig. 10.6.

Example 7: The proof of Example 6 found by using this program is as follows:

Step 1: {M midpoint AB} -> {M belong AB}
Step 2: {M belong AB, AB belong plane(ABD)} -> {M belong plane(ABD)}
Step 3: {M belong plane(ABD), M belong plane(MNP)} -> {M belong IntersectionLine(plane(ABD), plane(MNP))}
Step 4: {P midpoint CD} -> {P belong CD}
Step 5: {P belong CD, CD belong plane(ACD)} -> {P belong plane(ACD)}
Step 6: {A belong plane(ABD), A belong plane(ACD)} -> {A belong IntersectionLine(plane(ABD), plane(ACD))}
Step 7: {D belong plane(ABD), D belong plane(ACD)} -> {D belong IntersectionLine(plane(ABD), plane(ACD))}

Step 8: {P belong plane(ACD), P belong plane(MNP)} -> {P belong IntersectionLine(plane(ACD), plane(MNP))}
Step 9: {A belong IntersectionLine(plane(ABD), plane(ACD)), D belong IntersectionLine(plane(ABD), plane(ACD))} -> {line(AD) = IntersectionLine(plane(ABD), plane(ACD))}
Step 10: {M midpoint AB, N midpoint BC} -> {MN // AC}
Step 11: Let d1: line, d1 = IntersectionLine(plane(ACD), plane(MNP))
Step 12: {d1 = IntersectionLine(plane(ACD), plane(MNP))} -> {d1 belong plane(ACD), d1 belong plane(MNP)}
Step 13: {d1 = IntersectionLine(plane(ACD), plane(MNP)), P belong IntersectionLine(plane(ACD), plane(MNP))} -> {P belong d1}
Step 14: {d1 = IntersectionLine(plane(ACD), plane(MNP)), AC belong plane(ACD), MN belong plane(MNP), AC // MN} -> {d1 // AC, d1 // MN}
Step 15: Let d2: line, d2 = IntersectionLine(plane(ABD), plane(MNP))
Step 16: {d2 = IntersectionLine(plane(ABD), plane(MNP))} -> {d2 belong plane(ABD), d2 belong plane(MNP)}
Step 17: {d2 = IntersectionLine(plane(ABD), plane(MNP)), M belong IntersectionLine(plane(ABD), plane(MNP))} -> {M belong d2}
Step 18: {M belong d2, M belong MN} -> {M = IntersectionPoint(d2, MN)}
Step 19: {M = IntersectionPoint(d2, MN)} -> {d2 intersect MN}
Step 20: {d2 belong plane(MNP), MN belong plane(MNP), d1 belong plane(MNP), d1 // MN, d2 intersect MN} -> {d1 intersect d2}
Step 21: {d1 intersect d2} -> {H: point, H = IntersectionPoint(d1, d2)}
Step 22: {H = IntersectionPoint(d1, d2)} -> {H belong d1, H belong d2}
Step 23: {H belong d1, d1 belong plane(ACD)} -> {H belong plane(ACD)}
Step 24: {H belong d2, d2 belong plane(ABD)} -> {H belong plane(ABD)}
Step 25: {H belong plane(ABD), H belong plane(ACD)} -> {H belong IntersectionLine(plane(ABD), plane(ACD))}
Step 26: {H belong IntersectionLine(plane(ABD), plane(ACD)), line(AD) = IntersectionLine(plane(ABD), plane(ACD))} -> {H belong line(AD)}
Step 27: {H belong d2, d2 belong plane(MNP)} -> {H belong plane(MNP)}
Step 28: {H belong line(AD), H belong plane(MNP)} -> {H = IntersectionPoint(line(AD), plane(MNP))}

FIGURE 10.5 The algorithm for solving problems in the knowledge domain of solid geometry.

10.5.2 Consultancy system for designing housing architecture

In Ref. [42], some solid geometry theorems are proved using a coordinate-free method based on volume. This method uses geometric invariants of the signed-volume and area method to find the solutions of theorems. Those solutions are readable and represented step by step (Fig. 10.7). Table 10.3 compares the ability of the proposed method and the coordinate-free method in Ref. [42] to organize the KB of solid geometry.



FIGURE 10.6 (A) The user-interface of the intelligent problem solver in solid geometry. (B) The proof of the system.

FIGURE 10.7 Representation of example 6.



TABLE 10.3 Organizing the knowledge base (percentages are relative to the quantity of knowledge in [7]).

Knowledge base of the proposed method:
- 27 concepts (87%), including: 5 basic concepts (point, line, segment, plane, angle); 13 concepts about objects in plane geometry: circle (1), triangle and its kinds (5), quadrangle and its kinds (7); 9 concepts about polyhedrons in solid geometry: 1 sphere, 4 kinds of pyramid, 4 kinds of prism.
- 61 relations (88%), including: 33 relations between objects of plane geometry; 28 relations between objects of solid geometry.
- 137 rules (87%), including: (1) deductive rules: 51 rules of plane geometry, 18 rules about the parallel relation, 29 rules about the perpendicular relation, 10 rules about angles; (2) rules for generating a new object: 17 rules for generating a new point in the parallel relation, 8 rules for generating a new point in the perpendicular relation; (3) equivalent rules: 4 rules.

Knowledge base of the coordinate-free method [42]:
- About 13 concepts (41%), including: 5 basic concepts (point, line, segment, plane, angle); about 5 concepts about objects in plane geometry: triangle, quadrangle, circle; about 3 concepts in solid geometry: tetrahedron, sphere.
- About 23 relations (33%), including relations between objects of plane geometry and relations between objects of solid geometry, such as intersection, parallel, and perpendicular.
- About 70 rules (44%); this also includes some computing rules, such as Pythagorean differences of triangles, signed volumes of tetrahedrons, and formulas to compute ratios, areas, and volumes.

TABLE 10.4 The ability to solve exercises of the methods.

Exercises   Number of exercises   Solved by the proposed method   Solved by the coordinate-free method
Kind 1
Kind 2
Kind 3
Kind 4
There are 83 exercises collected from the books in [7] for testing. These exercises are classified into the following kinds:

- Kind 1: Exercises about determining an intersection point of a line and a plane, or an intersection line of two planes.
- Kind 2: Exercises about parallel relations between geometric objects.
- Kind 3: Exercises about perpendicular relations between geometric objects.
- Kind 4: General problems, which use properties of intersection and of perpendicular and parallel relations in their solutions.

Table 10.4 compares the ability of the proposed method and the coordinate-free method in Ref. [42] to solve exercises of these kinds. Both methods give solutions that are readable and represented step by step. The reasoning of the coordinate-free method uses geometric invariants of the signed-volume and area method; thus most exercises that can be transformed into areas and volumes can be solved by it. However, its proofs are mechanical and not well suited to helping learners study. In contrast, the reasoning of the proposed method uses the knowledge of solid geometry to deduce new facts directly in its inference process. The method can solve exercises about perpendicular and parallel relations. Thus it produces proofs whose reasoning is similar to that of students, so it is useful for supporting learning.

A methodology for designing knowledge-based systems and applications Chapter | 10

Organize the knowledge base of the consultancy system

In modern society, as the economy develops, the demand for housing becomes more diverse. Housing architecture is associated with material life, natural geography, and spirituality. Therefore, to satisfy the factors of safety and aesthetics and to harmonize with the common space, natural space, and feng shui, housing architecture requires consultancy and design from experts who are architects. Cheung et al. [43] proposed a set of common criteria for architectural projects. Besides, almost all present methods for building a project management consultant system are constructed through the analytic hierarchy process [44-46]. For example, the Architectural Consultant Selection System (ACSS) is a program for evaluating prospective consultant architects and was constructed based on the analytic hierarchy process [43]. ACSS provides an efficient and consistent method for the selection of architects. However, it has not yet incorporated building legislation, especially legal documents and law-application documents. In this section, a consultancy system for designing housing architecture in Vietnam is proposed. This system is built based on the regulations of housing architecture in Ho Chi Minh City. It also has to meet the following requirements to support the consultation of housing architecture: firstly, the system has to be equipped with the local regulations of housing architecture in Vietnam, and it must have full information for consulting, such as the direction of the house, room spaces, balcony, garden, and stair style; secondly, the system must support searching for similar architectures, which need to be visualized for the investor. The regulations of housing architecture and construction are collected from Ref. [12]. They are applicable for designing city houses in Ho Chi Minh City (Vietnam).
The KB of the consultancy system is organized by a knowledge model including five components, given as follows:

(C, Funcs, Rules, Pros, P)

1. C: Set of concepts. Each concept in C is a class that has a structure and is classified as follows:
- String variables: for example, the variable "direction" is a string.
- Basic objects: these objects have real-value and string attributes. For example, the object "Washing room" includes attributes that are real and string values. This object is used to describe other objects, such as "Equipment of Washing room."
- First-order objects: these objects have attributes that are real values, string values, and basic objects. For example, the object "floor" has attributes that are basic objects, such as "washing room" and "living room."
- Second-order objects: these objects have attributes that are real values, string values, basic objects, and first-order objects. For example, the object "List of floors" has attributes that are first-order objects, such as "floor."

The structure of each object includes: (Attributes, Conditions, Com-relation, InnerRules)


- Attributes: set of attributes, which are real values, string values, basic objects, and other objects of lower orders.
- Conditions: set of conditions on each attribute.
- Com-relation: set of computational relations on attributes. Based on these relations, an attribute can be determined from other attributes of the same object.
- InnerRules: set of deductive rules related to the attributes of the corresponding object.

2. Funcs: Set of functions on concepts in C.

The functions are used for reasoning about and computing an object. The kinds of functions are given in Table 10.5.

3. Rules: Set of inference rules of the knowledge domain. Each rule in Rules has the form ({p1, p2, ..., pn} ⇒ {q1, q2, ..., qm}, prior), in which pi and qj are facts of the knowledge domain and prior is the priority level of the corresponding rule. The kinds of facts are:
- determining an object;
- determining an attribute of an object;
- determining an attribute with conditions on its bound;
- determining an object with conditions related to another object;
- changing an attribute of an object by applying a function;
- calling a function to compute.
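As an illustration only, the object structure (Attributes, Conditions, Com-relation, InnerRules) and the prioritized rule form above might be encoded as follows. The class and field names are hypothetical assumptions; the authors' system is not implemented this way (the testing section states it is written in C#).

```python
from dataclasses import dataclass, field

@dataclass
class ConceptObject:
    """Hypothetical encoding of a concept's structure from the text."""
    name: str
    attributes: dict = field(default_factory=dict)    # attribute -> real/string/object value
    conditions: list = field(default_factory=list)    # predicates over attributes
    com_relations: list = field(default_factory=list) # computational relations on attributes
    inner_rules: list = field(default_factory=list)   # deductive rules (priority 15)

@dataclass
class Rule:
    """A rule ({p1..pn} => {q1..qm}, prior); prior in {15, 10, 5, 0}."""
    hypothesis: frozenset
    conclusion: frozenset
    prior: int = 0  # 0 is the default priority

    def applicable(self, facts: set) -> bool:
        # A rule is activated when all facts in its hypothesis are activated.
        return self.hypothesis <= facts

r = Rule(frozenset({"width=4.0"}), frozenset({"style=city house"}), prior=5)
print(r.applicable({"width=4.0", "length=10.0"}))  # True
```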


Applications of Computational Intelligence in Multi-Disciplinary Research

TABLE 10.5 The functions of the knowledge base.

Kind | Form | Example
Function for a basic object and a real-value variable | <Name of function>(basic object, real) |
Function for first-order objects | <Name of function>(first-order object) | Area(floor): compute the sum of room areas in the floor.
Function for a first-order object and a real-value variable | <Name of function>(first-order object, real) | Decrease(floor, x): decrease the area of rooms in the floor by value x.
Function for a basic object and a first-order object | <Name of function>(basic object, first-order object) | Insert(room, floor): insert the room into the floor.
Function for a first-order object and a second-order object | <Name of function>(first-order object, second-order object) | Add(room, List of floors): add the room into floors of the List of floors.
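The function kinds of Table 10.5 can be sketched as follows, using plain dictionaries for rooms (basic objects), floors (first-order objects), and a list of floors (second-order object). The dictionary layout is an illustrative assumption, not the authors' data model.

```python
def area(floor):
    """Area(floor): sum of room areas in the floor (first-order object)."""
    return sum(r["area"] for r in floor["rooms"])

def decrease(floor, x):
    """Decrease(floor, x): decrease the area of each room in the floor by x."""
    for r in floor["rooms"]:
        r["area"] -= x

def insert(room, floor):
    """Insert(room, floor): insert a basic object into a first-order object."""
    floor["rooms"].append(room)

def add(room, list_of_floors):
    """Add(room, list_of_floors): add a copy of the room to every floor
    of the second-order object."""
    for f in list_of_floors["floors"]:
        insert(dict(room), f)

ground = {"name": "first floor", "rooms": [{"name": "living room", "area": 25.0}]}
insert({"name": "kitchen", "area": 7.0}, ground)
print(area(ground))  # 32.0
```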

If all facts in the hypothesis of a rule are activated, the rule will be activated. The rule with a higher prior value is used first. The values of prior in this knowledge domain are:
- Highest priority: 15
- Next priority: 10
- Average priority: 5
- Lowest priority: 0 (this is also the default value of prior)

The inner rules of an object have the highest priority; their value is 15. Rules that must be applied first in the process (those rules are presented in Ref. [12]) have the value 10. The rules describing expert knowledge have priority 5.

4. Pros: Set of solutions. Each solution includes a perspective drawing and a construction drawing. The construction drawing has a list of objects: basic objects, first-order objects, and second-order objects. For example, the construction drawing of "Ground floor" has the object "living room" with the attributes area = 25 m² and floor = 1, and the object "dining room" with the attributes area = 20 m² and floor = 1. The perspective drawing has attributes that are the code of the solution and a list of objects.

5. P: Set of criteria for searching for a solution. Each criterion in P has the structure (ObjSearch, ListAttr, Point), where ObjSearch is the information of the searched object; ListAttr is the list of attributes of the searched object and the allowed variation for each attribute; Point is the value of the corresponding criterion.

Design the inference engine of the consultancy system

Based on the KB of the consultancy system, there are two problems in the consultation of housing architecture:

Problem 1: Find the solution of the problem H → G, in which H, the hypothesis of the problem, is a set of determined attributes of an object, and G, the goal of the problem, is a set of attributes that need to be computed.

Problem 2: From the problem H → G, search for the designing method satisfying the results of G. This method includes a perspective drawing and a construction drawing.

A.
Algorithm for solving problem 1
Input: the set H of the problem H → G.
Output: the computed values of the attributes in G.

Algorithm 1 determines the values of the attributes in goal G by applying rules in the KB. Firstly, the IE of the system searches for inner rules of objects that can be applied to the current set of facts. Secondly, if goal G has not yet been determined, the algorithm continues with the inference rules of the knowledge domain; those rules represent the regulations of housing architecture and construction.

Algorithm 1:
Step 0: Initialize
    FactSet := H;  # set of facts
    Found := false; flag := true; Sol := {};
Step 1: if G ⊆ FactSet then Found := true; Goto Step 3;
    else Goto Step 2.
Step 2: while flag and not(Found) do
    flag := false;
    Apply rules in InnerRules of each object in FactSet to get a new fact.
    if (a new fact is generated) then
        flag := true;
        rforms := set of rules in InnerRules that were used;
        news := set of new facts generated by applying rforms on FactSet;
        FactSet := FactSet ∪ news;
        Sol := Sol ∪ {[rforms, news]};
        if G ⊆ FactSet then Found := true;
    Apply inference rules in the Rules set to get a new fact.
    if (a new fact is generated) then
        flag := true;
        rforms := set of inference rules in Rules that were used;
        news := set of new facts generated by applying rforms on FactSet;
        FactSet := FactSet ∪ news;
        Sol := Sol ∪ {[rforms, news]};
        if G ⊆ FactSet then Found := true;
Step 3: if Found then Sol is the solution to the problem;
    else the problem cannot be solved.

B. Algorithm for solving problem 2

Definition 1: Let obj1, obj2 ∈ C be two objects. The attribute attr of object obj1 is consistent with an attribute of object obj2 if: (i) objects obj1 and obj2 are of the same kind, and (ii) the name of attr is the same as the name of an attribute of object obj2.

Definition 2: Given an object obj ∈ C, a solution s ∈ Pros, and an object o in s: an attribute attr of the object obj matches an attribute of object o if: (i) attribute attr is consistent with an attribute of object o, and (ii) |o.attr − obj.attr| ≤ δ, where δ is a constant of the knowledge domain.

Definition 3: Given two solutions A, B ∈ Pros and a set of facts F: solution A is better than solution B if the number of attributes of F matching A is greater than the number matching B.
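Algorithm 1 is essentially forward chaining in two phases: inner rules of objects are tried first, then the domain's inference rules, repeating until G is contained in the fact set or no new fact can be derived. A minimal sketch, assuming rules are encoded as (hypothesis, conclusion, prior) triples of fact sets:

```python
def solve(hypothesis, goal, inner_rules, domain_rules):
    """Forward chaining in the spirit of Algorithm 1. Inner rules are
    applied before domain rules; within each group, higher prior first.
    Returns the list of applied (hypothesis, conclusion) steps, or None
    if the goal cannot be reached."""
    facts, sol = set(hypothesis), []
    rule_groups = [sorted(inner_rules, key=lambda r: -r[2]),
                   sorted(domain_rules, key=lambda r: -r[2])]
    progress = True
    while progress and not goal <= facts:
        progress = False
        for rules in rule_groups:
            for hyp, con, _prior in rules:
                news = con - facts
                if hyp <= facts and news:      # rule fires and adds something
                    facts |= news
                    sol.append((hyp, con))
                    progress = True
            if goal <= facts:
                break
    return sol if goal <= facts else None

rules = [({"a"}, {"b"}, 10), ({"b"}, {"c"}, 5)]
print(solve({"a"}, {"c"}, [], rules))  # chains a => b, then b => c
```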
Problem 2: Given the knowledge domain (C, Funcs, Rules, Pros, P) about housing architecture: for the problem H → G of Problem 1, the system searches for the designing method satisfying the results of G. This method includes a perspective drawing and a construction drawing.



Input: the problem H → G as in Problem 1.
Output: the k solutions in Pros that best match the results of G.

From the requirements of the problem, Algorithm 2 determines a set of criteria matching those requirements through the values of the attributes computed in Problem 1. After that, using the sample solutions in Pros, the algorithm computes the matching point of each solution against the criteria and modifies suitable solutions if needed. Finally, the algorithm ranks the list of solutions by matching points.

Algorithm 2:
Step 0: Initialize
    sumPoint: the set whose members are tuples (p, point_p), in which p ∈ Pros and point_p is the point of solution p obtained by matching each attribute in G against solution p.
    FactSet: set of facts.
Step 1: Compute the point of each solution in Pros based on the criteria in P and the results in G.
    for each p in Pros
        point_p := 0
        for each object obj in the solution p
            if (all attributes of obj are consistent with some attributes of another object o in G) then
                Search for a criterion c_obj in P which has c_obj.ObjSearch = obj
                if (all attributes of the object o match some attributes of the object obj) then
                    point_p := point_p + c_obj.Point;
        sumPoint := sumPoint ∪ {(p, point_p)};
Step 2: Sort the solutions in sumPoint by descending point_p. Show the k best solutions in sumPoint.

Testing

The consultancy system for designing housing architecture is implemented in C#. The user interface of this program is given in Fig. 10.8.

Example 8: The example is obtained from Ref. [47]. The user inputs information about the house:

Information about the place:
- Location: Ho Chi Minh City.
- Area: countryside.
- Kind of street: small street.

FIGURE 10.8 The user interface of the consultancy system for designing housing architecture.


- Neighbors: center, mall.

Information about the building line:
- Line 1 = 12.0 m. Line 2 = 0.0 m. Line 3 = 0.0 m. Return line = 1.

Information about the shape of the ground:
- Overall area = 40.0 m². Constructing area = 40.0 m². Width = 4.0 m. Length = 10.0 m.
- Style: city house. Roof: flat. Direction: south.

The program consults on the design of the housing architecture in the following way:

Style, based on the regulations of QĐ 135/2007/UBND-TP (08/12/2007):
- Maximum number of floors = 4. Proposed number of floors = 2. Setback space of top floor = 0 m.
- Standard front elevation = 25.0 m. The height of floors = 2 m. Total ground area = 78.5 m².

Information about the first floor:
- Minimum height of floor = 3.0 m. Maximum height of floor = 7.0 m. Proposed height of floor = 3.0 m.
- Total ground area = 40.0 m². Using area = 25.0 m². Number of rooms = 5.
- List of rooms: restroom, kitchen, living room, working room, entertainment room.

Information about the second floor:
- Minimum height of floor = 2.5 m. Maximum height of floor = 3.5 m. Proposed height of floor = 2.5 m.
- Total ground area = 38.5 m². Using area = 27.0 m². Number of rooms = 3.
- List of rooms: restroom, kitchen, personal bedroom, large bedroom.

Number of clear floors = 1: minimum area = 1.5 m²; reality area = 1.5 m²; belongs to: second floor.
Number of large bedrooms = 1: minimum area = 15.0 m²; reality area = 15.0 m²; belongs to: second floor; direction: east.
Number of restrooms = 2: minimum area = 2.0 m²; reality area = 2.0 m²; belongs to: first floor, second floor; direction: west.
Living room: minimum area = 6.0 m²; reality area = 7.0 m²; belongs to: first floor; direction: east; should be near the kitchen and the working room.
Kitchen: minimum area = 3.0 m²; reality area = 7.0 m²; belongs to: first floor; direction: west; should be near the living room.
Working room: minimum area = 5.0 m²; reality area = 5.0 m²; belongs to: first floor; direction: east. If it is for outside work, the room should be near the living room; if it is for private work, the room should be near the bedroom.
Corridor: minimum area = 1.0 m²; belongs to: second floor; direction: west.
Terrace: minimum area = 7.5 m²; belongs to: second floor; position: front.
Garden: minimum area = 0 m²; belongs to: first floor; direction: west.
Stairs: type: U-shaped stairs; minimum area = 4.0 m²; reality area = 4.0 m².

FIGURE 10.9 The perspective drawing of Example 8 [47].



FIGURE 10.10 (A) The construction drawing of the ground floor. (B) The construction drawing of the first and second floors.

The perspective drawing of this architecture is given in Fig. 10.9. The construction drawing of this architecture is given in Fig. 10.10.
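The scoring and ranking steps of Algorithm 2, together with the matching test of Definition 2, can be sketched as follows. The dictionary layout, the value of δ, and all names are illustrative assumptions, not the authors' C# implementation.

```python
DELTA = 2.0  # δ: tolerance constant of the knowledge domain (assumed value)

def matches(goal_obj, stored_obj, delta=DELTA):
    """Definition 2 (sketch): every attribute of the goal object has a
    same-named attribute in the stored object whose value differs by at
    most delta."""
    return all(k in stored_obj and abs(stored_obj[k] - v) <= delta
               for k, v in goal_obj.items())

def rank_solutions(goal, pros, criteria, k=3):
    """Algorithm 2 (sketch): score each stored solution against the
    criteria and return the k best. goal and each solution map object
    name -> attribute dict; criteria map object name -> Point value."""
    scored = []
    for sol_id, objects in pros.items():
        point = sum(criteria.get(name, 0)
                    for name, attrs in objects.items()
                    if name in goal and matches(goal[name], attrs))
        scored.append((sol_id, point))
    scored.sort(key=lambda t: -t[1])  # descending point_p, as in Step 2
    return scored[:k]

goal = {"living room": {"area": 25.0}, "kitchen": {"area": 7.0}}
pros = {"S1": {"living room": {"area": 24.0}, "kitchen": {"area": 12.0}},
        "S2": {"living room": {"area": 25.0}, "kitchen": {"area": 7.0}}}
criteria = {"living room": 5, "kitchen": 3}
print(rank_solutions(goal, pros, criteria, k=2))  # [('S2', 8), ('S1', 5)]
```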


Conclusion and future work

In this chapter, an architecture and a method for designing KBSs are proposed. Besides, the processes for constructing the components of those systems, especially the KB and the IE, are also presented. Both the KB and the IE built with the proposed method can meet the requirements for application in real-world intelligent systems. The KB is designed by gathering information from the knowledge domain. The knowledge has main components, namely concepts, relations, and inference rules. The collected knowledge is represented using methods of knowledge representation. Besides, the KB also supports some basic knowledge manipulations, such as updating the knowledge, checking the knowledge's consistency, and unification of facts. The IE is constructed from the problems of the knowledge domain. Its reasoning process combines reasoning methods and heuristic rules to obtain the most effective deduction. The design of KBs and IEs is established based on the criteria of intelligent systems.

The proposed method for designing KBSs is useful for designing practical intelligent systems. It has been applied to build an intelligent problem solver for high-school solid geometry problems. The method can represent the knowledge domains therein completely and solve problems on them: it can represent more than 80% of the knowledge in the selected course and solve 62 out of 83 tested exercises with step-by-step solutions whose reasoning uses appropriate, high-school-level knowledge. Moreover, the built system meets the requirements of an intelligent system that supports learning, and it is ready to be applied in the real world. Besides, the proposed method has been applied to build a consultancy system for designing housing architecture based on the Vietnamese regulations of architecture and construction. This consultancy system has a KB including almost all rules in Ref. [12]. It can give consultations on some common housing architectures based on the requirements input by the user. Those architectures comply with the government regulations.

In the future, some methods for knowledge management will be studied more deeply [48]. They will be able to maintain the completeness and consistency of a KB in an intelligent system when it is updated. Nowadays, a real-world knowledge domain is integrated from many sources [49,50]; thus a method for integrating multiple knowledge domains will be studied to organize the KB closer to reality [15,51,52]. Especially, the designed method will be applied to build practical applications in other real-world domains, such as recommendation systems in healthcare, public administration services, and intelligent knowledge search for building intelligent chatbots [53].

References

[1] R. Akerkar, P. Sajja, Knowledge-Based Systems, Jones & Bartlett Learning, 2010.
[2] F. Harmelen, V. Lifschitz, B. Porter, Handbook of Knowledge Representation, Elsevier, 2008.
[3] N.V. Do, Intelligent problem solvers in education: design method and applications, in: V.M. Koleshko (Ed.), Intelligent Systems, InTech, 2012, pp. 121-148.
[4] M. Chein, M. Mugnier, Graph-Based Knowledge Representation: Computational Foundations of Conceptual Graphs, Springer-Verlag, London, 2009.
[5] W. Ertel, Introduction to Artificial Intelligence, Springer, 2011.
[6] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, third ed., Pearson, 2016.
[7] Vietnam Ministry of Education and Training, Textbook and Workbook of Geometry in High School, Publisher of Education, 2019.
[8] J.F. Sowa, Knowledge Representation: Logical, Philosophical and Computational Foundations, Brooks/Cole, 2000.
[9] S.P. Kumar, Knowledge-based expert system in manufacturing planning: state-of-the-art review, International Journal of Production Research 57 (15-16) (2019) 4766-4790.
[10] H. Yoshizumi, K. Hori, K. Aihara, The dynamic construction of knowledge-based systems, in: C.T. Leondes (Ed.), Knowledge-Based Systems, vol. 2, Elsevier, 2010.
[11] H. Nguyen, D. Nguyen, V. Pham, Design an intelligent problems solver about solid geometry based on knowledge model about relation, in: Proceedings of 2016 IEEE International Conference on Knowledge and Systems Engineering (KSE 2016), Ha Noi, Vietnam, October 2016, pp. 150-155.
[12] People's Committee of Ho Chi Minh City, The regulations of the housing architecture in Ho Chi Minh City, Regulation 135/2007/UBND-TP, 8 December 2007.
[13] H.D. Nguyen, N.V. Do, N.P. Tran, X.H. Pham, V.T. Pham, Some criteria of the knowledge representation method for an intelligent problem solver in STEM education, Applied Computational Intelligence and Soft Computing 2020 (2020), Article ID 9834218.
[14] R. Plant, R. Gamble, Methodologies for the development of knowledge-based systems, 1982-2002, The Knowledge Engineering Review 18 (1) (2003) 47-81.
[15] H.D. Nguyen, N.V. Do, V.T. Pham, A. Selamat, E. Herrera-Viedma, A method for knowledge representation to design intelligent problems solver in mathematics based on Rela-Ops model, IEEE Access 8 (2020) 76991-77012.
[16] A. Gunter, C. Kuhn, Knowledge-based configuration: survey and future directions, in: F. Puppe (Ed.), XPS-99, LNAI 1570, Springer, 1999, pp. 47-66.
[17] Y. Fu, J. Zhuang, Y. Chen, L. Guo, Y. Wang, A framework for optimizing extended belief rule base systems with improved Ball trees, Knowledge-Based Systems (2020).
[18] C. Ramirez, B. Valdes, A general knowledge representation model of concepts, in: C. Ramirez (Ed.), Advances in Knowledge Representation, InTech, 2012, pp. 43-76.
[19] Y. Wang, O.A. Zatarain, A novel machine learning algorithm for cognitive concept elicitation by cognitive robots, in: I. Management Association (Ed.), Cognitive Analytics: Concepts, Methodologies, Tools, and Applications, IGI Global, 2020, pp. 638-654.
[20] A. Felfernig, R. Burke, Constraint-based recommender systems: technologies and research issues, in: Proceedings of 10th International Conference on Electronic Commerce (ICEC 2008), Innsbruck, Austria, 2008, pp. 1-10.
[21] L.V. Naykhanova, I.V. Naykhanova, Conceptual model of knowledge base system, Journal of Physics: Conference Series 1015 (2018) 032097.
[22] W. Yang, C. Fu, X. Yan, Z. Chen, A knowledge-based system for quality analysis in model-based design, Journal of Intelligent Manufacturing 31 (2020) 1579-1606.
[23] C. Tong, D. Sriram, Artificial Intelligence in Engineering Design, vol. 3, Elsevier, 1992.
[24] D. Oragui, Tacit knowledge: definition, examples, and importance, 2020.
[25] J. Bobadilla, F. Ortega, A. Hernando, A. Gutiérrez, Recommender systems survey, Knowledge-Based Systems 46 (2013) 109-132.
[26] G. Adomavicius, A. Tuzhilin, Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions, IEEE Transactions on Knowledge and Data Engineering 17 (6) (2005) 734-749.
[27] Y. Koren, R. Bell, C. Volinsky, Matrix factorization techniques for recommender systems, IEEE Computer 42 (8) (2009) 42-49.
[28] S. Zhang, Cost-sensitive KNN classification, Neurocomputing 391 (2020) 234-242.
[29] P. Dai, Y. Yang, M. Wang, R. Yan, Combination of DNN and improved KNN for indoor location fingerprinting, Wireless Communications and Mobile Computing 2019 (2019), Article ID 4283857.
[30] N.V. Do, Ontology COKB for knowledge representation and reasoning in designing knowledge-based systems, Communications in Computer and Information Science (CCIS) 513 (2015) 101-118.
[31] Y. Wang, et al., Brain-inspired systems: a transdisciplinary exploration on cognitive cybernetics, humanity and systems science towards autonomous AI, IEEE SMC Magazine 6 (1) (2020) 6-13.
[32] N. Do, H. Nguyen, A. Selamat, Knowledge-based model of expert systems using rela-model, International Journal of Software Engineering and Knowledge Engineering 28 (8) (2018) 1047-1090.
[33] P. Sajja, R. Akerkar, Knowledge-based systems for development, in: A. Sajja (Ed.), Advanced Knowledge Based Systems: Model, Applications & Research, TMRF e-Book, 2010, pp. 1-11.
[34] N. Do, H.D. Nguyen, A knowledge model about relations and application, in: Proceedings of 4th International Conference on Data Mining and Intelligent Information Technology Applications (ICMIA 2012), Taiwan, 2012, pp. 701-704.
[35] G. Jakus, V. Milutinovic, S. Omerovic, S. Tomazic, Concepts, Ontologies, and Knowledge Representation, Springer, 2013.
[36] M. Berman, A Knowledge Representation Practionary, Springer, 2018.
[37] R. Brachman, H. Levesque, Knowledge Representation and Reasoning, Morgan Kaufmann, Elsevier, 2004.
[38] W. Kupers, Analogical reasoning, in: N.M. Seel (Ed.), Encyclopedia of the Sciences of Learning, Springer, Boston, MA, 2012.
[39] N. Do, H. Nguyen, A reasoning method on computational network and its applications, in: Proceedings of 2011 International MultiConference of Engineers and Computer Scientists (IMECS 2011), Hong Kong, 2011, pp. 137-141.
[40] N.V. Do, H.D. Nguyen, T.T. Mai, Designing an intelligent problems solving system based on knowledge about sample problems, in: Proceedings of 5th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2013), Kuala Lumpur, Malaysia, LNAI 7802, Springer, 2013, pp. 465-475.
[41] N. Do, H. Nguyen, A reasoning method on knowledge base of computational objects and designing a system for automatically solving plane geometry problems, in: Proceedings of World Congress on Engineering and Computer Science 2011 (WCECS 2011), San Francisco, 2011, pp. 294-299.
[42] J. Jiang, J. Zhang, A review and prospect of readable machine proofs for geometry theorems, Journal of Systems Science and Complexity 25 (2012) 802-820.
[43] F. Cheung, J. Kuen, M. Skitmore, Multi-criteria evaluation model for the selection of architectural consultants, Construction Management and Economics 20 (7) (2002) 569-580.
[44] A. Jabbarzadeh, Application of the AHP and TOPSIS in project management, Journal of Project Management 3 (2) (2018) 125-130.
[45] H. Pham, N. Do, H. Nguyen, A consulting system for estimating costs of an information technology hardware project based on law of public investment, in: Proceedings of 7th IEEE International Conference on System Science and Engineering (ICSSE 2019), Quang Binh, Vietnam, 2019, pp. 285-290.
[46] H. Tran, L. Le, Y. Lee, A fuzzy AHP model for selection of consultant contractor in bidding phase in Vietnam, Journal of Construction Engineering and Project Management 5 (2) (2015) 35-43.
[47] N. Do, H. Nguyen, Knowledge-Based Systems, Publisher of Vietnam National University, Ho Chi Minh City, 2017 (in Vietnamese).
[48] F. Correa, D. Carvalho, Holistic knowledge management: adherence analysis of the Castillo and Cazarini model, Knowledge Management Research & Practice 18 (4) (2019) 439-449.
[49] M. Dumontier, C.J. Baker, J. Baran, et al., The Semanticscience Integrated Ontology (SIO) for biomedical research and knowledge discovery, Journal of Biomedical Semantics 5 (2014) 14.
[50] R. Salakhutdinov, Integrating domain-knowledge into deep learning, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, Alaska, 2019.
[51] N. Do, H. Nguyen, T. Mai, A method of ontology integration for designing intelligent problem solvers, Applied Sciences 9 (18) (2019) 3793.
[52] S. Fratini, T. Nogueira, N. Policella, Integrating modeling and knowledge representation for combined task, resource and path planning in robotics, in: Proceedings of the Workshop on Knowledge Engineering for Planning and Scheduling, 27th International Conference on Automated Planning and Scheduling (ICAPS 2017), Pittsburgh, PA, 2017.
[53] H. Nguyen, V. Pham, D. Tran, T. Le, Intelligent tutoring chatbot for solving mathematical problems in high-school, in: Proceedings of 2019 11th IEEE International Conference on Knowledge and Systems Engineering (KSE 2019), Da Nang, Vietnam, October 2019, pp. 145-150.


Chapter 11

IoT in healthcare ecosystem

Poonam Gupta and Indhra Om Prabha M
G H Raisoni College of Engineering and Management, Pune, India



The healthcare ecosystem is going through a paradigm shift as IoT finds its way into the healthcare domain. The shift, however, is made smooth by the fact that IoT leverages the existing pervasive presence of connected objects and networked resources in the medical platform. IoT helps to explore new dimensions of patient care through remote care, faster disease diagnosis, proactive treatments, and post-treatment care [1]. It has also increased patient engagement with doctors, as the relationship between the two parties is made more efficient and easier with the help of IoT. Currently, the major operational bottlenecks faced by healthcare systems are understaffed hospitals, the increasing cost of sophisticated healthcare services, and physicians unwilling to relocate to rural places [2,3]. Along with this, the rise in the aging population who need remote care is also a growing concern in many countries. Additionally, another new challenge is the novel coronavirus disease 2019 (COVID-19), which adds strain to the already-strained medical field. The IoT revolution, achieved by the confluence of technology and the innovations of scientific researchers, provides solutions for these healthcare needs. Besides, as IoT leverages the already-existing medical infrastructure and network, setting up an IoT-enabled healthcare system proves to be viable for most clinical providers. The doctor-to-patient ratio is below the WHO-prescribed value of 1:1000 in most countries [4,5]. For instance, in India there is 1 doctor for every 1445 citizens; under such conditions, smart healthcare comes in handy. Telehealth reduces the number of patients that visit the hospital and enables the diagnosis of diseases with the help of smart devices without the intervention of a physician [6], post-discharge check-ups, and the monitoring of patients remotely [1,7]. All of these are nowadays possible with the modern healthcare services provided by the IoT revolution, without the need for a physician.
The healthcare industry involves high costs, with high consultation charges that go toward managing and providing facilities for on-premises services. Access to some sophisticated devices and diagnostic procedures still costs a fortune. Healthcare insurance premiums have skyrocketed in recent years, indicating high-priced healthcare services. With the advent of IoT in the medical field, the cost of these services is likely to reduce and become feasible. Furthermore, in developing nations, the major challenge is providing healthcare to the rural population [2,3]. Healthcare is the right of every citizen, and providing equal healthcare regardless of where the patient is located is essential. However, the current situation lacks quality infrastructure and involves a lack of access to basic medicines and a dearth of qualified medical professionals in rural areas. In a country like India, where 65% of the population is based in villages, the lack of reach of healthcare facilities to rural areas is considered a major failure. IoT proves to be a promising technology to bridge this gap. IoT offers low-cost remote health monitoring and ensures the easy availability of healthcare devices, thus making healthcare affordable, available, and accessible, as dealt with in Ref. [3].

Nowadays, the aging population is one of the biggest concerns encountered by the healthcare industry in many countries. With the elderly population increasing rapidly around the world, they constitute a substantial part of society. It is predicted that by 2050 the number of people above the age of 65 years will double, with life expectancy data portraying a growth trend. This challenge is to be handled by all nations regardless of the socioeconomic status of the country. IoT provides a cost-effective solution for meeting this challenge and improving the quality of life that can be provided to elderly people. Some of the prominent issues faced by elderly people are isolation from society, unpredicted falls [8], and adherence to medication [9-11]. Falls are the second leading cause of accidental death in the world, and adults aged above 65 years suffer the greatest number of fatal falls or falls that cripple them for the rest of their lives. In Ref. [8], an IoT-based device-type-invariant fall detection system has been proposed as a solution; it incorporates an accelerometer, GPS, and Wi-Fi into any device, such as a smartphone, Arduino, or Raspberry Pi board. A simple design is made with a client-server architecture: the continuous motion data is fed to the server or the cloud, where the pattern is studied, and if any abnormality in the data is sensed, alerts are sent immediately to the connected healthcare facility. A step further, IoT can monitor and report whether patients follow their medications as instructed; Refs. [9-11] explain how adherence to medication can be tracked and monitored. Loneliness can also be combated with the help of IoT-based robotics, which can be friendly, assist the elderly in day-to-day activities, respond to commands, and act as a companion.

The chapter is organized as follows. Section 11.2 covers the different applications of IoT in the healthcare landscape. Section 11.3 deals with the path-breaking implementation methodologies of IoT in healthcare. This is followed by decoding the architecture of a couple of IoT models in Section 11.4. Section 11.5 analyzes the challenges faced by IoT in healthcare. In Section 11.6, security issues and defense mechanisms are discussed. Section 11.7 elaborates on IoT's role in fighting the COVID-19 global pandemic.
Finally, Section 11.8 deals with the future of IoT with the inception of 5G and the emerging synergy between artificial intelligence (AI) and the data generated by IoT, followed by the conclusion, where the future face of IoT is discussed.


11.2 Applications of Internet of Things in healthcare

IoT has a gamut of applications in the healthcare domain. This section classifies them into two broad categories: patient-centric and hospital-centric applications. Patient-centric applications deal with remote patient monitoring and care, pathology detection, emergency patient care, and body-wearable devices. Hospital-centric applications deal with the efficient use of hospital resources and the deployment of healthcare professionals. IoT also helps the pharmaceutical and medical insurance fields, which likewise come under the umbrella of healthcare.

11.2.1 Patient-centric IoT

There is a growing demand for healthcare professionals and hospitals due to the rise in chronic disease and the growth of the aging population. These challenges are met by advancements in technologies like IoT, AI, and data analytics and by employing them for the benefit of healthcare.

Remote patient care

Remote patient care is a part of telehealth in which patients are monitored remotely from their residences without the need to visit the hospital [3]. It is a widely preferred model for the elderly population and for people whose mobility is restricted.


Elderly people care

Invariant fall detection: Elderly people care is a pressing need, with more elderly people staying alone at home. A device with an accelerometer, such as a smartphone or a Raspberry Pi board, kept in the patient's pocket, records the patient's movement. The accelerometer data is continuously read and sent to a server in the cloud, where a machine learning algorithm predicts any fall or abnormal motion, as in the IoT-based, device-type-invariant fall detection system of Ref. [8].

People whose mobility is restricted

Quadriplegia patients have no functional limbs, and IoT can help them in multiple ways. The Healthcare Robotics Lab at Georgia Tech has designed a robotically controlled bed called Autobed, which can be controlled remotely through a web-based interface [12]. Similarly, the use of IoT-enabled smart wheelchairs [13] is an innovative model that makes life easier.
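As a rough illustration of the kind of processing such a server might perform, a simple fall detector can flag a sudden spike in acceleration magnitude followed by near-stillness. The thresholds and window size below are illustrative assumptions, not values taken from Ref. [8]:

```python
import math

# Illustrative thresholds (assumed, not from Ref. [8]): a fall shows up
# as a spike in acceleration magnitude followed by a period of
# near-stillness while the person lies on the floor.
IMPACT_G = 2.5      # spike threshold, in g
STILL_G = 1.2       # "lying still" threshold, in g
STILL_SAMPLES = 5   # consecutive quiet samples required after the spike

def magnitude(sample):
    """Euclidean norm of one (x, y, z) accelerometer reading, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples):
    """Return True if the stream shows an impact spike followed by stillness."""
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m >= IMPACT_G:
            after = mags[i + 1:i + 1 + STILL_SAMPLES]
            if len(after) == STILL_SAMPLES and all(a <= STILL_G for a in after):
                return True
    return False

# Normal walking: magnitudes hover around 1 g, so no alert.
walking = [(0.1, 0.2, 1.0)] * 10
# A fall: impact spike, then the person lies still.
fall = [(0.1, 0.2, 1.0)] * 3 + [(2.0, 1.5, 1.8)] + [(0.0, 0.1, 1.0)] * 6

print(detect_fall(walking))  # False
print(detect_fall(fall))     # True
```

A production system, as Ref. [8] suggests, would replace the fixed thresholds with a learned model on the server, but the client-side flow of buffering and streaming samples is the same.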

IoT in healthcare ecosystem Chapter | 11




Heart disease prediction

Patients with pre-existing conditions like high blood pressure can be kept under continuous monitoring [6]. Wearables such as a smartwatch and an electrocardiogram (ECG) sensor fixed to the patient's body read the data and send it to the cloud seamlessly. In case of any abnormality, warning messages are sent to the patient and the immediate caretaker so that they can act promptly.

People with a chronic condition

There are cases where patients do not require hospitalization but are under constant threat due to chronic conditions. For such patients, IoT solutions come as a boon. For instance, an IoT-enabled heating, ventilation, and air conditioning (HVAC) unit can be installed in the home or living room of a patient who suffers from a respiratory disease like chronic obstructive pulmonary disease, for whom ventilation and air quality are crucial. The sensors in the HVAC unit constantly read the air pollution level, and the smart HVAC device adjusts it back to the desired level as needed [14].

Pathology and fatal viral/bacterial diseases

Electroencephalogram (EEG) sensors are fixed at crucial points on the patient and the electrical signals are monitored. The EEG data, along with facial expressions, speech, movements, and gestures, is analyzed, and the occurrence of diseases is predicted, as described in Ref. [15].

Critical and emergency patient care

Time, availability, and accuracy of information play a major role in critical and emergency patient care. In most cases, emergency doctors lack patient history, and the information collected during the patient's transit in the ambulance is often inaccurate. In such cases, a centralized database that maintains patients' data for IoT healthcare serves as a potential reserve. The emergency care physician can thus be well informed about the patient's record by the time the patient arrives and be equipped with a plan of action. In this way, associated losses are reduced and emergency healthcare is improved [16].

Food and workout monitoring

Apart from critical patient care, food and workout monitoring is gaining popularity among fitness-conscious people and sportspersons. IoT-based wearable fitness trackers track the physical activity of a person [17].
Wristbands and smartwatches can track the number of steps taken, distance covered, calories burnt, heart points, average speed, body temperature, heart rate, and sleep quality of a person.

Affective computing

Affective computing is the process of recognizing human emotion with the help of computing systems and devices. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. IoT devices, including sensors, microphones, and cameras, help capture the emotional state of the user [18]. Affective computing technologies sense the emotional state of a user and respond by performing specific predefined tasks, such as changing the category of a quiz or recommending a set of videos to align with the mood of the learner. For example, if a patient with quadriplegia who cannot move any of their limbs wants to make a distress call, the camera senses their facial expression and the system recognizes their voice, which may be incoherent, and responds to the user's emotions by performing a predetermined action; alternatively, with the help of AI, the action is decided using prediction methodologies. In other words, affective computing can be considered human–computer interaction in which the computer reacts based on the human's facial expressions, stimuli, gestures, speech, or posture. Even the temperature changes of the hand recorded by a mouse can act as an input. Thus affective computing uses physiological and emotional changes in the human to predict the action to be performed.

11.2.2 Hospital-centric IoT applications

IoT devices and applications play a major role in optimizing hospital management and operations, apart from patient care. Operations management in the hospital is essential to control costs and improve the quality of service provided to patients. Hospitals achieve this by employing IoT.


Real-time location of medical equipment

IoT devices tagged with sensors are used for tracking the real-time location of medical equipment like wheelchairs, defibrillators, nebulizers, oxygen pumps, and other monitoring equipment [19].

Deployment of medical staff

Medical staff deployment can be monitored with the help of IoT. The movements of hospital staff and doctors can be tracked, and in an emergency, the doctor nearest to the location can be called on immediately, reducing the time it takes for a doctor to reach a critical patient within the hospital premises [19].

Drugs management

The management and governance of drugs is a major challenge in the medical sector. There is a high risk of counterfeit drugs replacing genuine medicines, and many drugs are sensitive to parameters like temperature and humidity. These risks are managed by implementing IoT and blockchain technology [20]. For instance, temperature-sensitive drugs are kept in refrigerators or customized medical boxes in which the temperature and humidity are maintained as recommended by the drug manufacturer [21].

Reducing the charting time

Patient charting is one of the important duties of doctors and is also a considerably time-consuming activity. A patient chart keeps the complete information of the patient on file, including demographics, diagnoses, medications, vital signs, allergies, lab/test results, treatment plans, immunization dates, and progress notes. IoT provides a solution by recording the physician's voice commands and making the patient's data readily accessible for review, thereby saving about 15 hours of a doctor's time per week.

11.2.3 IoT benefitting health insurance companies

Insurance companies can benefit from the data collected by the healthcare industry. They can leverage this data for underwriting tasks and claim verification, and can more easily detect fraudulent claims since the person's medical history holds all the necessary data. Data captured by IoT devices thus brings transparency between clients and insurance providers, and all the insurance processes, such as underwriting, pricing, claims, and settlements, are made easier.

11.2.4 Pharmaceutical governance

The pharmaceutical industry constantly faces threats posed by parallel counterfeit drug manufacturers. The infusion of fraudulent drugs in place of legitimate drugs has increased in the past decade, and it is a huge challenge to identify and curb this parallel supply chain as it operates invisibly. IoT, along with blockchain, has helped the pharmaceutical industry address this challenge by enhancing the traceability of medicines in the pharmaceutical supply chain [20]. The drugs are tracked from the source until they reach the customers, without any breach or intrusion.


11.3 Implementation methodologies

In the standard implementation of the IoT model, the IoT devices/sensors are directly connected to the cloud, and the data generated by the sensors is sent directly to the cloud, where it is stored and processed. However, with the rapid growth of IoT and innumerable devices getting connected to the cloud, this model started to experience bottlenecks in storage space, processing speed, and, above all, bandwidth. This gave way to the concepts of fog and edge computing, which take processing closer to the source where the data is actually created: only the required, already-processed data is sent to the cloud, and unwanted data is eliminated at the edge or fog layer itself. In this way, processing speed is improved, storage space is optimized, and bandwidth is saved.



11.3.1 Fog computing

In fog computing, a centralized computing device processes data from different endpoint IoT devices [22,23]; in edge computing, by contrast, all the nodes process data. Fog computing takes the processing power and data analytics to the local area network or the IoT gateway.

Architecture

The architecture of fog computing is shown below (Fig. 11.1). It consists of three main layers:

Smart IoT devices/applications

These are the devices, like sensors and actuators, at the source that generate data and transmit it to the fog network and eventually to the cloud.

Fog nodes

The fog nodes collect data from multiple sources and process it in real time. They are capable of handling multiple protocols. The fog nodes run IoT-enabled applications for real-time analysis, and their response time is often in milliseconds.

Cloud

The cloud receives and puts together the periodic data summaries sent by the different fog nodes. It also performs analysis on the data to gather business insights and then sends new application rules based on those insights to the fog nodes. Fog computing is responsible for analyzing the most time-sensitive data sent by IoT devices: the fog nodes closest to the sensors look for problems or warning signs and, when one is detected, send control signals to the actuators. Data that can wait a few seconds or minutes is sent to a fog node for analysis and action, while data that is much less time-sensitive is sent directly to the cloud for long-term storage or historical analytics. Fog nodes also send data periodically to the cloud for storage. By effectively using local data and sending only selective data to the cloud, fog computing saves bandwidth. In Ref. [23], reliable data transmission and rapid data processing are considered crucial aspects of fog computing, and a reduced variable neighborhood search-based sensor data processing framework is developed to improve both.

FIGURE 11.1 Standard implementation of a fog computing model.
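The time-sensitivity-based routing policy described above can be sketched as follows. The tier names and latency thresholds are illustrative assumptions, not values prescribed by the chapter:

```python
# Illustrative routing policy for a fog deployment: the most
# time-sensitive data is handled at the nearest fog node, less urgent
# data goes to a fog aggregation node, and everything else is sent to
# the cloud for long-term storage and analytics. Thresholds are assumed.

def route(reading):
    """Decide where a sensor reading should be processed.

    `reading` is a dict with a `max_delay_s` field: how long the data
    can wait before it must be acted on.
    """
    if reading["max_delay_s"] < 1:        # must be acted on within a second
        return "nearest-fog-node"
    elif reading["max_delay_s"] < 60:     # can wait seconds to a minute
        return "fog-aggregation-node"
    else:                                 # historical analytics only
        return "cloud"

print(route({"sensor": "ecg", "max_delay_s": 0.05}))           # nearest-fog-node
print(route({"sensor": "room-temp", "max_delay_s": 30}))       # fog-aggregation-node
print(route({"sensor": "daily-steps", "max_delay_s": 86400}))  # cloud
```

In a real deployment, this decision would typically be configured per data type rather than computed per packet, but the tiering logic is the same.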


Advantages

- Low latency: as the processing nodes are closer to the user, the latency in fog computing is negligible.
- High security: security is handled by multiple nodes.
- No bandwidth issues, as the data is not sent in bulk.

11.3.2 Edge computing

Edge computing provides intelligent services by leveraging local computing on local edge devices [24,25]. The local edge devices could be routers, smartphones, PCs, or ATMs.

Architecture

The architectural diagram below shows the different layers of edge computing. The two layers between the IoT devices and the cloud layer constitute the edge components; the edge nodes communicate with the edge IoT gateway. For instance, if a nuclear reactor reports its temperature every second and the cloud-based analytical application requires data consolidated per minute, the per-second data is stored and processed in the edge node and only the per-minute data is sent to the cloud (Fig. 11.2).
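The reactor example above can be sketched as a simple edge-node aggregator: the raw per-second stream stays local, and only one consolidated record per minute is forwarded. The summary fields chosen here are an illustrative assumption:

```python
# Sketch of the edge-node aggregation described above: the reactor
# reports a temperature every second; the edge node keeps the raw
# stream locally and forwards only one consolidated record per minute.

def aggregate_minute(per_second_temps):
    """Reduce 60 per-second readings to the summary sent to the cloud."""
    assert len(per_second_temps) == 60, "expects one full minute of data"
    return {
        "min": min(per_second_temps),
        "max": max(per_second_temps),
        "mean": sum(per_second_temps) / len(per_second_temps),
    }

# Simulated per-second stream cycling through 300.0 .. 300.4 degrees.
readings = [300.0 + (i % 5) * 0.1 for i in range(60)]
summary = aggregate_minute(readings)
print(summary)
```

Sixty raw readings are thus collapsed into one three-field record, which is the bandwidth saving the section describes.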

IoT nodes/applications

This includes all the devices and applications that generate data at regular intervals. In some cases, the IoT devices act as edge nodes as well.

Edge nodes

The edge nodes can be separate nodes or the IoT devices themselves. They are responsible for data generation, collection, preprocessing, and analytics, and are connected to sensors, actuators, and smart devices. These edge nodes help decentralize data processing and lower the dependence on the cloud.


Cloud

The cloud receives the periodic data from the gateway, and data analytics is performed on it. Ref. [24] proposes the BodyEdge framework, which implements edge computing with a three-tier architecture: IoT edge devices, the edge gateway, and the cloud layer. BodyEdge consists of a mobile client module and an IoT edge gateway that supports multiradio, multitechnology communication to collect and locally process data from different scenarios. The proposed model has been shown to reduce data traffic toward the Internet and to improve processing time.

Advantages

- Shorter response time, as the data is processed locally
- Preprocessing and filtration of the data

FIGURE 11.2 Standard implementation model of edge computing.




- Aggregation of the data to be sent to the cloud
- Reduced bandwidth usage for transmitting data to the cloud
- Low operating cost, as the data to be managed by the cloud is reduced
- Fewer delays, as the processing is split across multiple nodes
- Improved performance due to low latency
- High scalability
- Conservation of the network

Empowering edge computing

Edge nodes are capable of real-time processing of the data. If more intelligence is added to the edge nodes, then instead of relying on the cloud, the edge device can perform data analytics on its own. For instance, if a deep learning architecture like a convolutional neural network is embedded in an edge node, data analytics and decision-making can be brought even closer to the source, reducing latency and saving cost and bandwidth. Real-time outcomes can be seen in ATMs, which act as edge devices and transmit every transaction to the cloud. With a deep learning model embedded in the ATM, its camera can detect whether someone is accessing the ATM while wearing a cap or a helmet; instead of relying on the cloud for processing, which could take some time, the processing is done in real time at the ATM itself. Bringing analytics to the edge devices also makes it easier to scale up and modify the analytics at the node level, as compared to the cloud, without affecting the seamless operation of the device.


11.4 Implementation models

Two implementation models of IoT in healthcare are discussed in detail in this section, providing an end-to-end insight into their design. It explains how data is collected from the sensors attached to a patient's body, transferred to the cloud, and finally stored in a repository, where exhaustive analysis of the data happens and results are predicted.

11.4.1 Heart disease prediction

IoT's breakthrough in heart disease prediction is a niche in the medical landscape. Heart disease is a leading cause of death, and it can be tracked and monitored by implementing IoT. In Ref. [26] the author proposed a novel system using a modified deep convolutional neural network (MDCNN). The system involves two sensors, a smartwatch and a heart monitoring device, attached to the body of the patient, which collect and relay data to the MDCNN; they monitor the patient's blood pressure and ECG metrics. The collected data is then subjected to deep learning analysis in three steps before a prediction is made: (1) preprocessing, (2) feature selection, and (3) classification using the MDCNN. The accuracy of this method is 98.2%, which is better than that of other deep learning and logistic regression methods. The IoT design of the framework is shown in Fig. 11.3. The data is collected from the wearable devices worn by the patient, namely the smartwatch and the AD8232 heart monitoring board, which read the patient's resting blood pressure and ECG data. The smartwatch used is the Omron HeartGuide (BP8000M), which can send data to the Omron cloud and thereby send alert messages to the mobile numbers of the concerned people when the readings go abnormal.

FIGURE 11.3 IoT framework of the heart disease prediction model.

The data collected from the IoT sensor devices is captured and processed using a Raspberry Pi board. This data is then sent through a secured long-range (LoRa) communication gateway to the LoRa cloud. LoRa is a long-range, spread-spectrum-modulated technique in which packets can be sent over a range of a few kilometers. It is a low-power protocol, so the battery can last for about a year. This longevity is achieved by powering the radio up and down: the LoRa transmitter is powered up only when packets are sent, instead of being kept always powered like a Wi-Fi radio. The protocol is limited to transmitting only a few hundred bytes per transmission.
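The three-step pipeline above (preprocessing, feature selection, classification) can be sketched as follows. The stand-ins below only illustrate the data flow; the feature names, thresholds, and the trivial classifier are assumptions, not the MDCNN of Ref. [26]:

```python
# Illustrative data flow for the three-stage pipeline of Ref. [26].
# Each stage is a trivial stand-in: the real system uses an MDCNN
# for the classification stage.

def preprocess(record):
    """Stage 1: normalize raw readings to zero mean and unit-ish scale."""
    mean = sum(record) / len(record)
    spread = (max(record) - min(record)) or 1.0
    return [(v - mean) / spread for v in record]

def select_features(record, k=2):
    """Stage 2: keep the k features with the largest magnitude."""
    return sorted(record, key=abs, reverse=True)[:k]

def classify(features, threshold=0.4):
    """Stage 3: stand-in classifier (the paper uses an MDCNN here)."""
    score = sum(abs(f) for f in features) / len(features)
    return "at-risk" if score > threshold else "normal"

# Hypothetical raw readings (e.g., systolic BP, diastolic BP, pulse, ...).
raw = [120.0, 80.0, 72.0, 95.0]
print(classify(select_features(preprocess(raw))))
```

The point of the sketch is only the staging: raw sensor values are cleaned, reduced to the most informative features, and only then classified.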

11.4.2 Healthcare IoT-based affective state mining using deep convolutional neural networks

Besides predicting people's physiological conditions, IoT applications are also used to predict their psychological conditions. IoT sensor devices record psychological observations with the help of electromyography (EMG), ECG, and electrodermal activity (EDA) medical sensors [18].

Electrodermal activity

EDA refers to the continuous changes in the electrical characteristics of the skin. Sweat glands are controlled by the sympathetic nervous system; when it is aroused, the skin starts to sweat. Thus the electrical (conductance or resistance) properties of the skin can be used to measure the emotional and sympathetic responses of humans.

Electromyography

EMG is a technique for measuring the electrical activity produced by skeletal muscles. The signals generated can be used to determine the biomechanics of human movement; here they are used to discriminate valence levels.

Electrocardiogram

ECG is a technique in which the electrical activity of the heart is read using electrodes placed on the patient's body and the resulting graph is recorded.

The data collected from these IoT devices is analyzed through a deep convolutional neural network (CNN) to determine the covert affective state of human emotions. In the study in Ref. [18], the authors considered five basic emotions: happy, sad, relaxed, disgusted, and neutral. The paper argues that conventional emotion sensing uses facial recognition, speech recognition, mood analysis of cognitive skills, and sentiment analysis from social networking sites, all of which have drawbacks. In facial recognition, there is a chance that the person fakes their expression while their covert emotion is different; speech does not work for continuous emotion analysis; and an individual may not be active on social networking sites all the time.
Hence the authors suggest an innovative method of analyzing emotions from the physiological state of the body by reading the electrical signals produced by crucial parts of the body, as these form the grassroots of any emotion the human reflects. Internet of Medical Things (IoMT) sensors, such as EDA, EMG, and ECG, extract the arousal and valence of the user. The challenge of mapping these readings to human emotions is handled with machine learning. The framework, referred to as IoMT-based Affective State Mining (IASM), collects the combined data from all three sensors and predicts the affective state of mind. Collecting data from sensors with varied parameters and converting the physiological readings to a psychological pattern is done with the help of a series of machine learning procedures. The signal peaks and index are identified through a state machine design. A number of features are extracted from the ECG, and the graph with peaks is recorded. Similarly, the skin conductance level is measured with a number of features, and the graph of power spectral density is drawn for the happy and disgusted emotions. Joy or happiness and the valence levels are measured via features extracted from the EMG, such as the mean, SD, and integral of zygomaticus major activity (ZMA). The IASM framework has shown 90% accuracy in detecting the sad emotion. In general, it has achieved 87.5% classification accuracy, compared to 72% for a radio-frequency-based emotion analyzer and 82.9% for a support vector machine (SVM)-based classifier.
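The kind of per-window feature extraction described above (mean, SD, peak counting on a physiological signal) can be sketched as follows. The feature set is a generic illustration, not the exact features used by the IASM framework of Ref. [18]:

```python
# Generic feature extraction for one window of a physiological signal
# (EDA, EMG, or ECG samples). Mean, standard deviation, and a naive
# local-maximum peak count; illustrative only, not the IASM feature set.
import statistics

def extract_features(signal):
    """Summarize one signal window as a small feature dict."""
    peaks = sum(
        1 for i in range(1, len(signal) - 1)
        if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
    )
    return {
        "mean": statistics.fmean(signal),
        "sd": statistics.stdev(signal),
        "peaks": peaks,
    }

# Simulated EDA window: conductance rising with three local peaks.
eda_window = [0.2, 0.5, 0.3, 0.8, 0.4, 0.9, 0.6]
feats = extract_features(eda_window)
print(feats["peaks"])  # 3
```

Feature vectors like this, computed per sensor and per window, are what the downstream classifier maps to an affective state.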




11.5 Challenges in healthcare IoT

Every technology has its share of challenges. This section discusses the challenges of IoT in healthcare under three categories: technology-oriented challenges, the general public's outlook on remote healthcare, and the security issues of maintaining patients' records and real-time data, since patients' lives are at stake.

11.5.1 Technology-oriented challenges

Internet disruption is the main issue faced by the healthcare IoT industry. The Internet connection has to be seamless, with maximum bandwidth; any interruption may have adverse effects. The following are the technology-oriented challenges faced by healthcare IoT.

Risking the patient's life

Internet disruptions can exact a very heavy price, since the life of the patient is on the line in the healthcare domain. Hence zero tolerance toward Internet connectivity problems has to be ensured. Exhaustive tests of the load, network bandwidth, and latency must be done for worst-case scenarios in both mobile and web applications, and the peak load must be tested. IoT crashes are unacceptable when life-saving, sensitive equipment is involved. For instance, a smart insulin pump releases insulin based on the patient's current insulin level; considered a digital pancreas, such equipment has no window for malfunction [27].

Incorrect results

The stability of the user's Internet connection may distort test results. If a patient's smart thermometer or inhaler suffers a loss of connectivity or bandwidth issues, the receiver is likely to get an erroneous reading. To circumvent this, the readings are copied into local storage before being relayed to the destination.

No planned downtime

There is no place for planned downtime in IoT devices, as they are continuously connected and relaying data. Patches and updates therefore need to be applied alongside the device's operation, and they should not tamper with sensitive data.

Need for a specialized tool to handle diversified protocols

As IoT protocols lack standardization, different IoT devices follow different protocols. MQTT is the predominant protocol, as it handles low-bandwidth and low-memory devices, but some devices employ HTTP, CoAP, XMPP, and many others.
To enable communication between devices using different protocols and with the servers, performance test engineers bring in a fitting load-test tool; this step is considered an overhead.

Remote places with a lack of infrastructure and connectivity

In developing and underdeveloped nations, the main challenge is the infrastructure needed to get connected to the web [2]. In such cases, remote healthcare service remains a far-fetched concept.
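The buffer-before-relay strategy mentioned above for unreliable connections can be sketched as follows. The `send` callback is a hypothetical placeholder for whatever transport the device actually uses:

```python
# Sketch of buffer-then-relay: every reading is appended to local
# storage first; transmission is retried from the buffer, so a dropped
# connection delays readings instead of corrupting or losing them.
# `send` is a placeholder for the device's real transport.

class BufferedRelay:
    def __init__(self, send):
        self.send = send
        self.buffer = []              # stands in for on-device storage

    def record(self, reading):
        self.buffer.append(reading)   # persist before any network I/O
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.send(self.buffer[0])
            except ConnectionError:
                return                # keep the reading; retry next flush
            self.buffer.pop(0)        # remove only after confirmed send

received = []
link_up = False

def send(reading):
    if not link_up:
        raise ConnectionError("link down")
    received.append(reading)

relay = BufferedRelay(send)
relay.record(37.2)   # link down: reading is buffered, not lost
link_up = True
relay.record(37.4)   # link restored: both readings are delivered in order
print(received)      # [37.2, 37.4]
```

The key detail is that a reading leaves the buffer only after a confirmed send, which is what prevents the erroneous or missing readings the section warns about.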

11.5.2 Adapting to remote healthcare and telehealth

People have yet to fully embrace remote healthcare. The majority of the population is still hesitant about remote healthcare and only trusts personally visiting hospitals and doctors. However, the Covid-19 pandemic, on a positive note, increased the number of people using remote healthcare. As hospitals were closed for regular patients and illnesses, people had to switch to telehealth, and the records show that an overwhelming number of them are comfortable with it and will continue to use it in the future.

11.5.3 Data security

Data security in healthcare IoT is still maturing. Any compromise of the data exposes patients' safety and privacy to undue risk [28–30], and the security vulnerabilities can be exploited by hackers with vested interests.



Patients' primary concern is that their health records remain confidential. The patient health records maintained centrally by a hospital can become a viable target for cyber attacks. Many kinds of attacks, such as storage attacks, nodal attacks, and network attacks, are possible in IoT due to its complex design. An IoT framework involves many devices and interfaces: sensor devices communicate with one another, with the storage node, with a gateway, and eventually with the cloud, so cyber attacks are possible at various levels. Medjacking is the term for medical device hijacking [31]. If a hacker gains access to a single device connected to the network, the hacker can control the network, including the dosage of a drug given to a patient, which could prove lethal. The next section deals with security threats and the different ways of combating them.


11.6 Security issues and defense mechanisms in IoT

The crux of IoT is to connect all possible devices to the Internet seamlessly and to make data transmission and real-time data analytics standard practice. However, with this comes a great deal of security risk. In healthcare, the accuracy and privacy of the data are pivotal. Security threats always exist in the form of invasion of privacy, unauthenticated access, denial-of-service attacks, man-in-the-middle attacks, remote hijacking, and malware botnets. The three principles of security, namely confidentiality, integrity, and authenticity, must not be compromised.

11.6.1 Security requirements in healthcare IoT

Confidentiality

Confidentiality ensures the privacy of the information passed from the sender to the receiver, in addition to protecting the user's identity from external intrusion. The data passes through different nodes, namely storage nodes, processing nodes, the IoT gateway, and the cloud, and must cross these interfaces and gateways without being compromised. The risk is the disclosure of sensitive data, or sensitive data landing in neighboring nodes or in the hands of unauthorized users.

Integrity

Integrity ensures that the data is real and accurate: the data received at the destination must be exactly as sent by the source, with no unauthorized deletion, modification, or insertion. There are real chances of the data being tampered with by malicious users, or of significant errors due to channel imperfections or electromagnetic disturbances. It is imperative that IoT uses secure media and interfaces.

Authentication

Authentication validates the identities of the source and the destination and confirms they are legitimate users. Unauthorized access to data by malicious users is a major threat. Tag cloning, radio frequency identification (RFID) eavesdropping, and spoofing are some of the common authentication attacks.

11.6.2 Attacks on IoT devices

A plethora of security attacks is possible on IoT devices; some of the common ones are discussed below.

Sinkhole attack

A sinkhole attack is a network layer attack in which a compromised or malicious node broadcasts that it is the next best hop for all the nodes present in the network, thereby rerouting all the data to itself. Since it never drops a packet, the presence of the sinkhole remains unidentified, yet such a node poses a huge threat to the data. Since it sinks all the data in the network, it is called a sinkhole attack.

Blackhole attack

In this network attack, the compromised or malicious node drops all the packets routed through it, causing data loss and the eventual stoppage of data traffic.

IoT in healthcare ecosystem Chapter | 11

Selective forwarding attack (grayhole attack)

This is a variant of the blackhole attack in which the compromised node does not drop all the packets; it drops packets selectively and hence remains undetected by the network's intrusion detection system (IDS).

Wormhole attack

A wormhole attack is considered a deadly attack. Two malicious nodes occupy strategic positions in the network and form a tunnel between them: data received by one node is transferred through the tunnel to the other, and the pair is capable of recording all the data that passes through the network.

Sybil attack

This is a network attack in which a single node carries multiple identities, causing chaos in the network. Sybil attacks result in contradictory routing paths, so data packets get lost, duplicated, or disrupted, and the effectiveness of fault tolerance is reduced.

Denial-of-service attack

A denial-of-service attack floods the network with useless traffic, making the network unavailable to authentic user traffic. This attack can happen at both the network and application levels.

11.6.3 Defensive mechanisms

A standard set of protections is used to guard the system against cyber attacks, including IDSs, firewalls, encryption, antivirus software, regular software updates, and access management. Blockchain technology, considered state of the art, is now also implemented in healthcare IoT, and fault tolerance is one of the essential functions incorporated in the healthcare domain.

Key management

Key management plays a major role in cryptography. All the data passed through the network must be encrypted. The cryptographic process, which involves key generation, key sharing, and encoding and decoding using public- or private-key encryption, should ensure that no malicious activity happens during data transfer from the source to the destination.

User/device authentication and authorization

For authentication and authorization, every user or device is given a unique ID and login. When a device enters the network, its login has to be authenticated with the right credentials, so no malicious attacker can easily tamper with the network or any device connected to it. Authorization concerns access rights and control: each node and each user is given only the required level of access to information. For example, a server that monitors doctors' in and out times is not given access to patients' details. The access rights and privileges are predetermined and associated with the login credentials.

Intrusion detection

An IDS is installed in a network to monitor any abnormal activity and to inform the administrator so that the malicious IP address can be blocked [29]. Sinkhole, blackhole, and grayhole attacks all involve a compromised network node. The functioning of an IDS is based on the following:


- It identifies the signs of an intruder.
- It provides information about the location (i.e., the suspected IP address) of the intruder.
- It logs the information of ongoing activities.
- It tries to stop malicious activities if they are detected.
- It reports the information of malicious activities to the administrator (i.e., intrusion behavior that is either an active attack or a passive attack).
- It also provides information about the type of intrusion (for example, which type of attack: Mirai or Echobot).
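The monitoring loop described by these points can be sketched as a minimal rule-based detector. The flooding threshold, event format, and IP addresses are hypothetical:

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ids")

# Hypothetical rule: more than 5 requests from one IP in an observation
# window is treated as a sign of flooding (denial-of-service behavior).
FLOOD_THRESHOLD = 5

def inspect(events, blocklist):
    """events: iterable of (source_ip, action) tuples observed on the network."""
    alerts = []
    per_ip = Counter(ip for ip, _ in events)
    for ip, count in per_ip.items():
        if count > FLOOD_THRESHOLD:
            log.info("flooding from %s (%d events)", ip, count)  # log ongoing activity
            blocklist.add(ip)  # try to stop the malicious activity
            alerts.append({"ip": ip, "type": "flooding", "count": count})  # report to admin
    return alerts

blocklist = set()
events = [("10.0.0.9", "POST")] * 8 + [("10.0.0.2", "GET")] * 2
alerts = inspect(events, blocklist)
print(alerts)
```

A production IDS would of course use far richer signatures and anomaly models; the point here is only the identify–log–block–report cycle listed above.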

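The user/device authentication step described above can be sketched as an HMAC-based challenge–response exchange; the device ID and key provisioning shown are illustrative assumptions:

```python
import hashlib
import hmac
import secrets

# Hypothetical registry of pre-shared keys provisioned at device enrollment.
DEVICE_KEYS = {"infusion-pump-01": secrets.token_bytes(32)}

def make_challenge():
    """Server side: issue a fresh random nonce for each login attempt."""
    return secrets.token_bytes(16)

def device_response(device_id, challenge):
    """Device side: prove possession of the key without revealing it."""
    key = DEVICE_KEYS[device_id]
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(device_id, challenge, response):
    """Server side: recompute the MAC and compare in constant time."""
    expected = hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
resp = device_response("infusion-pump-01", challenge)
print(verify("infusion-pump-01", challenge, resp))          # genuine device
print(verify("infusion-pump-01", challenge, b"\x00" * 32))  # forged response
```

Because every login uses a fresh nonce, a captured response cannot be replayed later, which matters on networks where sensor traffic is easy to sniff.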

Applications of Computational Intelligence in Multi-Disciplinary Research

FIGURE 11.4 Implementation of fault tolerance in IoT nodes.

Fault tolerance
The main purpose of the fault tolerance functionality is to check the completeness of the data: if any data is lost during transmission, the fault tolerance mechanism retrieves it. In fog computing, the storage nodes and processing nodes are different. A number of storage nodes connect to a processing node through different relay nodes in between, as shown in Fig. 11.4. It is likely that some relay nodes fail, due to node outage, malfunctioning, or physical destruction; in such scenarios, the storage node has to choose another path to the processing node. Hence every storage node must have more than one potential path to the processing node. In this way, the system need not wait for the self-recovery of the broken node and instead reroutes the packets through a different node [30]. When data packets can be rerouted, the system should also be able to trace them back to their source. This is achieved by designating a unique index for every storage node and attaching a timestamp to each data packet. Every data packet carries the index of the storage node along with a timestamp, so when a processing node receives a packet, it can identify the source; even after rerouting, the index and the timestamp allow the source to be traced. And if the data packets are received out of order, the timestamps allow the original sequence to be restored.

Blockchain technology
Blockchain technology can be viewed as an electronic ledger: all the transactions made by the users or devices connected to the blockchain are recorded and maintained in the ledger. It was initially used for cryptocurrency, i.e., Bitcoin ledger maintenance; the same technology is now used in healthcare IoT. As mentioned in Ref. [32], a private blockchain network is created within the healthcare IoT network to provide another level of authentication and security. The patient's data along with their medical history is fed to the system, so the patient is not required to carry hardcopies of reports. The data is completely decentralized to avoid failure and data loss and to provide faster recovery. The data is also transparent: whenever a third party attempts a change in the database, a notification is sent to all the members of the group, so everyone is aware of any modification made by any member. An additional level of security can be provided for data transfer based upon a mutually agreed security parameter, and any modification in the database is accepted only upon approval of all members.
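The index-and-timestamp scheme described under fault tolerance can be sketched as follows; the path names and sensor payloads are made up for illustration:

```python
import itertools

class StorageNode:
    """A storage node tags every packet with its unique index and a
    monotonically increasing timestamp, as described in the text."""
    def __init__(self, index):
        self.index = index
        self._clock = itertools.count()

    def emit(self, payload):
        return {"node": self.index, "ts": next(self._clock), "data": payload}

def choose_path(paths):
    """Pick the first healthy route: reroute immediately instead of
    waiting for a broken relay to self-recover."""
    for name, healthy in paths:
        if healthy:
            return name
    raise RuntimeError("no route to processing node")

def reassemble(received):
    """Processing node: restore packet order from the timestamps."""
    return [p["data"] for p in sorted(received, key=lambda p: p["ts"])]

node = StorageNode(index=7)
packets = [node.emit(v) for v in ("hr=72", "hr=75", "hr=71")]
# Primary relay is down, so the alternate path carries the traffic.
print(choose_path([("relay-A", False), ("relay-B", True)]))  # relay-B
# Packets arrive out of order; timestamps recover the sequence.
print(reassemble([packets[2], packets[0], packets[1]]))
```

Because each packet carries its origin index, the processing node can attribute data correctly no matter which relay path delivered it.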

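The tamper-evidence property of such a ledger can be illustrated with a minimal hash-chained structure (a sketch with invented records, not the private blockchain of Ref. [32]):

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Append-only ledger entry: each block commits to its predecessor."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    """Any member of the network can recompute the hashes; a silent edit
    to an earlier record breaks every later link."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if block["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64
for record in ("admit patient 42", "blood test: normal", "discharge"):
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(verify_chain(chain))  # True
chain[1]["record"] = "blood test: tampered"
print(verify_chain(chain))  # False: the modification is detected
```

Real blockchain deployments add consensus and access control on top, but this chaining step is what makes unilateral, unnoticed edits to a patient record infeasible.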

11.7 Covid 19—how IoT rose to the global pandemic

The world was swept by an unprecedented challenge in the form of Covid 19, the outbreak of which pushed all of humankind into a pandemic [33]. Although widespread research has been happening in the healthcare domain with the help of the latest advancements in IoT and AI, science still fell short by a huge margin. It was in this situation that the reality came to light that only 5% of the work delivered using IoT is on smart healthcare (Fig. 11.5). However, IoT did find explosive growth in other industries such as manufacturing, smart cities, smart energy, and wearable consumer electronics. The figure shows the global share of IoT projects in various segments; the research by IoT Analytics showing the 2018 rankings can be found in Ref. [34].

IoT in healthcare ecosystem Chapter | 11


FIGURE 11.5 Top IoT segments. IoT Analytics.

11.7.1 About Covid 19
Covid 19 is an acute respiratory disease caused by a novel coronavirus, SARS-CoV-2, transmitted in most instances through respiratory droplets, direct contact with affected cases, and contaminated surfaces/objects. Though the virus survives on environmental surfaces for varied periods of time, it is easily inactivated by chemical disinfectants [33]. The deadly fact about Covid 19 is that a patient can be infected but show no symptoms during the first 2 weeks while still spreading the disease. This asymptomatic spreading is a challenge for which social distancing was found to be the most viable solution. The fear was that community spread and the outbreak of the disease to a larger population would jeopardize healthcare facilities, as there would be a huge gap between demand and supply. IoT helped handle most of these challenges effectively [35]. Several government initiatives were carried out that involved IoT; a few such initiatives are mentioned below.

11.7.2 Decoding the outbreak and identifying patient zero
IoT, with the help of a geographic information system (GIS), can help lead the way to patient zero and identify and stop community spread. The data from infected patients' mobile phones, when overlaid on the GIS, helps track movement and provides two-way information. Upstream, patient zero can be detected by researching how the patient caught the infection, where they had traveled, and whom they had met; downstream, it helps identify everyone who came in contact with the infected person.
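The downstream tracing idea can be sketched as a proximity query over location pings; the names, coordinates, and thresholds below are invented for illustration and are not epidemiological parameters:

```python
from math import hypot

# Hypothetical pings: (person, time, x, y) from phone location data
# overlaid on a GIS grid.
PINGS = [
    ("patient-zero", 10, 0.0, 0.0),
    ("alice",        11, 1.0, 0.5),
    ("bob",          50, 0.5, 0.5),
    ("carol",        12, 40.0, 40.0),
]

def contacts(pings, index_case, max_dist=2.0, max_dt=5):
    """Find everyone who was near the index case in both space and time."""
    case = [p for p in pings if p[0] == index_case]
    found = set()
    for _, t0, x0, y0 in case:
        for name, t, x, y in pings:
            if name == index_case:
                continue
            if abs(t - t0) <= max_dt and hypot(x - x0, y - y0) <= max_dist:
                found.add(name)
    return sorted(found)

print(contacts(PINGS, "patient-zero"))  # ['alice']
```

Note that "bob" was at the same place but 40 time units later, and "carol" was simultaneous but far away; only spatiotemporal overlap counts as a contact.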

11.7.3 Quarantined patient care
Although IoT is well integrated with telemetry, through which patients' crucial biometric data like heart rate and blood pressure are recorded and monitored, Covid 19 posed new challenges. Anyone could be affected by the disease: a person without any history of medical conditions, or someone who is not wearing any IoT-enabled body wearables. In such cases, monitoring patients gets difficult. Due to the shortage of frontline healthcare workers and the risk of cross-infection, drones are used. A drone is flown to the quarantined patient's residence, and all the patient needs to do is come



to their balcony, where the drone reads their temperature with an infrared thermometer (Fig. 11.6) and uploads the data to the cloud [34]. The patient can report further conditions using their mobile phone, and the collective data can be used to determine the patient's current condition.

11.7.4 Public surveillance
Drones are employed for public surveillance across the globe to ensure quarantine, the effectiveness of curfews, and social distancing [34]. Gathering in groups is a potential threat, and drones enabled with IoT sensors are used to check on such gatherings. Drones are used to screen people in crowds, determining whether they are wearing masks or exhibiting abnormal conditions, as well as recording their temperatures.

11.7.5 Safeguarding hygiene
A start-up, Soapy Care [36], has developed what it calls "smart sinks" (Fig. 11.7). With the tag line "Your hygiene is in your hands," these standalone microstations (similar to water coolers) use IoT sensors to dispense the exact amount of soap or sanitizer that should be used, as well as the amount and temperature of water needed to properly wash your hands. With different settings recommended for homes, restaurants, and day care or senior centers, these handwashing stations ensure that no one takes any shortcuts on their, and ultimately your, hygiene. Soapy Care's systems can already be found in the United States, Africa, and Asia, and since the Covid 19 outbreak, their phones have not stopped ringing.

11.7.6 IoT and robotics
To reduce the burden on overstrained frontline healthcare workers, contagion robots (Fig. 11.8) are used to deliver multiple services at hospitals:

- Delivering food and medicines to the patients
- Monitoring patients' body temperatures
- Disinfecting the designated areas

FIGURE 11.6 Drones to monitor patients and deliver medicines.

FIGURE 11.7 Smart sink.



FIGURE 11.8 Robots.

FIGURE 11.9 Disinfectant tunnels.


- Providing a means for video conferencing between doctors and patients
- Detecting people without masks

11.7.7 Smart disinfection and sanitation tunnel
Disinfection tunnels have been placed in public places: a person walks through a tunnel equipped with an IoT sensor-based disinfectant spray. The right amount of disinfectant chemical is sprayed as the person passes through the tunnel, thereby killing the virus if the person has come into contact with it anywhere (Fig. 11.9). Drones have also been used to spray disinfectants in places that cannot be reached by people or where healthcare workers are in short supply. Going forward, IoT in the healthcare domain will see a mammoth boost, as the Covid 19 crisis has exposed inefficient processes and technology bottlenecks. The organizations that came up with ad hoc fixes will want to streamline their inventions in the future.

11.7.8 Smart masks and smart medical equipment
Wearing a mask prevents the spread of infection and protects individuals from contracting airborne infectious droplets. Masks designed with some intelligence are known as smart masks [37]. They are built with features to



FIGURE 11.10 Smart mask.

inactivate the spread of virus-laden aerosols in a bidirectional way (Fig. 11.10). They have sensors to assess the health of the patient, and some masks are also designed to amplify the voice.


11.8 Future of IoT in healthcare

IoT is one of the keystone players driving the future of healthcare. The future of IoT depends very much on its enabling technologies, one of which is wireless communication. The latest milestone in wireless communication is 5G, considered the benchmark of long-distance wireless communication. The combination of IoT and 5G is expected to pave the way for explosive growth in the healthcare domain. Another enabling technology is data analytics: AI, with its deep learning technology, has a promising outlook for IoT in healthcare [38].

11.8.1 IoT and 5G
With the rollout of the fifth generation (5G) around the corner, many innovations and blueprints to match IoT to the 5G revolution are in the pipeline. 5G is expected to unlock the full potential of IoT and will spur innovations across many industries, making IoT an integral part of our economy and lifestyle. It is expected that there will be 550 million 5G subscriptions within 2 years of the rollout. 5G is believed to be 100 times faster than 4G, with its key differentiators, low latency combined with high-speed connectivity and ubiquitous coverage, offering the cutting edge of the technology. The current challenges in the IoT healthcare landscape can be effortlessly addressed using 5G. Along with that, it will give a new definition to "remote medical care" with zero latency. Patient monitoring and critical clinical procedures, where even a split second matters, will be achieved flawlessly with 5G. 5G is capable of streaming 4K video without buffering, so any application that requires real-time monitoring can get rid of local storage. High-definition video can be broadcast live without a fraction of a second's delay in transmission, in other words, with zero latency. We may imagine a surgery happening smoothly with the medical team and the patient thousands of miles apart.

11.8.2 IoT and artificial intelligence
Both AI and IoT are booming technologies, with inventions happening at a rapid rate. IoT gets a new facelift with every new invention in AI, and vice versa [38]. As the analogy goes, IoT is the nervous system and AI is the brain that controls and acts upon it; the combination of these two powerful paradigms enables limitless innovation. Machine learning and deep learning, the two powerful pillars of AI, help IoT in a variety of areas. Business intelligence is applied to the huge cloud of data generated and stored by IoT devices. The AI identifies patterns, detects



anomalies in the data generated by sensors, and makes predictions with high accuracy. In Ref. [39], patients' EEG pathology is monitored and sent to the cloud, where deep learning is applied and real-time decisions are made by classifying the data as normal or requiring emergency help. Other AI technologies, such as speech recognition and computer vision, can help extract insights from data that used to require human review. AI applications for IoT enable companies to avoid unplanned downtime, increase operating efficiency, spawn new products and services, and enhance risk management.
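As a stand-in for the deep-learning classifiers cited here, even a simple statistical rule illustrates what anomaly detection on a sensor stream means; the window size, threshold, and heart-rate values are illustrative:

```python
from statistics import mean, stdev

def anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate strongly from the recent baseline
    (a z-score rule; real systems use learned models instead)."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Simulated heart-rate stream with one abnormal spike at index 10.
stream = [72, 71, 73, 72, 74, 73, 72, 71, 73, 72, 140, 72, 73]
print(anomalies(stream))  # [10]
```

A learned model would additionally capture patient-specific baselines and multi-sensor correlations, which is where the deep networks of Ref. [39] earn their keep.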



IoT in healthcare is still in the budding phase, and innumerable developments are expected in the near future. With the advent of 5G and more research and development in IoT enabling technologies like wireless communication, cloud storage, and big data analytics, explosive growth is expected. Healthcare IoT is experiencing growing demand owing to the current pandemic, which is redefining healthcare with remote diagnosis, monitoring, and connected devices. This chapter discussed the different applications and implementation models along with the new fog and edge computing technologies, which outperform the traditional cloud model; the major challenges faced by IoT in the healthcare domain; and the security practices and issues in the IoT platform. The chapter also discussed the general security attacks that happen in the IoT environment. Furthermore, it brought to light how IoT rose to tackle the current pandemic and fight against the deadly Covid 19. Finally, the future line of growth of IoT along with its enabling technologies was discussed.

References
[1] H. Habibzadeh, K. Dinesh, O. Rajabi Shishvan, A. Boggio-Dandry, G. Sharma, T. Soyata, A survey of Healthcare Internet of Things (HIoT): a clinical perspective, IEEE Internet of Things Journal 7 (1) (2020) 53–71.
[2] I.K. Poyner, R.S. Sherratt, Improving access to healthcare in rural communities—IoT as part of the solution, in: 3rd IET International Conference on Technologies for Active and Assisted Living (TechAAL 2019), London, UK, 2019, pp. 1–6, doi: 10.1049/cp.2019.0104.
[3] R.K. Pathinarupothi, P. Durga, E.S. Rangan, IoT-based smart edge for global health: remote monitoring with severity detection and alerts transmission, IEEE Internet of Things Journal 6 (2) (2019) 2449–2462.
[4] ,
[5] S. Sarkar, R. Saha, A futuristic IOT based approach for providing healthcare support through E-diagnostic system in India, in: 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, 2017, pp. 1–7, doi: 10.1109/ICECCT.2017.8117810.
[6] M. Ganesan, N. Sivakumar, IoT based heart disease prediction and diagnosis model for healthcare using machine learning models, in: 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 2019, pp. 1–5, doi: 10.1109/ICSCAN.2019.8878850.
[7] I. Bisio, C. Garibotto, F. Lavagetto, A. Sciarrone, When eHealth meets IoT: a smart wireless system for post-stroke home rehabilitation, IEEE Wireless Communications 26 (6) (2019) 24–29.
[8] S. Nooruddin, M.M. Islam, F.A. Sharna, An IoT based device-type invariant fall detection system, Internet of Things, Elsevier, 2020.
[9] O. Al-Mahmud, K. Khan, R. Roy, F. Mashuque Alamgir, Internet of Things (IoT) based smart health care medical box for elderly people, in: 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 2020, pp. 1–6, doi: 10.1109/INCET49848.2020.9153994.
[10] H. Tsai, C.H. Tseng, L. Wang, F. Juang, Bidirectional smart pill box monitored through internet and receiving reminding message from remote relatives, in: 2017 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), Taipei, 2017, pp. 393–394, doi: 10.1109/ICCE-China.2017.7991161.
[11] K. Serdaroglu, G. Uslu, S. Baydere, Medication intake adherence with real time activity recognition on IoT, in: 2015 IEEE 11th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Abu Dhabi, 2015, pp. 230–237, doi: 10.1109/WiMOB.2015.7347966.
[12] Autobed, ,
[13] U. Garg, K.K. Ghanshala, R.C. Joshi, R. Chauhan, Design and implementation of smart wheelchair for quadriplegia patients using IOT, in: 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 2018, pp. 106–110, doi: 10.1109/ICSCCC.2018.8703354.
[14] A. Rajith, S. Soki, M. Hiroshi, Real-time optimized HVAC control system on top of an IoT framework, in: 2018 Third International Conference on Fog and Mobile Edge Computing (FMEC), Barcelona, 2018, pp. 181–186, doi: 10.1109/FMEC.2018.8364062.
[15] S.U. Amin, M.S. Hossain, G. Muhammad, M. Alhussein, M.A. Rahman, Cognitive smart healthcare for pathology detection and monitoring, IEEE Access 7 (2019) 10745–10753.
[16] T. Edoh, Internet of Things in Emergency Medical Care and Services, 2018, doi: 10.5772/intechopen.76974.
[17] M.W. Woo, J.W. Lee, K.H. Park, A Reliable IoT System for Personal Healthcare Devices.



[18] M.G.R. Alam, S.F. Abedin, S.I. Moon, A. Talukder, C.S. Hong, Healthcare IoT-based affective state mining using a deep convolutional neural network, IEEE Access 7 (2019) 75189–75202.
[19] J. Leng, Z. Lin, P. Wang, Poster abstract: an implementation of an Internet of Things system for smart hospitals, in: 2020 IEEE/ACM Fifth International Conference on Internet-of-Things Design and Implementation (IoTDI), Sydney, Australia, 2020, pp. 254–255, doi: 10.1109/IoTDI49375.2020.00034.
[20] V. Ahmadi, S. Benjelloun, M. El Kik, T. Sharma, H. Chi, W. Zhou, Drug governance: IoT-based blockchain implementation in the pharmaceutical supply chain, in: 2020 Sixth International Conference on Mobile and Secure Services (MobiSecServ), Miami Beach, FL, USA, 2020, pp. 1–8, doi: 10.1109/MobiSecServ48690.2020.9042950.
[21] M. Saravanan, A.M. Marks, MEDIBOX—IoT enabled patient assisting device, in: 2018 IEEE 4th World Forum on Internet of Things (WF-IoT), Singapore, 2018, pp. 213–218, doi: 10.1109/WF-IoT.2018.8355207.
[22] S.K. Sood, I. Mahajan, IoT-fog-based healthcare framework to identify and control hypertension attack, IEEE Internet of Things Journal 6 (2) (2019) 1920–1927.
[23] K. Wang, Y. Shao, L. Xie, J. Wu, S. Guo, Adaptive and fault-tolerant data processing in healthcare IoT based on fog computing, IEEE Transactions on Network Science and Engineering 7 (1) (2020) 263–273.
[24] P. Pace, G. Aloi, R. Gravina, G. Caliciuri, G. Fortino, A. Liotta, An edge-based architecture to support efficient applications for healthcare industry 4.0, IEEE Transactions on Industrial Informatics 15 (1) (2019) 481–489.
[25] A.F. Subahi, Edge-based IoT medical record system: requirements, recommendations and conceptual design, IEEE Access 7 (2019) 94150–94159.
[26] M.A. Khan, An IoT framework for heart disease prediction based on MDCNN classifier, IEEE Access 8 (2020) 34717–34727.
[27] B. Sargunam, S. Anusha, IoT based mobile medical application for smart insulin regulation, in: 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, India, 2019, pp. 1–5, doi: 10.1109/ICECCT.2019.8869227.
[28] T.A. Ahanger, A. Aljumah, Internet of Things: a comprehensive study of security issues and defense mechanisms, IEEE Access 7 (2019) 11020–11028.
[29] M. Wazid, A.K. Das, J.J.P.C. Rodrigues, S. Shetty, Y. Park, IoMT malware detection approaches: analysis and research challenges, IEEE Access 7 (2019) 182459–182476.
[30] O.S. Albahri, et al., Fault-tolerant mhealth framework in the context of IoT-based real-time wearable health data sensors, IEEE Access 7 (2019) 50052–50080.
[31] ,
[32] K. AnithaKumari, R. Padmashani, R. Varsha, V. Upadhayay, Securing Internet of Medical Things (IoMT) using private blockchain network, in: S.L. Peng, S. Pal, L. Huang (Eds.), Principles of Internet of Things (IoT) Ecosystem: Insight Paradigm. Intelligent Systems Reference Library, vol. 174, Springer, 2020.
[33] ,
[34] ,
[35] V. Chamola, V. Hassija, V. Gupta, M. Guizani, A comprehensive review of the COVID-19 pandemic and the role of IoT, drones, AI, blockchain, and 5G in managing its impact, IEEE Access 8 (2020) 90225–90265.
[36] Smart Sink, ,,7340,L-3799860,00.html..
[37] ,
[38] B. Mohanta, P. Das, S. Patnaik, Healthcare 5.0: a paradigm shift in digital healthcare system using artificial intelligence, IOT and 5G communication, in: 2019 International Conference on Applied Machine Learning (ICAML), Bhubaneswar, India, 2019, pp. 191–196, doi: 10.1109/ICAML48257.2019.00044.
[39] S.U. Amin, M.S. Hossain, G. Muhammad, M. Alhussein, M.A. Rahman, Cognitive smart healthcare for pathology detection and monitoring, IEEE Access 7 (2019) 10745–10753.

Index
Note: Page numbers followed by "f" and "t" refer to figures and tables, respectively.

A
Ab initio methods, 132
Accelerometer, 188
Access network terminals, 88
Acetylcholinesterase (AChE), 150
Activated ingredient (AI), 150–151
Activity understanding, 109
Adenine, 68
Adenosine (A), 17
Advanced optimization techniques, 52–54
Affective computing, 189
Agglomeration, 62
Agriculture, 143–144
  nanofarming, 143–144
  applications of nanotechnology in agriculture, 144f
Alchemy, 131–132
Algorithm, 72–78
Alternaria alternata, 150–151
Analog input (AI), 98
Analogical reasoning, 165
Analytical hierarchy process, 177
Android-based malware classification methodologies, 33
Annealing, 53
Application programming interface (API), 29, 38
Application protocol data unit (APDU), 106
Application service data unit (ASDU), 96
Applications of computational nanotechnology, 137–139
Architectural Consultant Selection System (ACSS), 177
Arginine golden nanocarbons (ArgNPs), 145–146
Artificial intelligence (AI), 30, 188, 202–203
  with fog, 63
  search techniques, 111
Artificial neural networks (ANNs), 17, 29–30, 33, 129–130, 136
Ascendant path, 88
Aspergillus fumigatus, 144–145
Aspergillus flavus, 144–145, 150–151
Asynchronous transfer mode (ATM), 93
Attacks on IoT devices, 196–197
  blackhole attack, 196
  denial-of-service attack, 197
  selecting forwarding attack, 197
  sinkhole attack, 196
  sybil attack, 197

  wormhole attack, 197
Authentication mechanism, 68
Autobed (robotically controlled bed), 188
Average ratio (AVR), 58

B
Backward chaining, 166
Bandwidth, 102
Baseband layer, 67
Bayesian networks, 19, 29–30
Bees algorithm, 59
Behavioral data, 33
Behavioral malware detection in deep learning, 33–35, 34f, 35f
  data required for malware detection, 34t
  malicious traffic detection using ML, 33f
Binary statistical image feature (BSIF), 34
Biometric features, 1
Bitter melon fruits (Momordica charantia), 148–149
Blackhole attack, 196
Blind search. See Uninformed search
Blockchain technology, 190, 198
Bluetooth, 67
  plaintext-to-ciphertext conversion process, 71–78
  security architecture, 67–69
  survey of literature, 69–71
    DNA codon, 70f
    RNA codon, 70f
  technology, 69
BodyEdge framework, 192
Box (of variables), 50
Branch-and-bound search, 60, 62
Brassica juncea, 144–145
Bresenham's line drawing algorithm, 115
Byte code n-gram features, 36–37

C C (set of concepts), 170, 177 C#(programming tools), 165 Camera coverage modeling, 114115 geometric coverage model, 112113 placement problem, 109110, 112113 from artificial intelligence perspective, 111112 efficiency of algorithms, 122

    performance of algorithms, 123
    using artificial intelligence search, 116–120
  visual sensors modeling, 113–115
  visual surveillance systems, 110–111
  visibility analysis, 115
Carbon nanotubes, 138
Cas proteins
  CRISPR/Cas9 network, 145–146
Case-based reasoning, 166
CASIA-IRIS-V4 dataset, 11
CASIA-Iris-V1 dataset, 6, 11
Cells, 60, 68
Central dispatchers (CDs), 97
Challenge–response scheme, 68
Chamaemelum nobile flowers, 152
Checking consistency of knowledge base, 164
Chromosomes, 53, 68, 135–136
Ciphertext, masking of, 75–76
Circular edge detection method, 12
Classical optimization techniques, 52
Classification, 109
Classification and Regression Trees (CARTs), 24
Climate change, 149
Clostridium aerobic, 144–145
Cloud, 191–192
Cloud computing, 47
  case studies, 60–63
    floorplan optimization, 60–62
    Gondwana (optimization of drinking water distribution system), 63
  with emphasis on fog/edge computing, 48–49
    fog architecture, 48f
  optimization, 48–54
  optimizing fog resources, 57–60
  scope of advancements and future research, 63
    advantages and disadvantages of HC, SA, and GA, 63t
    annotations for framework of fog computing, 64t
    optimization problem types and description, 64t
  understanding fog/edge computing, 54–57
Clustering, 19
Code division multiple access (CDMA), 89
Collective area, 60
Combination key, 67
Communication, 62




Complete neglect of differential overlap (CNDO), 132
Computational intelligence (CI), 17, 135–137
  artificial neural networks, 136
  fuzzy system, 136–137
  genetic algorithms, 135–136
Computational modeling, 130–131
Computational nanotechnology, 130–137
  applications of, 137–139
  computational intelligence, 135–137
  high-performance computing, 135
  molecular modeling, 131–133
  nanodevice simulation, 133
  nanoinformatics, 133–135
Conceptual knowledge, 168
Connectivity, 92–93
Constrained optimization, 51
Constraints, 50
  basis of, 51
Consultancy system for designing housing architecture, 174–176, 175f
  ability to solve exercises of methods, 176t
  designing inference engine of consultancy system, 177–180
  organizing knowledge base of consultancy system, 176t, 177
  testing, 180–183, 180f
Control of plant pests, 152
Convolutional neural networks (CNNs), 30, 40, 193
  algorithm, 3
  architecture of CNNs for malware detection, 41–42
  classification using, 41–42
    common architecture of CNN, 42f
    evaluation, 42
    malware detection system, 41f
    preprocessing, 41
  comparative analysis of CNN approaches, 42, 43t
Copper oxide (CuO), 152
Coronavirus disease (2019) (Covid 19), 198–202
  decoding outbreak and identifying patient zero, 199
  IoT and robotics, 200–201
  IoT segments, 199f
  pandemic, 188, 195
  public surveillance, 200
  quarantined patient care, 199–200
  safeguarding hygiene, 200
  smart disinfection and sanitation tunnel, 201
  smart masks and smart medical equipment, 201–202
Cost function, 57
Counting information (CI), 98
CRISPR spacer, 145–146
CRISPR/Cas9 Single Lead RNA (sgRNA), 145–146
Crypt-intelligent cryptosystem
  methodology, 23–25
    block diagram, 24f, 25f
    classification of input message, 24f
    decryption scheme, 25

    DNA encryption mechanism, 24f
    encryption scheme, 24–25
  related work, 18–23
    background of DNA cryptography, 23
    genetic algorithm contributions in cryptology, 20–21
    machine learning contributions in cryptology, 18–20
    neural network contributions in cryptology, 21–23
Cryptanalysis
  applications of genetic algorithm in, 21
  ML in, 20
  neural networks in, 22
Cryptanalysts, 32
Cryptographic/cryptography, 17, 197
  analogy, 18–19
  applications of
    genetic algorithm in, 20–21
    neural networks in, 21–22
  attacks, 20
  hashing, 39
  keys, 20
  ML in, 19–20
Cryptology
  analysis of contribution of neural network in, 22–23
  analysis of existing contributions of
    genetic algorithms in, 21
    machine learning in, 20
  genetic algorithm contributions in, 20–21
  machine learning contributions in, 18–20
  neural network contributions in, 21–23
Customer premises equipment (CPE), 88
Cytosine (C), 17, 68

D
Data
  processing, 85
  representation, 35–36
  transfer speed, 95
Decision trees (DT), 18–19, 34–35
Decision variables, permissible values of, 52
Decryption, 77–78
  conversion of ciphertext without intron and promoter to plaintext, 77–78
  removal of
    introns, 77
    mask, 77
    promoter, 77
Deductive reasoning, 165
Deep convolutional neural networks, 194
Deep learning approaches, 29–30
  architecture, 193
    of CNNs for malware detection, 41–42
  behavioral analysis of malware detection, 33–35
  challenges and future research directions, 43
  comparative analysis of CNN approaches, 42
  digital forensics, 30–31
  with fog, 63

  machine learning techniques for malware analysis, 32–33
  for malware detection, 35–41
    dynamic analysis, 38
    feature extraction and data representation, 35–36
    hybrid analysis, 38
    image processing techniques, 38–41
    static analysis, 36–38
  malware evolution and taxonomy, 32
Defensive mechanism, 197–198
  blockchain technology, 198
  fault tolerance, 198
  intrusion detection, 197
  key management, 197
  user/device authentication and authorization, 197
Demand assignment multiple access (DAMA), 89
Denial of Service attack (DoS attack), 29, 197
Density functional theory (DFT), 132
Deployment of medical staff, 190
Descendent path, 88
Design variables, basis of nature of, 51
Desulfovibrio desulfuricans NCIMB 8307, 144–145
Deterministic nature of variables, basis of, 52
Deterministic optimization, 52
Difference of Gaussian (DoG), 34
Digital evidence integrity, 30–31
Digital forensics science, 30–31, 31f
Digital input (DI), 98
Digital Video Broadcast (DVB), 93
Digital Video Broadcasting—Return Channel via Satellite (DVB-RCS), 88
Digitized 2D space, 114
Discrete cosine transform (DCT), 3
Discrete optimization, 110
Discrete wavelet transform (DWT), 34
Distributed Denial of Service (DDoS), 29
DNA, 68
  codon, 70–71
  conversion, 72–73
  cryptography, 17
    analysis of existing work in, 23
    background of, 23
  molecules, 145
Double-hop connection, 89
Dracocephalum moldavica, 148–149
Drinking water distribution system (DWDS), 62
  Gondwana (optimization of drinking water distribution system), 63
Drugs management, 190
Dynamic analysis, 38
  comparison, 39t
  malware representation, 36f
Dynamic optimization, 51
Dynamic power consumption, 56

E
e-Science, 134
Earthworm optimization algorithm (EWA), 58


Edge computing, 54–57, 192–193. See also Cloud computing
  advantages, 192–193
    empowering edge computing, 193
  architecture, 192
    cloud, 192
    edge nodes, 192
    IoT nodes/applications, 192
    standard implementation model, 192f
  cloud computing with emphasis on, 48–49
Edge IoT gateway, 192
Edge nodes, 192
Efficiency enhancement of seed nanomaterials, 148–149
Electrocardiogram (ECG), 189, 194
Electrodermal activity (EDA), 194
Electroencephalogram (EEG), 189
Electromyography (EMG), 194
Electronic ledger, 198
Encryption, 71–72
  machinery, 24
Energy efficiency, 56
Engineering NP, 143–144
Enhancer, 69
Entropy analysis, 31
Euclidean distance (ED), 11
Evaluation phase, 42
Evidence sources, 30–31
Exchange rate, 98
Exclusive OR (XOR), 2
Extra data, 76–77

F
False acceptance rate (FAR), 3
False rejection rate (FRR), 3
Fast-Hessian detectors, 3
Fault tolerance, 198
  in IoT nodes, 198f
Feature extraction, 35–36
  classification, 36f
  scheme, 7–10
Feature map, 41–42
Feeder-link, 88
"Field knowledge", 159–160
Field production of seed nanomaterials, 148–149
Field-programmable gate arrays (FPGAs), 135
Fifth generation (5G), 202
Financial cost, 57
5G wireless networking, 47
FIX problems, 113
Floorplan Design Algorithm, 61
Floorplan optimization, 60–62, 61f
Fog, 47, 54
  artificial intelligence with, 63
  deep learning with, 63
  defining optimization problem for fog layer resources, 57–58
  network, 57
  nodes, 191
  resources, 57–60
    defining optimization problem for fog layer resources, 57–58

    optimization techniques, 58–60
Fog computing, 54–57, 191–192
  architecture, 191
    advantages, 192
    cloud, 191
    fog nodes, 191
    smart IoT devices/applications, 191
    standard implementation of fog computing model, 191f
  cloud computing with emphasis on, 48–49
  framework for, 55–57
  goal, 55
  prelude to framework, 54–55
Fog computing nodes (FCNs), 57
Forward chaining, 166
Forwarding attack, 197
4-API-call-grams, 38
Frequency division multiple access (FDMA), 89
Funcs (set of functions), 170–171, 177
Functional knowledge, 168
Fusarium
  F. graminearum, 150–151
  F. oxysporum, 144–145, 150–151
Fusion segmentation level (FSL), 34
Fuzzy inference systems, 137
Fuzzy numbers, 137
Fuzzy rule bases, 136–137
Fuzzy set, 137
Fuzzy systems, 129–130, 136–137

G
GAN executor, 39–40
Gateway traffic, 32–33
Gene, 68
General Packet Radio Service (GPRS), 86
Generate and test algorithm, 117–118
Genetic algorithms (GAs), 17, 53, 58, 129–130, 135–136
  contributions in cryptology, 20–21
    in cryptanalysis, applications of, 21
    in cryptography, applications of, 20–21
      operators of genetic algorithm, 21t
    in cryptology, analysis of existing contributions of, 21
Genetic code, 69
Geographic information system (GIS), 199
Geometric optimization programming, 51
Germination of seed nanomaterials, 148–149
Global Operation Center (GOC), 95
Global positioning system (GPS), 150
Gondwana (optimization of drinking water distribution system), 63
Gradient face-based normalization (GRF), 34
Graphics processing units (GPUs), 135
Greedy strategies, 119–120
Green nanoparticles and sources, synthesis of, 144–145
Gross bandwidth, 102
Growth process, 146–148
Guanine (G), 17, 68


H
H (set of hierarchical relations), 170
Haar wavelet transformation (HWT), 2
Hamming distance (HD), 11
Harris–Laplace detectors, 3
Hartree–Fock method (HF method), 132
Hashing, 39
  cryptographic, 39
  robust, 39
Healthcare
  ecosystem, 187
  industry, 187
  IoT-based state mining using deep convolutional neural networks, 194
    electrocardiogram, 194
    electrodermal activity, 194
    electromyography, 194
Heart disease prediction, 193–194
  IoT framework of heart disease prediction model, 193f
Heating, ventilation, and air conditioning (HVAC), 189
Helianthus annus, 144–145
Hessian–Laplace detectors, 3
Hierarchy, 159
High-performance computing, 135
Hill climbing strategy (HC strategy), 52, 119–120
Histones, 68
Hit ratio, 31
Hospital-centric IoT, 189–190
  deployment of medical staff, 190
  drugs management, 190
  real-time location of medical equipment, 190
  reducing charting time, 190
Housing architecture, 177
Hub, 89
Human Genome Project, 134
Human reasoning, 165–166
Human–computer interaction, 189
HWT–MLBP method, 3
Hybrid analysis, 38
HyperChem, 131–132

I
Image processing techniques, 38–41, 40f
Implicit knowledge. See Tacit knowledge
Inbound data stream, 91
Inductive reasoning, 165
Inference engine (IE), 160
  criteria of, 165
  design, 164–168, 173
    algorithm for solving problems in knowledge domain of solid geometry, 174f
    of consultancy system, 177–180
      functions of knowledge base, 178t
    process for, 164–165
    reasoning methods, 165–168
  of knowledge-based system, 163–168
  principles of, 165
  process for designing, 164–165
Informational object address (IOA), 96
Initialization key, 68
Input interface, 163
Input/output connections (I/O connections), 90
Insulator, 69
Integer optimization, 52
Intelligent computational nanotechnology, 129–130
Intelligent problem solver design for solving solid geometry at high school, 168–183
  inference engine, designing, 173
  knowledge base, organizing, 171–173
  knowledge domain, collecting, 168–169
  knowledge model, building, 170–171
  testing, 173–174
Intelligent system, 160
Interactive Disassembler (IDA), 35–36
Interactivity, 162
International Association of Fertilizer Industry (IFIA), 146–148
International Electrotechnical Commission (IEC), 95–96
Internet Control Message Protocol (ICMP), 97
Internet disruption, 195
Internet of Medical Things (IoMT), 194
Internet of Things (IoT), 29, 47, 190
  applications of, 188–190
    hospital-centric IoT, 189–190
    IoT benefitting health insurance companies, 190
    patient-centric IoT, 188–189
    pharmaceutical governance, 190
  attacks on IoT devices, 196–197
  challenges in healthcare IoT, 195–196
    adapting to remote healthcare and telehealth, 195
    data security, 195–196
    technology-oriented challenges, 195
  Covid 19, 198–202
  future of IoT in healthcare, 202–203
    IoT and 5G, 202
    IoT and artificial intelligence, 202–203
  implementation methodologies, 190–194
    edge computing, 192–193
    fog computing, 191–192
    healthcare IoT-based affective state mining, 194
    heart disease prediction, 193–194
  IoT-based device-type invariant fall detection system, 188
  IoT-based robotics, 188
  nodes/applications, 192
  paradigm, 30
  security issues and defense mechanisms and IoT, 196–198
    attacks on IoT devices, 196–197
    defensive mechanism, 197–198
  security requirements in healthcare IoT, 196
  technology, 188
Internet Protocol, 92
Intron(s), 69, 75
  addition, 74–75
    intron number generation, 74
    placing introns at positions, 75
    position to place introns, 74–75
  number generation, 74
  removal of, 77
Intrusion detection system (IDS), 197
IoMT-based Affective State Mining (IASM), 194
Iris feature extraction, 3
  feature extraction scheme, 7–10
    block diagram, 8f
    center element, 10f
    MLBP operation, 10f
    three-level HWT, 8f, 9f
    three-level wavelet decomposition, 9f
  iris localization, 4–5
    literature review, 5t
  iris normalization, 6
    illustrations of Daugman’s rubber sheet model, 7f
  matching results, 11
  performance evaluation, 11–13
    comparison of accuracy of proposed method, 12t
    comparison of results with existing methods, 13t
    description of datasets, 12t
    original iris image CASIA-IRIS-V1, 12f
    original iris image CASIA-IRIS-V4, 11f
    original iris image MMU dataset, 12f
  related works, 3–4
Iris recognition systems, 1, 2f

K
k-nearest neighbors (kNN), 34–35, 160
Kerckhoffs’s Principle, 20, 25
Klebsiella aerogens, 144–145
Knowledge base (KB), 159, 163–168
  checking consistency of knowledge base, 164
  design, 160, 163–164
    basic knowledge manipulations, 164
    organizing the knowledge base, 163–164, 171–173, 177
    unification of facts, 164
    updating knowledge base, 164
Knowledge domain, 168–169
  functions of solid geometry, 169t
  relations between concepts of solid geometry, 168t
Knowledge model, building, 170–171
  Hasse diagram, 170f
Knowledge representation, 163
Knowledge-based systems (KBS), 159–163
  applications, 168–183
    consultancy system for designing housing architecture, 174–176
    design intelligent problem solver for solving solid geometry, 168–183
  architecture, 160–162, 161f
  design, 160–163
  knowledge base and inference engine of, 163–168
  process for designing, 162–163
  related work, 160

L
Lagrange multiplier method, 52
Latency/delay time efficiency, 55–56
Learning model, 32
Linear discriminant analysis (LDA), 34
Linear predictive coding coefficients (LPCC), 34
Linear programming, 51
Link key, 67–68
Link manager, 67
Link privacy, 68
Local area network (LAN), 85
Local binary pattern (LBP), 2
Logical Link communication and Adaptation Protocol (L2CAP), 67
Long short-term memory model-2 (LSTM-2), 33
Low noise block downconverter (LNB), 92
Low-range gateway (LoRa gateway), 193–194

M
Machine learning (ML), 4, 17, 29–30. See also Deep learning
  algorithms, 36, 42
  analogy between machine learning and cryptography, 18–19
    decision tree, 19f
    example intrusion data, 18t
  analysis of existing contributions of, in cryptology, 20
  contributions in cryptology, 18–20
    in cryptanalysis, 20
    in cryptography, 19–20, 19f
  techniques for malware analysis, 32–33
Magnesium hydroxide (MgOH), 152
Magnesium oxide (MgO), 152
Malicious traffic detection, 32–33
Malware
  detection, 30–31
    deep learning strategies for, 35–41
  evolution and taxonomy, 32
  homology analysis system, 40
  images, 39
Maple (programming tool), 165
Mapping, 62
Mask, removal of, 77
Master key, 68
Medicago sativa, 144–145
Medium Frequency TDMA (MF-TDMA), 92
Medjacking, 196
Memory-based methods, 160
Merkle–Hellman cryptosystem, 21
Metadata, 34
Methionine (Met), 69
MIN problems, 113
MM3, 131–132
Model-based methods, 160
Modified deep convolution neural network (MDCNN), 193
Modified local binary pattern (MLBP), 2
Molecular docking, 138
Molecular dynamics (MD), 131–133
  simulations, 139
Molecular mechanics, 131–132
Molecular modeling, 131–133
Mucor plumbeus, 150–151
Multilayer perceptron (MLP), 136
Multiple access, 93–95
Multiple-objective optimization, 52
Multipoint-to-multipoint connectivity, 92
Multipoint-to-point connectivity, 92
Multiprotocol Label Switching (MPLS), 86
Multiradio, 192
Multitechnology communication, 192
Multiwall carbon nanotubes (MWCNT), 148

N
Naive Bayes (NB), 34–35
Nano-SiO2, 148
Nano-TiO2, 148
Nanobiotechnology, 145–146
Nanodevice simulation, 133
Nanofertilizers, 146–148
Nanoinformatics, 133–135
  for drugs, 138
Nanomaterials, 150
  engineering, 143
Nanoparticles (NP), 129, 143
NP-hard problem complexity, 110, 112
Nanoscience, 129–130, 148
Nanosensors, 150
Nanotechnology, 129
  computational nanotechnology, 130–137
  nanoscience and, 130
  in pesticides and fertilizers, 151–152
  in plant development and crop protection
    agriculture’s nanofarming, 143–144
    control of plant pests, 152
    germination, field production, and efficiency enhancement of seed nanomaterials, 148–149
    modern sustainable agriculture portal, 145–146
    nanofertilizers, 146–148
    nanosensors and nanomaterials, 150
    pesticide-based plant safety nanomaterials, 150–151
    plant sensory systems and responses to radical climate change, 149
    synthesis of green nanoparticles, 144–145
Nanotoxicology, 138–139
Nanotube-based sensors and actuators, 138
Net bandwidth, 102
Network vulnerabilities, 18
Neural cryptography, 21
Neural networks (NNs), 17
  analysis of contribution of, in cryptology, 22–23, 22f
  in cryptanalysis, 22
  in cryptography, 21–22
Nitrogen (N), 146–148
Nodes, 33–34
Noncoding genes, 68–69
Nonexpertise detective tools, 31
Nonlinear programming, 51
Nonoptimal control problems, 51
Nonseparable nature of variables, 51
Nuclear magnetic resonance spectrometry (NMR spectrometry), 139
Number of false rejections (NFR), 11–13

O
Objectives, 50, 52
Occlusion, 2
Olea europaea leaves, 152
Oligonucleotides, 145
Omron HearGuid-bp8000m (smart watch), 193–194
Onboard connectivity, 92–93
1D Log-Gabor filters, 3
One-time pads (OTPs), 20–21
Opcode
  n-gram features, 37–38
  sequences, 40
Optimal control problems, 51
Optimization, 47–48, 111
  problem, 49–52
    classification of, 50–52
    elements of, 50
    solution to, 52–54
  techniques, 49–54, 58–60
Outbound data stream, 91

P
P (set of criteria for searching solution), 178
Pareto-optimal solution, 55
Password processing, 85
Pathology and fatal viral/bacterial diseases, 189
Patient charting, 190
Patient-centric IoT, 188–189
  affective computing, 189
  critical and emergency patient care, 189
  food and workout monitoring, 189
  pathology and fatal viral/bacterial diseases, 189
  remote patient care, 188–189
Pattern problem, 167
Pattern traceability, 31
Pedagogy, 162
Penicillium expansum, 150–151
Pesticide-based plant safety nanomaterials, 150–151
Phanerochaete chrysoporium, 144–145
Pharmaceutical governance, 190
Pharmaceutical industry, 190
Phase intensive local pattern (PILP), 3
Phosphorus (P), 146–148
Physical layer, 67
Physical structure of problem, 51
Plaintext-to-ciphertext conversion process, 71–78
  algorithm, 72–78
    decryption, 77–78
    encryption, 72–73
    extra data, 76–77
    intron addition, 74–75
    masking of ciphertext, 75–76
    promoter addition, 73–74
  analysis, 78
  basic workflow, 71–72, 72f
    decryption, 72
    encryption, 72
Plant sensory systems and responses to radical climate change, 149
Point-to-multipoint connectivity, 92
Point-to-point connectivity, 92
Police forces, 30
Portable document file (PDF), 40
Portable executables (PEs), 36, 38
Potassium (K), 146–148
Power efficiency, 56
Power usage effectiveness (PUE), 56
Preprocessing, CNN, 41
Privacy protection, 29
Probability theory, 137
Process Modeling Software (PMS), 87
Processing delay (PRd), 55–56
Prokaryotic RNA, 145–146
Promoter(s), 69
  addition, 73–74
  generation, 73–74
  removal of, 77
Propagation delay (Pd), 55–56
Pros (set of solutions), 178
Protein
  formation, 71
  synthesis, 68–69
Pseudomonas stuzeri, 144–145
Punica granatum peels, 152
Python (programming tools), 165

Q
Quadriplegia patients, 188
Quanta, 131–132
Quantum methods, 132
Quantum-dot cellular automata simulator (QCA simulator), 133
Quarantined patient care, 199–200
  drones to monitor patients and deliver medicines, 200f
Queuing delay (Qd), 55–56

R
R (set of relations between concepts), 170
Radio frequency identification (RFID), 196
Random Access Memory (RAM), 30–31
Raspberry Pi, 188
Real optimization, 52
Real-time location of medical equipment, 190
Reasoning
  backward chaining, 166
  forward chaining, 166
  methods, 165–168
    with pattern and sample problems, 167–168
  process, 165
Recommendation system, 160
Regression, 18
Relational knowledge, 168
Relational measure (RM), 34
Remote healthcare, adapting to, 195
“Remote Medical Care”, 202
Remote patient care, 188–189
Remote terminal units (RTUs), 84–85
“Report by exception” protocol, 107
RGB-colored pixels, 40
Rhizopus stolonifer, 150–151
Ribonucleic acid (RNA), 69
  codon conversion, 72–73
  processing, 69
Robotics and IoT, 200–201
Robust hashing, 39
Rules (set of inference rules), 171, 177–178

S
Sample problem, 167
SARS-CoV-2 (coronavirus), 199
Satellite bandwidth
  algorithms, 99–104
  for estimating, 95–106
    case study, 98–99
    determining bandwidth required for data transmission, 95–98
    validation of bandwidth calculations, 104–106
Satellite communication
  systems, 87–89
  technologies, 84
Satellite networks, 87–88
Scaffold, 68
Schrödinger equation, 132
Security requirements in healthcare IoT, 196
  authentication, 196
  confidentiality, 196
  integrity, 196
Seed germination, 148
Segmentation, 109
Semiempirical method, 132
Semipermanent key, 67
Sensor Data Processing Framework, 191
Sensor space modeling, 114
Separable nature of variables, 51
Sequential recording of events, 85
Servers, 33
Service-level agreement (SLA), 94
Sesbania, 144–145
Session bandwidth, 95
Set theory, 137
Side-channel attacks, 20
Silencer, 69
Simulated annealing (SA), 53
Single walled carbon nanotubes (SWCNTs), 150
Single-hop connection, 89
Single-objective optimization, 52
Single-walled nanotubes (SWNTs), 138
Sinkhole attack, 196
Sloan Digital Sky Survey, 134
Small interfering RNA (siRNA), 145–146
Smart IoT devices/applications, 191
Smart masks, 201–202, 202f
Smart medical equipment, 201–202
Smartphone, 188
Soapy Care’s systems, 200
Solid geometry, 173
Spatial Pyramid Pooling technique (SPP technique), 41–42
“Star”-topology, 95
State space of problem, 112
Static analysis, 36–38
  byte code n-gram features, 36–37
  opcode n-gram features, 37–38
  PEs, 38
  static malware analysis, 37t
  static tools and features, 37t
  string feature, 38
Static power consumption, 56
Station query process, 99
Stochastic optimization, 52
String feature, 38
Supervisory control and data acquisition (SCADA), 84
  algorithm for estimating satellite bandwidth, 95–106
  challenges and future work, 106–107
  systems, 84–87
  very small aperture terminal networks, 87–95
Support vector machine (SVM), 29–30, 34–35, 194
Sustainability
  of agriculture, 143
  of ecosystem, 143
Sustainable cultivation, 143
Sybil attack, 197
Sybyl, 131–132

T
Tacit knowledge, 160
Technology-oriented challenges, 195
  incorrect results, 195
  need for specialized tool to handle diversified protocols, 195
  no planned downtime, 195
  remote places with lack of infrastructure and connectivity, 195
  risking patient’s life, 195
Telegram, 96
Telehealth, 188–189
  adapting to, 195
3-API-call-grams, 38
Thymine (T), 17, 68
Time division multiple access (TDMA), 89, 91
Time division multiplexing (TDM), 92
Time stamping, 31
Topological coverage model, 112–113
Tracking, 109
Trajectory optimization, 51
Transfer speed, 95
Transmission Control Protocol (TCP), 89
Transmission Control Protocol/Internet Protocol (TCP/IP), 95
Transmission delay (Td), 55–56
Transponder, 91
2D Log-Gabor filters, 3

U
Unconstrained optimization, 51
Unification of facts, 164
Uninformed search, 118–119
Unit key, 67
Updating knowledge base, 164
User interface, 163
Utility function, 57

V
Verticillium sp, 144–145
Very small aperture terminal networks (VSAT networks), 84, 87–95
  architecture, 89–92
  communication system, 106
  connectivity, 92–93
  multiple access, 93–95
  satellite communication systems, 87–89
Very-large-scale integration (VLSI), 60
Video acquisition, 109
Virtual private network (VPN), 85
Visual sensors modeling, 113–115
  camera coverage modeling, 114–115
  camera visibility analysis, 115
  sensor space modeling, 114
Visual surveillance systems, 110–111

W
Web-based interface, 188
Wi-Fi radio, 194
Wormhole attack, 197

X
XOR. See Exclusive OR (XOR)

Z
Zeolite, 148
Zinc oxide (ZnO), 152
Zygomaticus major activity (ZMA), 194