Handbook of Machine Learning for Computational Optimization: Applications and Case Studies (Demystifying Technologies for Computational Excellence) [1 ed.] 0367685426, 9780367685423

Technology is moving at an exponential pace in this era of computational intelligence. Machine learning has emerged as o…


English · Pages: 280 [295] · Year: 2021


Table of contents:
Cover
Half Title
Series Page
Title Page
Copyright Page
Table of Contents
Preface
Editors
Contributors
Chapter 1 Random Variables in Machine Learning
1.1 Introduction
1.2 Random Variable
1.2.1 Definition and Classification
1.2.1.1 Applications in Machine Learning
1.2.2 Describing a Random Variable in Terms of Probabilities
1.2.2.1 Ambiguity with Reference to Continuous Random Variable
1.2.3 Probability Density Function
1.2.3.1 Properties of pdf
1.2.3.2 Applications in Machine Learning
1.3 Various Random Variables Used in Machine Learning
1.3.1 Continuous Random Variables
1.3.1.1 Uniform Random Variable
1.3.1.2 Gaussian (Normal) Random Variable
1.3.2 Discrete Random Variables
1.3.2.1 Bernoulli Random Variable
1.3.2.2 Binomial Random Variable
1.3.2.3 Poisson Random Variable
1.4 Moments of Random Variable
1.4.1 Moments about Origin
1.4.1.1 Applications in Machine Learning
1.4.2 Moments about Mean
1.4.2.1 Applications in Machine Learning
1.5 Standardized Random Variable
1.5.1 Applications in Machine Learning
1.6 Multiple Random Variables
1.6.1 Joint Random Variables
1.6.1.1 Joint Cumulative Distribution Function (Joint CDF)
1.6.1.2 Joint Probability Density Function (Joint pdf)
1.6.1.3 Statistically Independent Random Variables
1.6.1.4 Density of Sum of Independent Random Variables
1.6.1.5 Central Limit Theorem
1.6.1.6 Joint Moments of Random Variables
1.6.1.7 Conditional Probability and Conditional Density Function of Random Variables
1.7 Transformation of Random Variables
1.7.1 Applications in Machine Learning
1.8 Conclusion
References
Chapter 2 Analysis of EMG Signals using Extreme Learning Machine with Nature Inspired Feature Selection Techniques
2.1 Introduction
2.2 Data Set
2.3 Feature Extraction
2.4 Nature Inspired Feature Selection Methods
2.4.1 Particle Swarm Optimization Algorithm (PSO)
2.4.2 Genetic Algorithm (GA)
2.4.3 Fire-Fly Optimization Algorithm (FA)
2.4.4 Bat Algorithm (BA)
2.4.5 Whale Optimization Algorithm (WOA)
2.4.5.1 Exploitation Phase
2.4.5.2 Exploration Phase
2.5 Extreme Learning Machine (ELM)
2.6 Results and Discussion
2.7 Conclusion
References
Chapter 3 Detection of Breast Cancer by Using Various Machine Learning and Deep Learning Algorithms
3.1 Introduction
3.1.1 Risk Factors for Breast Cancer
3.1.2 Screening Guidelines
3.1.3 Consequences of Misidentifying the Tumor
3.1.4 Materials and Methods
3.2 Model Selection
3.2.1 Logistic Regression
3.2.2 Nearest Neighbor
3.2.3 Support Vector Machine
3.2.4 Naive Bayes Algorithm
3.2.5 Decision Tree Algorithm
3.2.6 Random Forest Classification
3.3 Detection of Breast Cancer by Using Deep Learning
3.4 Conclusion
References
Chapter 4 Assessing the Radial Efficiency Performance of Bus Transport Sector Using Data Envelopment Analysis
4.1 Introduction
4.1.1 Background Work
4.2 Methodology Framework
4.2.1 DEA Background
4.2.2 New Slack Model
4.3 Performance Evaluation of Depots
4.3.1 Data Collection
4.3.2 Region-wise Classification of Depots
4.3.3 Input and Output Parameters
4.3.4 Empirical Results
4.3.5 Input Targets for Inefficient Depots
4.4 Conclusion
Acknowledgement
References
Appendix (A)
Chapter 5 Weight-Based Codes—A Binary Error Control Coding Scheme—A Machine Learning Approach
5.1 Introduction
5.2 Encoding
5.3 Decoding (Machine Learning Approach)
5.3.1 Principle of Decoding
5.3.2 Algorithm
5.4 Output Test Case
5.5 Conclusion
References
Chapter 6 Massive Data Classification of Brain Tumors Using DNN: Opportunity in Medical Healthcare 4.0 through Sensors
6.1 Introduction
6.1.1 Brain Tumor
6.1.2 Big Data Analytics in Health Informatics
6.1.3 Machine Learning (ML) in Healthcare
6.1.4 Sensors for Internet of Things
6.1.5 Challenges and Critical Issues of IoT in Healthcare
6.1.6 Machine Learning (ML) and Artificial Intelligence (AI) for Health Informatics
6.1.7 Health Sensor Data Management
6.1.8 Multimodal Data Fusion for Healthcare
6.1.9 Heterogeneous Data Fusion and Context-Aware Systems—a Context-Aware Data Fusion Approach for Health-IoT
6.1.10 Role of Technology in Addressing the Problem of Integration of Healthcare System
6.2 Literature Survey
6.3 System Design and Methodology
6.3.1 System Design
6.3.2 CNN Architecture
6.3.3 Block Diagram
6.3.4 Algorithm(s)
6.3.5 Our Experimental Results, Interpretation, and Discussion
6.3.6 Implementation Details
6.3.7 Snapshots of Interfaces
6.3.8 Performance Evaluation
6.3.9 Comparison with Other Algorithms
6.4 Novelty in Our Work
6.5 Future Scope, Possible Applications, and Limitations
6.6 Recommendations and Consideration
6.7 Conclusions
References
Chapter 7 Deep Learning Approach for Traffic Sign Recognition on Embedded Systems
7.1 Introduction
7.2 Literature Review
7.3 General Challenges
7.4 Proposed Solution
7.4.1 Hardware
7.5 Models
7.5.1 YOLOV3
7.5.2 Tiny-YOLOV3
7.5.3 Darknet Reference Model
7.6 Flowcharts
7.7 Key Features of the System
7.8 Technology Stack
7.9 Dataset
7.9.1 Labeling/Annotating the Dataset
7.10 Training the Model
7.11 Result
7.12 Future Scope
References
Chapter 8 Lung Cancer Risk Stratification Using ML and AI on Sensor-Based IoT: An Increasing Technological Trend for Health of Humanity
8.1 Introduction
8.1.1 Motivation to the Study
8.1.2 Problem Statements
8.1.3 Authors’ Contributions
8.1.4 Research Manuscript Organization
8.1.5 Definitions
8.1.6 Computer-aided Diagnosis System (CADe or CADx)
8.1.7 Sensors for the Internet of Things
8.1.8 Wireless and Wearable Sensors for Health Informatics
8.1.9 Remote Human’s Health and Activity Monitoring
8.1.10 Decision-Making Systems for Sensor Data
8.1.11 Artificial Intelligence (AI) and Machine Learning (ML) for Health Informatics
8.1.12 Health Sensor Data Management
8.1.13 Multimodal Data Fusion for Healthcare
8.1.14 Heterogeneous Data Fusion and Context-Aware Systems—a Context-Aware Data Fusion Approach for Health-IoT
8.2 Literature Review
8.3 Proposed Systems
8.3.1 Framework or Architecture of the Work
8.3.2 Model Steps and Parameters
8.3.3 Discussions
8.4 Experimental Results and Analysis
8.4.1 Tissue Characterization and Risk Stratification
8.4.2 Samples of Cancer Data and Analysis
8.5 Novelties
8.6 Future Scope, Limitations, and Possible Applications
8.7 Recommendations and Considerations
8.8 Conclusions
References
Chapter 9 Statistical Feedback Evaluation System
9.1 Introduction
9.2 Related Work
9.3 Types of Feedback Evaluation Systems
9.3.1 Questionnaire-Based Feedback Evaluation System (QBFES)
9.3.2 Star-Point-based Feedback Evaluation System (SBFES)
9.3.3 Text-Based Feedback Evaluation System (TBFES)
9.4 Statistical Feedback Evaluation System
9.4.1 Aspect Extraction
9.4.1.1 Feedback Collector
9.4.1.2 Feedback Preprocessor
9.4.1.3 Aspect Validator
9.4.2 Aspect Weight Estimation
9.4.3 Sentiment Evaluation
9.4.3.1 Sentiment Estimator
9.4.3.2 Sentiment Aggregator
9.4.4 Customized Evaluation
9.4.5 Aspect-Based Questionnaire Design
9.5 Result Analysis and Discussion
9.6 Conclusion
9.7 Future Work
References
Chapter 10 Emission of Herbal Woods to Deal with Pollution and Diseases: Pandemic-Based Threats
10.1 Introduction
10.1.1 Scenario of Pollution and Need to Connect with Indian Culture
10.1.2 Global Pollution Scenario
10.1.3 Indian Crisis on Pollution and Worrying Stats
10.1.4 Efforts Made to Curb Pollution World Wide
10.1.5 Indian Ancient Vedic Sciences to Curb Pollution and Related Diseases
10.1.6 The Yajna Science: A Boon to Human Race from Rishis and Munis
10.1.7 The Science of Mantra Associated with Yajna and Its Scientific Effects
10.1.8 Effect of Different Woods and Cow Dung Used in Yajna
10.1.9 Use of Sensors and IoT to Record Experimental Data
10.1.10 Analysis and Pattern Recognition by ML and AI
10.2 Literature Survey
10.2.1 Gist
10.2.2 Methodology Used in This Paper
10.2.3 Instruments and Data Set Used
10.2.4 The Future Scope Discussed
10.3 The Methodology and Protocols Followed
10.4 Experimental Setup of an Experiment
10.4.1 Airveda and Different Sensor-Based Instruments
10.5 Results and Discussions
10.5.1 Mango v/s Banyan (Bargad)
10.5.1.1 Mango
10.5.1.2 Bargad
10.6 Applications of Yagya and Mantra Therapy in Pollution Control and its Significance
10.7 Future Research Perspectives
10.8 Novelty of Our Research
10.9 Recommendations
10.10 Conclusions
References
Chapter 11 Artificial Neural Networks: A Comprehensive Review
11.1 Introduction
11.2 Activation Function
11.2.1 Linear Activation Function
11.2.2 Nonlinear Activation Function
11.2.2.1 Sigmoid (Logistic) Function
11.2.2.2 Tanh Activation Function
11.2.2.3 Rectified Linear Unit (ReLU) Function
11.3 Artificial Neural Network (ANN)
11.3.1 Supervised Learning
11.3.2 Unsupervised Learning
11.3.3 Reinforcement Learning
11.4 Types of Artificial Neural Network
11.4.1 Single-Layer Feedforward Neural Network
11.4.2 Multilayer Feedforward Neural Networks
11.4.3 Recursive Neural Network (RNN)
11.4.4 Convolutional Layer Network (CNN)
11.4.5 Backpropagation Neural Network
11.4.5.1 Static Backpropagation
11.4.5.2 Recurrent Backpropagation
11.5 Problems in Artificial Neural Networks
11.5.1 Techniques to Avoid Overfitting When Neural Networks are Trained
11.6 Convergence of Neural Network
11.6.1 Adaptive Convergence (or Just Convergence)
11.6.2 Reactive Convergence
11.7 Key Features of the Error Surface
11.7.1 Local Minima
11.7.2 Flat Regions (Saddle Points)
11.7.3 High-Dimensional
11.8 Application of Artificial Neural Network
11.9 Conclusion
References
Chapter 12 A Case Study on Machine Learning to Predict the Students’ Result in Higher Education
12.1 Introduction
12.1.1 Literature Review
12.2 Proposed Model
12.2.1 Participants and Datasets
12.2.2 Data Retrieval
12.2.3 Data Preprocessing
12.3 Result and Discussion
12.3.1 Model Evaluation Metrics
12.3.2 Decision Tree Classification
12.3.3 KNN Classification
12.3.4 Random Forest Tree Classification
12.3.5 X-Gradient Boosting Tree Classification
12.4 Comparative Results for Different Classification Models
12.5 Conclusion and Future Scope
References
Chapter 13 Data Analytic Approach for Assessment Status of Awareness of Tuberculosis in Nigeria
13.1 Introduction
13.2 Related Works
13.3 Materials and Methods
13.3.1 Population and Sample
13.3.2 Tools and Designing
13.3.3 Task Procedures
13.3.4 Data Analysis and Results
13.4 Results and Discussion
13.5 Conclusions
Acknowledgements
References
Chapter 14 Active Learning from an Imbalanced Dataset: A Study Conducted on the Depression, Anxiety, and Stress Dataset
14.1 Introduction
14.2 Literature Survey
14.3 Problem Statement
14.4 Necessity of Defining the Problem/Research Gap
14.5 Objectives
14.5.1 Primary Objective
14.5.2 Secondary Objective
14.6 Dataset
14.6.1 Data Collection
14.6.2 Data Description
14.6.3 Data Preprocessing
14.6.4 Exploratory Data Analysis
14.6.4.1 Analysis of DASS
14.6.4.2 Analysis of the TIPI Test
14.6.4.3 Analysis of Time Taken by the Users to Complete the Survey
14.6.4.4 Analysis of the Validity-Check List and their Relationship with the Education Information
14.7 Implementation Design
14.7.1 Class Imbalance
14.7.2 SMOTE
14.7.3 Model Building
14.7.4 Evaluation Metric
14.8 Results and Conclusion
References
Chapter 15 Classification of the Magnetic Resonance Imaging of the Brain Tumor Using the Residual Neural Network Framework
15.1 Introduction
15.2 Literature Review
15.3 Architecture of Resnet Medical Imaging Modalities
15.4 Stages for Implementation of the Resnet Framework
15.4.1 Preprocessing
15.4.2 Training the Network
15.4.3 Segmentation
15.4.4 Focal Loss Function
15.5 Results and Discussions
15.6 Conclusions and Future Scope
References
Index

Handbook of Machine Learning for Computational Optimization

Demystifying Technologies for Computational Excellence: Moving Towards Society 5.0

Series Editors: Vikram Bali and Vishal Bhatnagar

This series encompasses research work in the field of Data Science, Edge Computing, Deep Learning, Distributed Ledger Technology, Extended Reality, Quantum Computing, Artificial Intelligence, and various other related areas, such as natural language processing and technologies, high-level computer vision, cognitive robotics, automated reasoning, multivalent systems, symbolic learning theories and practice, knowledge representation and the semantic web, intelligent tutoring systems, AI, and education. The prime reason for developing and growing this new book series is to focus on the latest technological advancements – their impact on the society, the challenges faced in implementation, and the drawbacks or reverse impact on the society due to technological innovations. With these technological advancements, every individual has personalized access to all services, with all devices connected with each other and communicating among themselves, thanks to the technology for making our life simpler and easier. These aspects will help us to overcome the drawbacks of the existing systems and help in building new systems with the latest technologies that will help the society in various ways, proving Society 5.0 as one of the biggest revolutions in this era.

Industry 4.0, AI, and Data Science Research Trends and Challenges Edited by Vikram Bali, Kakoli Banerjee, Narendra Kumar, Sanjay Gour, and Sunil Kumar Chawla

Handbook of Machine Learning for Computational Optimization Applications and Case Studies Edited by Vishal Jain, Sapna Juneja, Abhinav Juneja, and Ramani Kannan

Data Science and Innovations for Intelligent Systems Computational Excellence and Society 5.0 Edited by Kavita Taneja, Harmunish Taneja, Kuldeep Kumar, Arvind Selwal, and Ouh Lieh

Artificial Intelligence, Machine Learning, and Data Science Technologies: Future Impact and Well-Being for Society 5.0. Edited by Neeraj Mohan, Ruchi Singla, Priyanka Kaushal, and Seifedine Kadry.

For more information on this series, please visit: https://www.routledge.com/Demystifying-Technologies-for-Computational-Excellence-Moving-TowardsSociety-5.0/book-series/CRCDTCEMTS

Handbook of Machine Learning for Computational Optimization Applications and Case Studies

Edited by

Vishal Jain, Sapna Juneja, Abhinav Juneja, and Ramani Kannan

First edition published 2022 by CRC Press, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742, and by CRC Press, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

© 2022 selection and editorial matter, Vishal Jain, Sapna Juneja, Abhinav Juneja, and Ramani Kannan; individual chapters, the contributors

CRC Press is an imprint of Taylor & Francis Group, LLC

Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk

Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Jain, Vishal, 1983- editor.
Title: Handbook of machine learning for computational optimization : applications and case studies / Vishal Jain, Sapna Juneja, Abhinav Juneja, Ramani Kannan.
Description: Boca Raton : CRC Press, 2021. | Series: Demystifying technologies for computational excellence | Includes bibliographical references and index.
Identifiers: LCCN 2021017098 (print) | LCCN 2021017099 (ebook) | ISBN 9780367685423 (hardback) | ISBN 9780367685454 (paperback) | ISBN 9781003138020 (ebook)
Subjects: LCSH: Machine learning—Industrial applications. | Mathematical optimization—Data processing. | Artificial intelligence.
Classification: LCC Q325.5 .H295 2021 (print) | LCC Q325.5 (ebook) | DDC 006.3/1—dc23
LC record available at https://lccn.loc.gov/2021017098
LC ebook record available at https://lccn.loc.gov/2021017099

ISBN: 978-0-367-68542-3 (hbk)
ISBN: 978-0-367-68545-4 (pbk)
ISBN: 978-1-003-13802-0 (ebk)

DOI: 10.1201/9781003138020

Typeset in Times by codeMantra

Contents

Preface .......... vii
Editors .......... xi
Contributors .......... xiii
Chapter 1 Random Variables in Machine Learning .......... 1
  Piratla Srihari
Chapter 2 Analysis of EMG Signals using Extreme Learning Machine with Nature Inspired Feature Selection Techniques .......... 27
  A. Anitha and A. Bakiya
Chapter 3 Detection of Breast Cancer by Using Various Machine Learning and Deep Learning Algorithms .......... 51
  Yogesh Jadhav and Harsh Mathur
Chapter 4 Assessing the Radial Efficiency Performance of Bus Transport Sector Using Data Envelopment Analysis .......... 71
  Swati Goyal, Shivi Agarwal, Trilok Mathur, and Nirbhay Mathur
Chapter 5 Weight-Based Codes—A Binary Error Control Coding Scheme—A Machine Learning Approach .......... 89
  Piratla Srihari
Chapter 6 Massive Data Classification of Brain Tumors Using DNN: Opportunity in Medical Healthcare 4.0 through Sensors .......... 95
  Rohit Rastogi, Akshit Rajan Rastogi, D.K. Chaturvedi, Sheelu Sagar, and Neeti Tandon
Chapter 7 Deep Learning Approach for Traffic Sign Recognition on Embedded Systems .......... 113
  A. Shivankit, Gurminder Kaur, Sapna Juneja, and Abhinav Juneja
Chapter 8 Lung Cancer Risk Stratification Using ML and AI on Sensor-Based IoT: An Increasing Technological Trend for Health of Humanity .......... 137
  Rohit Rastogi, Mukund Rastogi, D.K. Chaturvedi, Sheelu Sagar, and Neeti Tandon
Chapter 9 Statistical Feedback Evaluation System .......... 153
  Alok Kumar and Renu Jain
Chapter 10 Emission of Herbal Woods to Deal with Pollution and Diseases: Pandemic-Based Threats .......... 183
  Rohit Rastogi, Mamta Saxena, D. K. Chaturvedi, and Sheelu Sagar
Chapter 11 Artificial Neural Networks: A Comprehensive Review .......... 203
  Neelam Nehra, Pardeep Sangwan, and Divya Kumar
Chapter 12 A Case Study on Machine Learning to Predict the Students' Result in Higher Education .......... 229
  Tejashree U. Sawant and Urmila R. Pol
Chapter 13 Data Analytic Approach for Assessment Status of Awareness of Tuberculosis in Nigeria .......... 243
  Ishola Dada Muraina, Rafeeah Rufai Madaki, and Aisha Umar Suleiman
Chapter 14 Active Learning from an Imbalanced Dataset: A Study Conducted on the Depression, Anxiety, and Stress Dataset .......... 251
  Umme Salma M. and Amala Ann K. A.
Chapter 15 Classification of the Magnetic Resonance Imaging of the Brain Tumor Using the Residual Neural Network Framework .......... 267
  Tina and Sanjay Kumar Dubey
Index .......... 279

Preface

Machine learning has been a trusted technology over decades and has flourished on a global scale, touching the lives of each one of us. Modern-day decision making and processes all depend on machine learning technology to make matured short-term and long-term decisions. Machine learning enjoys phenomenal support from the research community, with landmark contributions that enable machine learning to find new applications every day. The dependency of human processes on machine learning-driven systems encompasses all spheres of current state-of-the-art systems with the level of reliability it offers. There is a huge potential in this domain to make the best use of machines in order to ensure optimal prediction, execution, and decision making. Although machine learning is not a new field, it has evolved with ages, and the research community around the globe has made remarkable contributions to the growth and trust of applications that incorporate it. The predictive and futuristic approach associated with machine learning makes it a promising tool for business processes as a sustainable solution. There is ample scope in the technology to propose and devise newer algorithms, which are more efficient and reliable, to give machine learning an entirely new dimension in discovering certain latent domains of applications it may support. This book looks forward to addressing the issues which can resolve the modern-day computational bottlenecks that need smarter and optimal machine learning-based intervention to make processes even more efficient. This book presents innovative and improvised machine learning techniques which can complement, enrich, and optimize the existing glossary of machine learning methods. This book also has contributions focusing on application-based innovative optimized machine learning solutions, which will give the readers a vision of how innovation using machine learning may aid in the optimization of human and business processes. We have tried to knit this book as a read for all, wherein learners and researchers shall get insights about the possible dimensions to explore in their specific areas of interest. The chapter-wise description is as follows:

Chapter 1 explores the basic concepts of random variables (single and multiple), their role and applications in the specified areas of machine learning.

Chapter 2 demonstrates the Wigner-Ville transformation technique to extract time-frequency domain features from typical and atypical EMG signals – myopathy (muscle disorder) and amyotrophic lateral sclerosis (neuro disorder). Nature inspired feature selection algorithms – whale optimization algorithm (WOA), genetic algorithm (GA), bat algorithm (BA), fire-fly optimization (FA), and particle swarm optimization (PSO) – are utilized to determine the relevant features from the constructed features.

Chapter 3 presents various algorithms of machine learning (ML) which can be used for breast cancer detection. Since these techniques are commonly used in many areas, they are also used for making decisions regarding diagnosis and clinical studies.

Chapter 4 measures the efficiency and thoroughly explores the scope for optimal utilization of the input resources owned by depots of the RSRTC. The new slack model (NSM) of DEA is used, as it enumerates the slacks for input and output variables. The model satisfies the radial properties, unit invariance, and translation invariance. This study enables policy-makers to evaluate inputs for consistent output up to the optimum level and improve the performance of the inefficient depots.

Chapter 5 presents a binary error control coding scheme using weight-based codes. This method is widely used for classification and employs the K nearest neighbor algorithm. The chapter also discusses the role of the distance matrix in Hamming code evaluation.

Chapter 6 exhibits framed MRI images of the brain used to create deep neural system models that can discriminate between different types of brain tumors. To perform this task, deep learning is used. It is a type of machine learning where lower levels are responsible for many types of higher-level definitions that appear above the different levels of the screen.

Chapter 7 focuses on creating an affordable and effective warning system for drivers that is able to detect the warning sign boards and speed limits in front of the moving vehicle, and prompt the driver to slow to safer speeds if required. The software internally works on a deep learning-based modern neural network, YOLO (You Only Look Once), with certain modifications, which allows it to detect road signs quickly and accurately on low-powered ARM CPUs.

Chapter 8 presents an approach for the classification of lung cancer based on the associated risks (high risk, low risk). The study was conducted using a lung cancer classification scheme by studying micrographs and classifying them with a deep neural network in a machine learning (ML) framework.

Chapter 9 presents a statistical feedback evaluation system that allows designing an effective questionnaire using statistical knowledge of the text. In this questionnaire, questions and their weights are not pre-decided. It is established that questionnaire-based feedback systems are traditional and quite straightforward, but these systems are very static and restrictive. The proposed statistical feedback evaluation system is helpful to users and manufacturers in finding the appropriate item as per their choices.

Chapter 10 presents an experimental work based on data collected on various parameters with the scientific measuring analytical software tools, the Airveda instrument, and IoT-based sensors capturing humidity and temperature data from atmospheric air at certain intervals of time, to know the patterns of pollution increment or decrement in the atmosphere of the nearby area.

Chapter 11 concerns neural network representations and defining suitable problems for neural network learning. It covers numerous substitute designs for the primitive units making up an artificial neural network, such as perceptron units, sigmoid units, and linear units. This chapter also covers the learning algorithms for training single units. The backpropagation algorithm for multilayer perceptron training is described in detail. Also, general issues such as the representational capabilities of ANNs, overfitting problems, and substitutes to the backpropagation algorithm are explained.

Chapter 12 proposes a system which makes use of the machine learning approach to predict a student's performance. Based on a student's current performance and some measurable past attributes, the end result can be predicted to classify them among good or bad performers. The predictive models will make students aware who are likely to struggle during the final examinations.

Chapter 13 presents a study that assists in assessing the awareness status of people on TB towards its mitigation and serves as a contribution to the field of health informatics. Indeed, the majority of participants claimed that they had low awareness of TB and its associated issues in their communities. The participants were from Kano state, a strategic location in the northern part of Nigeria, which means that the result of the experiment can represent major opinions of northern residents.

Chapter 14 deals with psychological data related to depression, anxiety, and stress to study how classification and analysis are carried out on imbalanced data. The proposed work not only contributes practical information about balancing techniques like SMOTE, but also reveals the strategy for dealing with the working of many existing classification algorithms like SVM, Random Forest, XGBoost etc. on an imbalanced dataset.

Chapter 15 proposes the construction of a segmented mask of MRI (magnetic resonance image) using a CNN approach with the implementation of the ResNet framework. The understanding of the ResNet framework using a layered approach will provide extensive anatomical information of the higher-dimensional image for precise clinical analysis for curative treatment of patients.

Editors

Vishal Jain is an Associate Professor in the Department of CSE at Sharda University, Greater Noida, India. He earlier worked with Bharati Vidyapeeth's Institute of Computer Applications and Management (BVICAM), New Delhi, India (affiliated with Guru Gobind Singh Indraprastha University, and accredited by the All India Council for Technical Education). He first joined BVICAM as an Assistant Professor. Before that, he worked for several years at the Guru Premsukh Memorial College of Engineering, Delhi, India. He has more than 350 research citations with Google Scholar (h-index 9 and i10-index 9). He has authored more than 70 research papers in reputed conferences and journals, including Web of Science and Scopus. He has authored and edited more than 10 books with various reputed publishers, including Springer, Apple Academic Press, Scrivener, Emerald, and IGI-Global. His research areas include information retrieval, semantic web, ontology engineering, data mining, ad hoc networks, and sensor networks. He was the recipient of a Young Active Member Award for the year 2012–2013 from the Computer Society of India, the Best Faculty Award for the year 2017, and the Best Researcher Award for the year 2019 from BVICAM, New Delhi.

Sapna Juneja is a Professor in IMS, Ghaziabad, India. Earlier, she worked as a Professor in the Department of CSE at IITM Group of Institutions and BMIET, Sonepat. She has more than 16 years of teaching experience. She completed her doctorate and masters in Computer Science and Engineering from M.D. University, Rohtak, in 2018 and 2010, respectively. Her broad area of research is software reliability of embedded systems. Her areas of interest include software engineering, computer networks, operating systems, database management systems, and artificial intelligence. She has guided several research theses of UG and PG students in Computer Science and Engineering. She is editing a book on recent technological developments.

Abhinav Juneja is currently working as a Professor in the Department of IT at KIET Group of Institutions, Delhi-NCR, Ghaziabad, India. Earlier, he worked as an Associate Director and a Professor in the Department of CSE at BMIET, Sonepat. He has more than 19 years of teaching experience for postgraduate and undergraduate engineering students. He completed his doctorate in Computer Science and Engineering from M.D. University, Rohtak, in 2018 and did his masters in Information Technology from GGSIPU, Delhi. He has research interests in the fields of software reliability, IoT, machine learning, and soft computing. He has published several papers in reputed national and international journals. He has been a reviewer for several journals of repute and has been on various committees of international conferences.

Ramani Kannan is currently working as a Senior Lecturer, Center for Smart Grid Energy Research, Institute of Autonomous Systems, Universiti Teknologi PETRONAS (UTP), Malaysia. Dr. Kannan completed his Ph.D. (Power Electronics and Drives) from Anna University, India, in 2012; M.E. (Power Electronics and Drives) from Anna University, India, in 2006; and B.E. (Electronics and Communication) from Bharathiyar University, India, in 2004. He has more than 15 years of experience in prestigious educational institutes. Dr. Kannan has published more than 130 papers in various reputed national and international journals and conferences. He is the editor, co-editor, guest editor, and reviewer of various books, including for Springer Nature, Elsevier etc. He received the award for best presenter at CENCON 2019, the IEEE Conference on Energy Conversion, Indonesia.

Contributors

Shivi Agarwal, Department of Mathematics, BITS Pilani, Pilani, Rajasthan, India
Amala Ann K. A., Data Science Department, CHRIST (Deemed to be University), Bangalore, India
A. Anitha, D.G. Vaishnav College, Chennai, India
A. Bakiya, MIT Campus, Anna University, Chennai, India
D. K. Chaturvedi, Department of Electrical Engineering, DEI, Agra, India
Sanjay Kumar Dubey, Department of Computer Science and Engineering, Amity University, Noida, Uttar Pradesh, India
Ayushi Ghosh, Maulana Abul Kalam Azad University of Technology, Kolkata, West Bengal, India
Swati Goyal, Department of Mathematics, BITS Pilani, Pilani, Rajasthan, India
Yogesh Jadhav, Research Scholar, Madhyanchal Professional University, Bhopal, Madhya Pradesh, India
Renu Jain, University Institute of Engineering and Technology, CSJM University, Kanpur, Uttar Pradesh, India
Abhinav Juneja, KIET Group of Institutions, Ghaziabad, Uttar Pradesh, India
Sapna Juneja, Department of Computer Science, IMS Engineering College, Ghaziabad, Uttar Pradesh, India
Gurminder Kaur, Department of Computer Science and Engineering, BM Institute of Engineering & Technology, Sonepat, India
Alok Kumar, University Institute of Engineering and Technology, CSJM University, Kanpur, Uttar Pradesh, India
Divya Kumar, Department of ECE, IFTMU, Moradabad, Uttar Pradesh, India
Rafeeah Rufai Madaki, Department of Computer Science, Yusuf Maitama Sule University (Formerly, Northwest University), Kano, Nigeria
Harsh Mathur, Department of Computer Science, Madhyanchal Professional University, Bhopal, Madhya Pradesh, India
Nirbhay Mathur, Department of Electrical & Electronics, Universiti Teknologi PETRONAS, Perak, Malaysia
Trilok Mathur, Department of Mathematics, BITS Pilani, Pilani, Rajasthan, India
Ishola D. Muraina, Department of Computer Science, Yusuf Maitama Sule University (Formerly, Northwest University), Kano, Nigeria
Neelam Nehra, Department of ECE, MSIT, Delhi, India
Urmila R. Pol, Department of Computer Science, Shivaji University, Kolhapur, Maharashtra, India
Akshit Rajan Rastogi, Department of CSE, ABES Engg. College, Ghaziabad, Uttar Pradesh, India
Mukund Rastogi, Department of CSE, ABES Engg. College, Ghaziabad, Uttar Pradesh, India
Rohit Rastogi, Department of CSE, ABES Engg. College, Ghaziabad, Uttar Pradesh, India
Sheelu Sagar, Amity International Business School, Noida, Uttar Pradesh, India
Pardeep Sangwan, Department of ECE, MSIT, Delhi, India
Tejashree U. Sawant, Department of Computer Science, Shivaji University, Kolhapur, Maharashtra, India
Mamta Saxena, Ministry of Statistics, Govt. of India, Delhi, India
A. Shivankit, Department of CSE, BM Institute of Engineering & Technology, Sonepat, India
Piratla Srihari, Department of ECE, Geethanjali College of Engineering and Technology, Hyderabad, Telangana, India
Aisha Umar Suleiman, Department of ECE, Yusuf Maitama Sule University (Formerly, Northwest University), Kano, Nigeria
Neeti Tandon, Research Scholar, Vikram University, Ujjain, Madhya Pradesh, India
Tina, Department of Computer Science Engineering, Amity University, Noida, Uttar Pradesh, India
Umme Salma M., Department of Computer Science, CHRIST (Deemed to be University), Bangalore, India

1 Random Variables in Machine Learning

Piratla Srihari
Geethanjali College of Engineering and Technology


DOI: 10.1201/9781003138020-1


1.1 INTRODUCTION

Predicting the future using knowledge about the past is the fundamental objective of machine learning. In a digital communication system, a binary data generation scheme referred to as differential pulse code modulation (DPCM) works on a similar principle: based on the past behaviour of the signal, its future value is predicted using a predictor, and a tapped delay line filter serves the purpose. The higher the order of the predictor, the better the prediction, i.e. the smaller the prediction error.[1] Thus machine learning, even though not referred to by this name earlier, was and is an integral part of the technical world. The prediction error with reference to a DPCM system is now addressed as the confidence interval in connection with machine learning. Less prediction error implies a better prediction; as far as machine learning is concerned, the probability that the predicted value lies within the tolerable limits of error (the confidence interval) should be large, and this probability is a metric for the accuracy of prediction.

The machine learning methodology involves building a statistical model for a particular task, based on knowledge of the past data. This collected past data with reference to a task is referred to as the data set. Developing models to predict 'what is going to happen' based on what has 'happened' is predictive modelling. In detective analysis also, the 'happened', i.e. past data, is used, but there is no necessity of predicting what is 'going to happen'. For example, 30–35 years back, the Reynolds-045 pen ruled the market for a long time, specifically in South India; presently, its sales are not that significant. If it is required to study the journey of the pen from past to present, detective analysis is to be performed, since there is no necessity of predicting its future sales. Similarly, a study of 'Why did the sales of a particular model of an automobile vehicle come down?' belongs to the same category.

The data set referred to above is used by the machine to learn, and hence it is also referred to as the training data set. After learning, the machine faces the test data. Using the knowledge gained through learning, the machine should act on the test data to resolve the task assigned.

In predictive modelling, if the learning mechanism of the machine is supervised by somebody, then the mode of learning is referred to as supervised learning. That supervising 'somebody' is the training data set, also referred to as labelled training data, where each labelled data element such as Di is mapped to a data element D0. Many such pairs of elements are the learning resources for the machine and are used to build the model, using which the machine predicts; this knowledge about the mapped pairs helps the machine to map the test data pairs (input-output pairs). It can be inferred about supervised learning that there is a target variable which is to be predicted.

Example: Based on the symptoms of a patient, it is to be predicted whether he/she is suffering from a particular disease. To enable this prediction, the past history or statistics, such as patients with what (similar) symptoms were categorized under what disease, is used. This past data (both symptoms and categorization) is the training data set that supervises the machine in its process of prediction. Here, the target variable is the disease of the patient, which is to be predicted.

In the unsupervised learning mechanism, the training data is considered to be unlabelled, i.e. only Di. The major functionality of unsupervised learning is pattern identification. Some of the tasks under unsupervised learning are:

Clustering: Group all the people wearing white (near white) shirts.

Density Estimation: If points are randomly distributed along an axis, the regions along the axis with minimum/moderate/maximum number of points need to be estimated.

It can be inferred about unsupervised learning that there is no target variable to be predicted. With reference to the previous example of the patient with ill-health, all the people with a particular symptom of ill-health may need to be grouped; however, the disease of the patient need not be predicted, which is the target variable with reference to supervised learning.

1.2 RANDOM VARIABLE

1.2.1 DEFINITION AND CLASSIFICATION

For an experiment E to be performed, let S be the set of all possible outcomes of the experiment (sample space) and ξ be the outcome defined on S. The domain of X = f(ξ) is S. The range of the function depends on the mapping between the outcomes of the experiment and a numerical value, specifically a real number. This X is referred to as a random variable; thus, the random variable is a real-valued function defined on S.

Example: For the experiment E = 'simultaneous throw of two dice', S = {(1,1), (1,2), …, (1,6), (2,1), (2,2), …, (2,6), …, (6,6)}, where each number of each pair of this set indicates the face shown by the particular die. Each element of S is mapped to a real value by the function X = f(ξ) = sum of the two faces. Table 1.1 gives the pairs of all possible outcomes with the corresponding real values mapped.


TABLE 1.1 Sample Space and Mapped Values

Pair in the Sample Space                  | Real Value
(1,1)                                     | 2
(1,2), (2,1)                              | 3
(1,3), (2,2), (3,1)                       | 4
(1,4), (2,3), (3,2), (4,1)                | 5
(1,5), (2,4), (3,3), (4,2), (5,1)         | 6
(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)  | 7
(2,6), (3,5), (4,4), (5,3), (6,2)         | 8
(3,6), (4,5), (5,4), (6,3)                | 9
(4,6), (5,5), (6,4)                       | 10
(5,6), (6,5)                              | 11
(6,6)                                     | 12

This X is referred to as a random variable taking all the real values as mentioned. Thus, a random variable can be considered as a rule by which a real value is assigned to each outcome of the experiment. If X = f(ξ) possesses a countable (finite or countably infinite) range (the points in the range may be large in number, but can be counted), X is referred to as a discrete random variable (categorical with reference to machine learning). On the other hand, if the range of the function is uncountably infinite (large in number and cannot be counted), X is referred to as a continuous random variable[2,3] (similar terminology is used with reference to machine learning also).

1.2.1.1 Applications in Machine Learning

In the process of prediction, which is the major functionality for which machine learning is used, the variables involved are the predictor variable and the target variable. The very fundamental task of a machine learning methodology is to identify these variables with reference to the given task. A predictor is an independent variable, which is used for prediction, and the target variable is that being predicted and is dependent. In machine learning terminology, variables are categorical (discrete in nature, e.g. number of people who survived in an accident) and continuous (can have an infinite number of values between the maximum and the minimum, e.g. age of a person). These variables are nothing but the discrete and continuous random variables, and are task specific. Identification of these variables is the primary stage of machine learning.[4]
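A minimal Python sketch of the dice example (illustrative, not from the chapter): it enumerates the sample space S, applies the mapping X = f(ξ) = sum of the two faces, and recovers the probabilities that underlie Table 1.1 and the next subsection.

```python
from collections import Counter
from fractions import Fraction

# Sample space of the experiment: simultaneous throw of two dice.
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# The random variable X maps each outcome to the sum of the two faces.
counts = Counter(i + j for i, j in S)

# Induced pmf: P(X = x) = (number of favourable pairs) / 36.
for x in sorted(counts):
    # e.g. X = 7 occurs for 6 of the 36 pairs, probability 6/36 = 1/6
    print(x, counts[x], Fraction(counts[x], len(S)))
```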

1.2.2 DESCRIBING A RANDOM VARIABLE IN TERMS OF PROBABILITIES

With reference to the above example, it can be stated that the value mapped (which is considered as the value taken by the random variable X) is not always 2 or 3 or 4 etc., but there is some certainty associated with the mapping, i.e. X will take a value of 7 only under certain outcomes, with a certainty of 6/36.


Thus, the definition of a random variable is not complete simply by specifying its value, without its probabilistic description, which deals with the probabilities that X takes on a specific value or values. This description is done by the probability mass function (pmf), which assigns a probability to each value of X. The probability for X = x (a value taken by X) is P(X = x) and is assigned by the corresponding pmf.[2]

Example: Consider the case of tossing an unbiased coin, and let this process of tossing be repeated till a head occurs for the first time. X = number of times the coin is tossed is the random variable. The corresponding sample space can be concluded as follows:

Since the coin is a fair coin, getting head (H) or tail (T) in each flip is equally likely, i.e. P(H) = P(T) = 1/2. In the first flip, if a head occurs, there will be no second toss; then X = 1 with probability 1/2. On the other hand, if it is a tail, then the user will go to the second flip. If the outcome is a head in the second flip, there will be no third toss; now X = 2 with probability 1/4. This is recurrent. Table 1.2 expresses the various values taken by X with the corresponding probabilities.

The function that assigns the probability for each value taken by X, i.e. the pmf, is

P(X = n) = (1/2)^n, n = 1, 2, 3, …

The properties possessed by the pmf are:

(i) 0 ≤ P(X = n) ≤ 1
(ii) Σ_n P(X = n) = 1
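Both properties can be checked numerically for this pmf; a small illustrative sketch (fair coin simulated with Python's random module) compares the analytic probabilities (1/2)^n with the relative frequencies of 'toss until the first head'.

```python
import random

# Analytic pmf of X = number of tosses until the first head: P(X = n) = (1/2)**n.
analytic = {n: 0.5 ** n for n in range(1, 21)}
print(sum(analytic.values()))  # close to 1; the tail beyond n = 20 is negligible

def tosses_until_head() -> int:
    # Simulate one run of the experiment.
    n = 1
    while random.random() >= 0.5:  # tail with probability 1/2
        n += 1
    return n

trials = 100_000
freq = {}
for _ in range(trials):
    n = tosses_until_head()
    freq[n] = freq.get(n, 0) + 1

for n in range(1, 6):
    print(n, freq.get(n, 0) / trials, analytic[n])  # empirical vs analytic
```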

1.2.2.1 Ambiguity with Reference to Continuous Random Variable

Let a discrete random variable X take values from a set of N values, i.e. {0, 1, 2, 3, …, N − 1}, where the values are equally likely. The corresponding pmf is P(X = k) = 1/N, k = 0, 1, 2, …, (N − 1), and lim_{N→∞} P(X = k) = lim_{N→∞} 1/N = 0. Then the random variable X is considered to be continuous, and P(X = k) (where k is a value among the N) is found to be zero. Thus, it can be concluded that the probability of a continuous random variable taking a specified value is typically zero.[2] Under such a condition, the pmf is not suitable to describe a random variable.

TABLE 1.2 Probability Distribution

xi (value taken by X) | 1   | 2   | 3   | 4    | …
Probability           | 1/2 | 1/4 | 1/8 | 1/16 | …


1.2.2.1.1 Cumulative Distribution Function

Instead of X taking a specific value, consider the case of X lying in a range, i.e. X ≤ k; the corresponding probability P(X ≤ k) is referred to as the cumulative distribution function (CDF).[1] Thus, the CDF of a random variable X is FX(x) = P(X ≤ x), where 'x' is the value, and instead of the pmf, this is used to explain a continuous random variable. Since the CDF is also a probability, it is bounded as 0 ≤ FX(x) ≤ 1. Properties of the CDF are:

• FX(−∞) = 0, since the event X ≤ −∞ will never happen.
• FX(∞) = 1, since the event X ≤ ∞ is always true.
• FX(x1) ≤ FX(x2) if x1 < x2, since X ≤ x2 is a superset of X ≤ x1. Thus, the CDF is a nondecreasing function of x.
• P(x1 < X ≤ x2) = FX(x2) − FX(x1); the sketch below verifies this identity for the dice-sum variable.
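A brief illustrative sketch that builds the CDF of the dice-sum variable of Table 1.1 from its pmf and checks the interval identity above:

```python
from collections import Counter
from fractions import Fraction

# pmf of X = sum of two dice, as in Table 1.1.
counts = Counter(i + j for i in range(1, 7) for j in range(1, 7))
pmf = {x: Fraction(c, 36) for x, c in counts.items()}

def cdf(x: int) -> Fraction:
    # F_X(x) = P(X <= x): accumulate the pmf up to x.
    return sum((p for v, p in pmf.items() if v <= x), Fraction(0))

# P(4 < X <= 7) = F(7) - F(4); also computed directly from the pmf.
direct = sum((p for v, p in pmf.items() if 4 < v <= 7), Fraction(0))
print(cdf(7) - cdf(4), direct)  # both 5/12 (i.e. 15/36)
```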

1.2.3 PROBABILITY DENSITY FUNCTION

Even though the CDF is a substitute for the pmf to explain a continuous random variable, for all random variables it may not be in a closed form.

Example: For a Gaussian random variable Y, the CDF is P(Y ≤ k) = 1 − Q((k − m)/σ), and the function Q(·) cannot be expressed in a closed form. Here, m and σ are, respectively, the mean and standard deviation of Y.

Under such circumstances, the probability density function (pdf) is an alternative tool to describe a random variable statistically. It is defined for a random variable X at x as

fX(x) = lim_{δ→0} P(x ≤ X < x + δ)/δ

The certainty or the probability with which X is in the interval (x, x + δ) is P(x ≤ X < x + δ), and the denominator δ is the width (length) of the interval. Thus, fX(x) is the probability normalized by the width of the interval and can be interpreted as probability divided by width. From the properties of the CDF, P(x ≤ X < x + δ) = FX(x + δ) − FX(x). Hence,

fX(x) = lim_{δ→0} [FX(x + δ) − FX(x)]/δ = (d/dx) FX(x),

i.e. the rate of change of the CDF is referred to as the pdf.[2]

1.2.3.1 Properties of pdf

• f(x) ≥ 0
• ∫_{−∞}^{∞} f(x) dx = 1
• FX(x) = ∫_{−∞}^{x} f(λ) dλ
• ∫_{a}^{b} f(x) dx = P(a < X < b)
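A brief illustrative check of these pdf properties (not from the chapter): it evaluates the standard Gaussian density N(0, 1) with a simple trapezoidal rule, confirming the unit total area and an interval probability.

```python
import math

# Standard Gaussian pdf with m = 0 and sigma = 1.
def f(x: float) -> float:
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def integrate(a: float, b: float, steps: int = 10_000) -> float:
    # Trapezoidal-rule approximation of the integral of f over (a, b).
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return total * h

print(integrate(-8, 8))   # ~1.0: a valid pdf encloses unit area
print(integrate(-1, 1))   # ~0.6827: P(-1 < X < 1) for N(0, 1)
```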

1.2.3.2 Applications in Machine Learning

• A confidence interval is an estimate of a parameter computed based on the statistical observations of it.
• This specifies a range of reasonable values for the parameter being estimated, and the accuracy of estimation is expressed in terms of the confidence level.
• For a given confidence interval, if the observations of a parameter 'p' are p1, p2, …, pn, with the confidence level ε, it can be interpreted that the estimated value of 'p' lies in the given confidence interval with a probability ε.
• This is expressed as P(a < X < b), where X is the parameter being estimated, (a, b) is the confidence interval, and P(a < X < b) is the confidence level, which can be computed from the probability density of the parameter being estimated.
• The confidence level P(α < X < β) can be computed as ∫_{α}^{β} f(x) dx.

• In evaluating a model based on the predicted probabilities (in connection with binary classification), one of the evaluation metrics is the area under curve-receiver operating characteristic (AUC-ROC) metric.
• This ROC is also referred to as the false-positive rate versus true-positive rate curve, which can be obtained from the predicted probabilities, with reference to binary classification.
• Once this curve is known, the area under it, i.e. ∫ f(x) dx, will be a measure of the accuracy of the prediction.

• Ideally, the area is assumed as 1 (which is the area enclosed by any valid density function).
• The closer the measured area is to 1, the better the performance of the prediction model, as the sketch below illustrates.[4,5]
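A hedged illustration of the AUC-ROC point above (assuming scikit-learn is installed; the labels and scores are made-up toy values):

```python
# Computes AUC-ROC for a binary classifier from its predicted probabilities.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                     # ground-truth labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3]   # predicted P(class = 1)

# Closer to 1.0 means the model ranks positives above negatives more reliably.
print(roc_auc_score(y_true, y_score))  # 0.9375 for these toy values
```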

1.3 VARIOUS RANDOM VARIABLES USED IN MACHINE LEARNING

1.3.1 CONTINUOUS RANDOM VARIABLES

1.3.1.1 Uniform Random Variable

• A continuous random variable X with constant density within its specified limits (α, β) is referred to as a (continuous) uniform random variable; its density function is

f(x) = 1/(β − α) for α < x < β, and 0 elsewhere.

• A uniform discrete X takes all the possible values between (α, β) with equal likelihood.


The physical significance of the uniform density is that X lies in any interval of a given length within the limits (α, β) with the same probability, the density being 1/(β − α); for any confidence interval (k, k + δ), where α < k < k + δ < β, the confidence level is P(k < X < k + δ) = δ/(β − α).

˜

1.3.1.1.1 Applications in Machine Learning • When there is no prior realization of the distribution of the variable being predicted, the variable is considered to lie anywhere in the interval under consideration with the same probability, i.e. the variable under consideration is treated as uniform. • In classifcation, decision tree learning is the most widely used algorithm. The objective of decision tree is to have pure node. The purity of a node and the information gain are related as: • More impure is the node, more is the information required to describe. • More is the information gain, more homogeneous or pure is the node. • Information gain = 1 − entropy • More is the entropy, less pure is the node, and vice versa. • A uniform random variable will have the maximum entropy. • For example, in a decision tree, in a node if there are two equiprobable classes, then the corresponding entropy is maximum and is the indication for more impure node.[5,6] 1.3.1.2 Gaussian (Normal) Random Variable • The density of normally distributed X denoted as N m, ˜ 2 is ˇ ( x − m )2  1 2 f (x) = exp  − , where ‘m’ and ‘σ ’ are the mean and 2˜ 2  2˙˜ 2 ˘ variance of X, respectively.

(

)

Random Variables in Machine Learning

FIGURE 1.1 Gaussian density function.

• This density is a bell-shaped curve, with a point of symmetry at x = m, and it has its maximum value at x = m (Figure 1.1).
• The CDF of the normal variable X is FX(x) = 1 − Q((x − m)/σ), where Q(k) = (1/√(2π)) ∫_{k}^{∞} exp(−x²/2) dx, which does not have a closed-form solution (it can, however, be evaluated numerically, as the sketch below shows).
• The density curve is symmetrical about its mean, i.e. equally distributed about its mean.
• If the distribution is more oriented to the right of its mean, it is said to be right-skewed.
• Similarly, a left-skewed distribution can also be identified.
• Generally, it is preferred to have a zero coefficient of skewness (a measure of the symmetry of the given density function).[1]
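Although Q(·) has no closed form, it can be evaluated numerically; a small sketch, assuming the standard identity Q(x) = erfc(x/√2)/2 with the math module's complementary error function:

```python
import math

def Q(x: float) -> float:
    # Gaussian tail probability via the complementary error function:
    # Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2))

def normal_cdf(x: float, m: float = 0.0, sigma: float = 1.0) -> float:
    # F_X(x) = 1 - Q((x - m) / sigma), as in the text.
    return 1.0 - Q((x - m) / sigma)

print(normal_cdf(0))     # 0.5: half the mass lies below the mean
print(normal_cdf(1.96))  # ~0.975 for the standard normal
```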

1.3.1.2.1 Applications in Machine Learning

• Experimental observations in many case studies can be fitted with a Gaussian density:
• Marks of all the students in a class in a particular subject
• Variations in the share price of a particular company
• With reference to machine learning, to make more predictions, more data is to be added to the model. The following are the ways of adding the data:
• Add external data
• Use existing data more effectively


• Feature engineering refers to the technique of generating new features using existing features; no new data is added.
• Feature pre-processing is one of the primary steps in feature engineering. It involves updating or transforming the existing features, and this is referred to as feature transformation.
• Feature transformation involves replacing a variable by some mathematical function of it, such as 'log', 'square', 'square root', 'cube', 'cube root' etc.
• If a distribution is right-skewed or left-skewed, it is made normally distributed using the nth root or log, and the nth power or exponential, respectively.
• As per the central limit theorem, the density of the sum of 'n' independent random variables approaches a Gaussian density (demonstrated in the sketch below). This is the basis for assuming the channel noise is normally distributed with reference to a communication system, which facilitates the study of the noise performance of a communication system.[7]
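A compact simulation of the central limit theorem point above (illustrative; standard library only): sums of independent uniform variables behave approximately like a Gaussian.

```python
import random
import statistics

def sample_sum(n: int) -> float:
    # Each sample is the sum of n independent uniform(0, 1) variables.
    return sum(random.random() for _ in range(n))

n, trials = 30, 50_000
sums = [sample_sum(n) for _ in range(trials)]

# By the CLT the sums are approximately N(n/2, n/12).
print(statistics.mean(sums))       # ~ n/2 = 15.0
print(statistics.pvariance(sums))  # ~ n/12 = 2.5

# Fraction within one standard deviation of the mean: ~0.68 for a Gaussian.
mu, sd = n / 2, (n / 12) ** 0.5
print(sum(mu - sd < s < mu + sd for s in sums) / trials)
```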

1.3.2 DISCRETE RANDOM VARIABLES

1.3.2.1 Bernoulli Random Variable
When an experiment with only two outcomes is repeated independently multiple times, such repetitions are referred to as Bernoulli trials. Example: the experiment can be
• a single flip of a coin, where the possible outcomes are head and tail;
• identifying whether a person survived an accident: survived or not.
The pmf of a Bernoulli random variable is $P(X = m) = p^m q^{1-m}$, where m assumes only two values, 0 or 1, and can assume only one value at a time. $p$ and $q\,(= 1 - p)$ are the probabilities of success and failure, respectively. Success is the outcome whose probability is to be computed.
For example, when an unbiased coin is tossed, if it is required to compute the probability of getting a head (represented as m = 1), then $P(X = 1) = \left(\frac{1}{2}\right)^1 \left(\frac{1}{2}\right)^{1-1} = \frac{1}{2}$. Here, success is getting a head.
The CDF of the Bernoulli random variable is
$F_X(x) = \begin{cases} 0 & \text{for } x < 0 \\ q & \text{for } 0 \le x < 1 \\ p + q = 1 & \text{for } x \ge 1 \end{cases}$ [6]
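A quick numerical check of this pmf and CDF, assuming scipy.stats.bernoulli is available (the fair-coin p = 1/2 mirrors the example above):

```python
from scipy.stats import bernoulli

X = bernoulli(0.5)  # fair coin: p = q = 1/2
print(X.pmf(1))     # 0.5 -> probability of a head (success, m = 1)
print(X.cdf(0.5))   # 0.5 -> equals q on 0 <= x < 1
print(X.cdf(1))     # 1.0 -> p + q for x >= 1
```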

1.3.2.1.1 Applications in Machine Learning
• Classification is used to predict categorical variables through machine learning algorithms.
• The test data is to be assigned to a category based on some classification criteria.


• When the variable to be predicted is binary valued, i.e. the test data is to be categorized into one of the two available classes, such classification is binary classification, and the performance of such algorithms can be analysed using a Bernoulli process.[5]

1.3.2.2 Binomial Random Variable
When multiple independent Bernoulli trials are repeated, such a sequence of trials is referred to as a Bernoulli process. The output of a Bernoulli process is a binomial random variable/distribution.
Example: Let a fair coin be thrown. Since the outcome can be either head or tail, it comes under a Bernoulli trial. When the experiment is repeated a number of times, such a sequence of trials is a Bernoulli process.
When the experiment is performed $r$ times (each experiment being a Bernoulli trial), the probability of getting success $k$ times is given as $p(X = k) = {}^{r}C_k\, p^k q^{r-k}$, where $p$ and $q$ are the probabilities of success and failure, respectively, such that $p + q = 1$. Here, X is the number of successes in the experiment.
X is called a binomial random variable, since the above probability is the coefficient of the $k$th term in the binomial expansion of $(p + q)^r$.
If it is required to find the probability of a tail occurring four times when a fair coin is tossed ten times, then such probability is $p(X = 4) = {}^{10}C_4 \left(\frac{1}{2}\right)^4 \left(\frac{1}{2}\right)^6$.
The probability of having success $k$ times in $r$ trials of the experiment, in any order, is
$p(X = k) = \begin{cases} {}^{r}C_k\, p^k q^{r-k} & \text{for } k = 0, 1, 2, \ldots, r \\ 0 & \text{otherwise} \end{cases}$
and this is the pmf of X.
The corresponding CDF is $F_X(m) = P(X \le m) = \sum_{k=0}^{m} {}^{r}C_k\, p^k q^{r-k}$.
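The ten-toss example can be verified directly; a minimal sketch using Python's math.comb (illustrative, not from the chapter):

```python
from math import comb

def binom_pmf(k, r, p):
    """p(X = k) = rCk * p^k * (1 - p)^(r - k)."""
    return comb(r, k) * p**k * (1 - p)**(r - k)

# Probability of exactly four tails in ten tosses of a fair coin
print(binom_pmf(4, 10, 0.5))  # 210/1024 ~ 0.205

# CDF: F_X(m) is the sum of the pmf up to m
print(sum(binom_pmf(k, 10, 0.5) for k in range(5)))  # P(X <= 4) ~ 0.377
```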

The binomial distribution summarizes the number of successes in a series of Bernoulli experiments with success probability $p$. The Bernoulli distribution is the binomial distribution with a single trial.[8]

1.3.2.2.1 Multinoulli Distribution
Similar to the Bernoulli distribution used for binary classification, the multinoulli distribution also deals with categorical variables. It is a generalization of the Bernoulli distribution (binary classification) to multiclass classification, where k is the number of classes.
Example: In the case of throwing a die, let the sample space be S = {1, 2, 3, 4, 5, 6}. Thus, the number of classes can be considered as 6. Accordingly, the Bernoulli distribution is the multinoulli distribution with the number of classes k = 2.[8] A small sampling sketch follows.
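The following sketch draws from a multinoulli (categorical) distribution with k = 6 classes using numpy (the fair-die probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
faces = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6  # fair die: each draw is one multinoulli trial

rolls = rng.choice(faces, size=6000, p=probs)
values, counts = np.unique(rolls, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # each face ~1000 times
```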


1.3.2.2.2 Multinomial Distribution
Multiple independent multinoulli trials (e.g. throwing a die a number of times) follow a multinomial distribution, which is the generalization of the binomial distribution to discrete (categorical) experiments in which each trial has k possible outcomes. Here, in n trials of an experiment, each trial has k outcomes, which occur with probabilities $p_1, p_2, \ldots, p_k$.[8]

1.3.2.2.3 Applications in Machine Learning
Not all machine learning algorithms can deal with categorical variables. In feature pre-processing for categorical variables, to enable this handling, the categorical variables are converted to numerical values, and this process of conversion is referred to as variable encoding.
Example: Consider a supermarket having a chain of outlets. Different outlets in a city are of different sizes and are graded as small, medium, and big. A machine learning algorithm cannot deal with such categorical values, i.e. small, medium, and big. In this example, it is a case of multiclass classification with k = 3. Table 1.3 represents the conversion of the above categorical values into numerical values. This process of converting the categorical variables into numeric values is referred to as one hot encoding and is an example of the multinoulli distribution (a pandas-based sketch follows Table 1.3).[6]

1.3.2.3 Poisson Random Variable
When a Bernoulli trial (with a binary outcome, i.e. success or failure) is repeated independently a large number of times (n), the probability of getting success (p) a defined number (m) of times (with no restriction on the sequence/order of the successes) is dealt with by the binomial distribution. In the limit $n \to \infty$ and $p \to 0$, i.e. when the probability of success is infinitesimal, the binomial random variable can be approximated as a Poisson random variable.
Example: In digital data transmission, when a large number of data bits are being transmitted, the computation of the probability of bit error is dealt with by this random variable.

TABLE 1.3
Variable Encoding

Outlet   Size     Small   Medium   Big
1        Big      0       0        1
2        Big      0       0        1
3        Medium   0       1        0
4        Small    1       0        0
5        Medium   0       1        0
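A minimal sketch reproducing Table 1.3 in code (pd.get_dummies is one common way to perform one hot encoding; this is an illustration, not the chapter's own code):

```python
import pandas as pd

outlets = pd.DataFrame({
    "Outlet": [1, 2, 3, 4, 5],
    "Size":   ["Big", "Big", "Medium", "Small", "Medium"],
})

# One hot encoding: one 0/1 column per category, as in Table 1.3
encoded = pd.get_dummies(outlets["Size"]).astype(int)
print(outlets.join(encoded))
```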


Its pmf is $p(X = m) = e^{-\lambda} \frac{\lambda^m}{m!}$, where $\lambda = np$ is a constant, and the corresponding CDF is $F_X(x) = p(X \le x) = \sum_{k=0}^{x} e^{-\lambda} \frac{\lambda^k}{k!}$.[3]
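The limiting approximation can be verified numerically; a sketch comparing the binomial and Poisson pmfs for large n and small p (the values n = 10,000 and p = 0.0002 are arbitrary illustrative choices):

```python
from scipy.stats import binom, poisson

n, p = 10_000, 0.0002  # many trials, tiny success probability
lam = n * p            # lambda = np = 2

for m in range(5):
    # The two columns agree to several decimal places
    print(m, binom.pmf(m, n, p), poisson.pmf(m, lam))
```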

1.4 MOMENTS OF RANDOM VARIABLE
Moments of a random variable are also referred to as its statistical averages. A random variable can have two types of moments: moments about the origin, and moments about the mean (central moments).

1.4.1 MOMENTS ABOUT ORIGIN

The expected value [E(X)], or mean (m), or average value, or expectation of a random variable is referred to as its first moment about the origin and is given as $E(X) = \sum_i x_i\, p(x_i)$ and $E(X) = \int_{-\infty}^{\infty} x f(x)\, dx$,

respectively, for the discrete (categorical) and continuous cases, where $p(x_i)$ is the certainty (probability) with which the random variable takes the value $X = x_i$.
For a random variable, the second moment about the origin is its mean square value $E[X^2]$, and its $n$th moment is $E[X^n]$.[1]

1.4.1.1 Applications in Machine Learning
• Only averages are used in the study of a random variable, as the variable does not take its different values with certainty.
• Linear models are used in regression when the dependent and independent random variables are linearly related.
• The preliminary modelling is referred to as the benchmark model, where the mean of the variable is taken as the solution for the prediction problem.
• Example: prediction of the relation between the experience of a person and the salary.
• The initial modelling will be the benchmark model, taking the mean, i.e. the average of all salaries (the dependent variable), as the solution for the model.
• This may not be acceptable, since people with different experience may have the same salary.
• The model can be improved by introducing curves of the form Y = mX + C, i.e. salary = m (Experience) + C, which is a linear model.
• The values of 'm' and 'C' for which the model gives the best prediction can be obtained from the cost function, which is given as
Cost Function = Mean Square Error (MSE) $= \frac{1}{n} \sum_{i=1}^{n} \left( \hat{s}_i - s_i \right)^2$, where $\hat{s}_i$ and $s_i$ are the predicted and actual $i$th values, respectively.


• This can be referred to as the second moment of the variable $\hat{s}_i - s_i$ about the origin.
• A better model results in a lower value of MSE (a minimal fitting sketch follows).[7,9]
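A minimal fitting sketch on made-up experience/salary data (np.polyfit returns the least-squares m and C, i.e. the minimizers of the MSE cost function):

```python
import numpy as np

experience = np.array([1, 2, 3, 4, 5, 6], dtype=float)    # years (illustrative)
salary = np.array([30, 35, 41, 44, 50, 56], dtype=float)  # in thousands (illustrative)

# Benchmark model: predict the mean salary for everyone
benchmark = np.full_like(salary, salary.mean())
print("benchmark MSE:", np.mean((benchmark - salary) ** 2))

# Linear model salary = m * experience + C, fitted by least squares
m, C = np.polyfit(experience, salary, deg=1)
predicted = m * experience + C
print("linear MSE:", np.mean((predicted - salary) ** 2))  # much lower
```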

1.4.2 MOMENTS ABOUT MEAN
Let $p(X = 2) = p(X = 6) = 0.5$, where X is the random variable. Its first moment about the origin, i.e. its mean, is $m = \sum_i x_i\, p(x_i) = 2 \cdot \frac{1}{2} + 6 \cdot \frac{1}{2} = 4$.
To find the average amount by which the values taken by X differ from its mean (the answer is 2), the first moment about the mean, i.e. the first central moment, $E[(X - m)] = \sum_i (x_i - m)\, p(x_i)$, is defined. But its value is $(2 - 4) \cdot \frac{1}{2} + (6 - 4) \cdot \frac{1}{2} = 0$. Thus, the very purpose of defining the first central moment is not served.

To find the same for X, the second central moment $E\left[(X - m)^2\right] = \sum_i (x_i - m)^2\, p(x_i)$ is defined. Its value for the above X is $(2 - 4)^2 \cdot \frac{1}{2} + (6 - 4)^2 \cdot \frac{1}{2} = 4$. Its positive square root is 2, which is the value required.
Thus, $E\left[(X - m)^2\right]$ is an indication of the average amount of variation of the values taken by the random variable with reference to its mean; hence it is the variance ($\sigma^2$), and the standard deviation ($\sigma$) is its positive square root.
The third central moment of X is $E\left[(X - m)^3\right]$ and is referred to as its skew. The normalized skew, i.e. the coefficient of skewness, is given as $\frac{E\left[(X - m)^3\right]}{\sigma^3}$ and is a measure of the symmetry of the density of X.
A random variable with a density symmetric about its mean will have a zero coefficient of skewness. If more of the values taken by the random variable lie to the right of its mean, the corresponding density function is said to be right-skewed and the respective coefficient of skewness will be positive (>0). Similarly, the left-skewed density function can also be specified, and the respective coefficient of skewness will be negative (<0).
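The worked two-point example can be verified in a few lines (an illustrative helper, not from the chapter; it computes central moments directly from the pmf):

```python
def central_moment(values, probs, n, mean):
    """n-th central moment of a discrete random variable given its pmf."""
    return sum(p * (x - mean) ** n for x, p in zip(values, probs))

values, probs = [2, 6], [0.5, 0.5]
mean = sum(p * x for x, p in zip(values, probs))  # 4.0

var = central_moment(values, probs, 2, mean)   # 4.0 -> sigma = 2
third = central_moment(values, probs, 3, mean) # 0.0 -> symmetric about the mean
print(mean, var, var ** 0.5, third / var ** 1.5)  # coefficient of skewness = 0
```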