Proceedings of Trends in Electronics and Health Informatics: TEHI 2022 9819919150, 9789819919154

This book includes selected peer-reviewed papers presented at the International Conference on Trends in Electronics and Health Informatics (TEHI 2022).


English Pages 505 [506] Year 2023


Table of contents :
Organization
Preface
Contents
About the Editors
Artificial Intelligence and Soft Computing
Experimental Study of High-Frequency Drill String Vibrations Under Different Conditions
1 Introduction
2 Borehole Noise Recorder
3 Measurement Results
4 Conclusion
References
Flexible Systolic Hardware Architecture for Computing a Custom Lightweight CNN in CT Images Processing for Automated COVID-19 Diagnosis
1 Introduction
2 SARS-COV-2 CT Scan Dataset
3 Convolutional Neural Networks (CNNs)
4 Systolic Array
5 Proposed CNN for COVID-19 or non-COVID-19 Classification
6 Proposed Hardware Architecture
6.1 General Description of the Processing Sequence
6.2 Detailed Description of the Key Architecture Modules
7 Experimentation and Results
8 Conclusions
References
Dimensionality Reduction in Handwritten Digit Recognition
1 Introduction
2 Literature Review
3 Methodology
3.1 Dataset Preprocessing
3.2 CNN Architecture
4 Dataset
5 Result Analysis
5.1 Performance Evaluation
5.2 Result Analysis with Other Datasets and Existing Work
6 Conclusion
References
Obtaining Fractal Dimension for Gene Expression Time Series Using an Artificial Neural Network
1 Introduction
2 Materials and Methods
2.1 Protein Concentration Time Series
3 Results
4 Conclusions
References
Grouping by Mixture of Normals for Breast Cancer in Two Groups, Benign and Malignant
1 Introduction
2 Data Analysis
2.1 EBM Algorithm
2.2 Gaussian Mixture Model
2.3 Initial Values
2.4 Stop Criterion
3 Implementation of the Algorithm
3.1 Calculation of the Standard Error through the Bootstrap Method
4 Conclusions
References
A Smart Automation System for Controlling Environmental Parameters of Poultry Farms to Increase Poultry Production
1 Introduction
2 Related Work
2.1 Environmental States in Poultry Farms
2.2 Best Selection of Environmental Parameters for Broilers
3 System Design and Descriptions
3.1 IDE and Microcontroller
3.2 Sensors and Peripherals
3.3 Hardware and Software Setup
3.4 Data Acquisition
4 Result and Discussion
4.1 Proposed Smart System Performance Versus Conventional System
4.2 Measurement of the Performance Efficiency of Broilers in Both Systems Livability
5 Conclusions
References
A New Model Evaluation Framework for Tamil Handwritten Character Recognition
1 Introduction
2 Review of Recent Literature
3 Proposed Methodology
3.1 A Nine-Layered Convolutional Neural Network for Tamil HWCR
3.2 Transfer Learning: The VGG16 Model
4 Experimentation
4.1 The Tamil Benchmark Datasets
4.2 Performance Metrics
4.3 Implementation
5 Results and Analysis
6 Conclusion
References
Integrated Linear Regression and Random Forest Framework for E-Commerce Price Prediction of Pre-owned Vehicle
1 Introduction
2 Related Work
3 Research Gap
4 Problem Formulation and Methodology
4.1 Machine Learning
4.2 Process Flow
5 Model Formulation and Result Matrix
6 Conclusion and Discussion
References
Personalized Recommender System for House Selection
1 Introduction
1.1 Challenges with a House Selection
1.2 The Need of Recommender System for House Selection
2 Literature Survey
3 Recommender System Using MCDM Method
3.1 TOPSIS (the Technique for an Order of Preference by Similarity to Ideal Solution)
4 Methodology
5 Conclusion
References
Healthcare Informatics
Epileptic Seizure Detection from EEG Signal Using ANN-LSTM Model
1 Introduction
2 Literature Review
3 Methodology
3.1 Dataset Collection and Description
3.2 Data Preprocessing
3.3 Artificial Neural Network (ANN)
3.4 Long Short-Term Memory (LSTM)
3.5 Proposed ANN-LSTM Model
3.6 Assessment Metrics
4 Results and Discussions
4.1 Experimental Setup
4.2 Result Analysis
5 Conclusion
References
Cognitive Assessment and Trading Performance Correlations
1 Introduction
2 Materials and Methods
2.1 Participants
2.2 Apparatus
2.3 Procedure
2.4 Measures
2.5 Statistical Analysis
3 Results
3.1 3D-MOT and Trading Scores Correlations
4 Discussion
5 Conclusions
References
Molecular Docking Study of Oxido-Vanadium Complexes with Proteins Involved in Breast Cancer
1 Introduction
2 Methodology
3 Results
4 Conclusion
References
Multi-Level Stress Detection using Ensemble Filter-based Feature Selection Method
1 Introduction
2 Related Work
3 Dataset Used
4 Proposed Methodology
4.1 Data Pre-processing
4.2 Feature Extraction
4.3 Feature Selection
4.4 Proposed Ensemble-Based Feature Selection Model
4.5 Classification
5 Results and Discussions
6 Conclusion
References
A Hybrid Transfer Learning and Segmentation Approach for the Detection of Acute Lymphoblastic Leukemia
1 Introduction
2 Related Works
2.1 Traditional Machine Learning and Image Processing Methods
2.2 Deep Learning and Hybrid Approaches
3 Methodology
3.1 Data Acquisition
3.2 Object Detection Dataset
3.3 Data Pre-processing
3.4 Segmentation Pipeline
3.5 ALL Object Detection Methodology Overview
4 Results
4.1 Performance and Discussion
4.2 ALL-Detection Results and GUI
5 Conclusion
References
Logistic Regression Approach to a Joint Classification and Feature Selection in Lung Cancer Screening Using CPRD Data
1 Introduction
2 Methods
2.1 Feature Engineering
2.2 Imbalanced Classification
2.3 Exploratory Data Analysis
2.4 Classification
2.5 Feature Selection
3 Experiment and Results
3.1 Exploratory Data Analysis
3.2 Classification and Feature Selection Analysis
4 Discussion and Conclusion
References
HI Applications for ADHD Children: A Case for Enhanced Visual Representations Using Novel and Adapted Guidelines
1 Introduction
2 Methodology
2.1 Review of the Literature
2.2 Interviewing and Surveying Method
2.3 Administering Design Task Method
2.4 Analyzing and Reporting
3 Reviewing of Literature
4 Interviewing and Surveying
5 Administering Design Tasks
6 Analyzing and Reporting
7 Conclusion
References
IoT and Data Analytics
Trimmed-TDL-Based Time-to-Digital Converter for Time-of-Flight Applications Implemented on Cyclone V FPGA
1 Introduction
2 Hardware Architecture
2.1 Encoder
3 Experimental Results
4 Conclusion
References
CO2 Monitoring System to Warn of Possible Risk of Spread of COVID-19 in Classrooms
1 Introduction
2 Purpose and Focus of the Project in Combating COVID-19 Infections Indoor
3 State of the Art
4 Development
4.1 Embedded System
5 MQTT: The Messaging and Data Exchange Protocol of the IoT
6 Reception, Storage, and Visualization of the Data Obtained
7 Tests and Results
8 Audio Notifications
9 Device to Reduce CO2 levels
10 Obtained Data Table
11 Conclusions and Future Work
References
High-Power Analysis for Outage Probability and Average Symbol Error Probability over Non-identical κ-µ Double Shadowed Fading
1 Introduction
2 System Model
2.1 Origin PDF with MR Combining Diversity
2.2 Origin PDF with EG Combining Diversity
2.3 Origin PDF with Selection Combining Diversity
3 Digital Communication System Performance Metrics
3.1 Outage Probability
3.2 Average SEP
4 Numerical Analysis
5 Conclusion
References
Hjorth Parameters in Event-Related Potentials to Detect Minimal Hepatic Encephalopathy
1 Introduction
1.1 Diagnostic
1.2 Event-Related Potentials (ERP)
1.3 The P300 Wave and the Oddball Paradigm
1.4 The Hjorth Parameters
2 Methods
2.1 Study Population
2.2 Auditory Stimulation
2.3 Removal of Artifacts
2.4 Segmentation, Baseline Correction, and Averaging
2.5 Feature Extraction of the P300 Wave
2.6 Statistical Analysis
3 Results
4 Discussion
5 Conclusions
References
The V-Band Substrate Integrated Waveguide Antenna for MM Wave Application
1 Introduction
2 Design of Suggested SIW Antenna
3 Discussion and Results
3.1 S11 (Reflection Coefficient) of the Proposed SIW Antenna
3.2 Directivity of the Proposed V-Band Substrate Integrated Waveguide Antenna
3.3 H-Plane and E-Plane Pattern of the Proposed SIW Antenna
4 Conclusion
References
The Influence of an Extended Optical Mode on the Performance of Microcavity Forced Oscillator
1 Introduction
2 Theorical Formalism
2.1 Lorentz Force Densities
2.2 Mechanical Model for Mechanical Oscillations
3 Photonic Microcavity
4 Results and Discussion
5 Conclusions
References
Synthesis and Characterization of Fe3O4@SiO2 Core/shell Nanocomposite Films
1 Introduction
2 Experimental
2.1 Materials
2.2 Synthesis of Fe3O4 NPs
2.3 Core/Shell Fe3O4@SiO2 Preparation
2.4 Synthesis of the Films of the Fe3O4@SiO2 Composites
2.5 Characterization Techniques
3 Results and Discussions
4 Conclusions
References
Optical and Structural Study of a Fibonacci Structure Manufactured by Porous Silicon and Porous SiO2
1 Introduction
2 Materials and Methods
2.1 Fabrication of Porous Silicon and Si-SiO2 Porous Periodic/Quasiperiodic Structure
3 Results and Discussions
3.1 Reflectance Spectrum of a Periodic/Quasiperiodic Structure
3.2 Morphologic Characterization of a Periodic/Quasiperiodic Structure
4 Conclusions
5 Competing Interests
References
Electronics and Communication
Amyloid-β Can Form Fractal Antenna-Like Networks Responsive to Electromagnetic Beating and Wireless Signaling
1 Introduction
2 Results and Discussions
2.1 Live Visualization of Temporal Disintegration of Aβ
2.2 Spontaneous Emergent Communication in Two Distinct Time Domains
2.3 Cooperative Beating in Laser Interferometry
3 Conclusion
4 Competing Interests Statement
Annexure I
References
How Does Microtubular Network Assists in Determining the Location of Daughter Nucleus: Electromagnetic Resonance as Key to 3D Geometric Engineering
1 Introduction
1.1 The Natural Magnetic Field of Biomaterials
1.2 When There is no Centriole: PCM Dynamics is Similar to Centriole, It Reflects, Transforms, Do not Destroy Centriole Dynamics
1.3 Basic Mathematics to Support the Splitting of the Electromagnetic Field at Resonance
1.4 Centriole Has the Closed Loop and Spiral (in/out) Pathways of the Electric and Magnetic Fields, Respectively
1.5 Outline of the Current Study
2 Methodology
2.1 Theoretical Study Design
2.2 Experimental Methods for centriole's Global Positioning System
3 Results and Discussions
3.1 Resonance Characteristic of Five MTOCs
3.2 Study of a Pair of Centriole Assemblies: The Spherical Coordinate System
3.3 Triplet of Cells: How Left–right Symmetry and Directivity of Coupled Cells Are Born
3.4 A Review of the Spherical Coordinate System (Fig. 1c)
3.5 3D Cell–Matrix Study: Orientation Angle Shift and Linear Shifts in the Spherical Coordinate System
3.6 Protein's Electromagnetic Resonance is a Nearly a Century-Old Concept
4 Conclusion
References
Computational Study of the Contribution of Nucleoside Conformations to 3D Structure of DNA
1 Introduction
2 Method
2.1 Molecular Mechanics
2.2 Quantum Mechanics
3 Results
3.1 Energy Minima of Deoxynucleosides Corresponding to Syn Sugar-Base Orientation
3.2 Energy Minima of Deoxynucleosides Corresponding to Three Conformational Classes of dDMPs
4 Discussion
References
Computational Study of Absorption and Emission of Luteolin Molecule
1 Introduction
2 Computational Methodology
3 Results
3.1 Enol and Keto Configurations of the Luteolin Molecule
3.2 Absorption and Emission of the Enol O5 Configuration of the Luteolin Molecule
3.3 Emission of the Keto Configuration of the Luteolin Molecule
4 Conclusions
References
Efficiency of Molecular Mechanics as a Tool to Understand the Structural Diversity of Watson–Crick Duplexes
1 Introduction
2 Methods
2.1 Method of Molecular Mechanics
2.2 Preparation and Characterization of DNA Minimal Fragments
3 Results
3.1 Conformational Classes of DNA with Similar Characteristics
3.2 BA09 Conformational Class of DNA
4 Discussion
References
Conformational Changes of Drew–Dickerson Dodecamer in the Presence of Caffeine
1 Introduction
2 Methods
2.1 Selection of the Drew–Dickerson's Dodecamer Structure
2.2 Selection and Characterization of the Caffeine Molecule Structure
2.3 Molecular Docking
2.4 Molecular Dynamics Simulation
3 Results and Discussion
3.1 Molecular Docking
3.2 Molecular Dynamics Study
4 Conclusions
References
An Adaptive Replacement Strategy LWIRR for Shared Last Level Cache L3 in Multi-core Processors
1 Introduction
2 Literature Review
3 Proposed Work
3.1 Proposed Adaptive Replacement Technique LWIRR
3.2 Working of LWIRR Algorithm
4 Simulation and Testing
5 Performance Analysis
5.1 HIT Rate Analysis
5.2 Execution Time Analysis
6 Conclusion and Future Work
References
ZnO Nanoparticles Tagged Drug Delivery System
1 Introduction
2 Cyclic Voltammetric Analysis of ZnO-Citicholine (Specifically Ecospirin Inside the Capsule of Citicholine)
2.1 Aspirin Hydrolysis into Salicylic Acid and Acetic Acid
2.2 Atorvastatin Oxidation
2.3 Clopidogrel Oxidation
3 Cyclic Voltammetric Analysis of ZnO-Citicholine Pharmaceutical Formulation (White Crystalline Powder Inside the Capsule)
4 Cyclic Voltammetric Analysis of ZnO-Citicholine Along with the Ecospirin Tablet Inside
5 In-Vitro Analysis of the Drug Tagged to ZnO on Human Blood Sample and Conclusions
5.1 Complete Blood Count
5.2 Leucocytes
5.3 Platelets
5.4 Prothrombin Time
5.5 Path Forward
References
Design of a Development Board Based on the Microcontroller ATmega328P, Including a Symmetric Low-Noise Voltage Source
1 Background
1.1 Motivation
2 Development
2.1 Analog Block
2.2 Digital Block
3 Results
3.1 Software
3.2 Biomedical Application
4 Discussion
5 Conclusions
References
Performance Assessment of N+ SiGe-Based Dielectrically Modulated Vertical Tunnel Field-Effect Transistors (DM-VTFET) for Lower Power Biomedical Application
1 Introduction
2 Simulation Models and Device Structure
3 Results and Discussion
4 Conclusion
References
Information Field Experimental Test in the Human Realm: An Approach Using Faraday Shielding, Physical Distance and Autonomic Balance Multiple Measurements
1 Introduction
2 Methods and Instruments
2.1 Experimental Design
2.2 Instruments
2.3 Data Normalization
2.4 Statistical Analysis
3 Statistical Results
4 Qualitative Analysis
5 Discussion
6 Conclusions
References
Author Index

Lecture Notes in Networks and Systems 675

Mufti Mahmud · Claudia Mendoza-Barrera · M. Shamim Kaiser · Anirban Bandyopadhyay · Kanad Ray · Eduardo Lugo   Editors

Proceedings of Trends in Electronics and Health Informatics TEHI 2022

Lecture Notes in Networks and Systems Volume 675

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Türkiye Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

Mufti Mahmud · Claudia Mendoza-Barrera · M. Shamim Kaiser · Anirban Bandyopadhyay · Kanad Ray · Eduardo Lugo Editors

Proceedings of Trends in Electronics and Health Informatics TEHI 2022

Editors Mufti Mahmud Nottingham Trent University Nottingham, UK M. Shamim Kaiser Jahangirnagar University Dhaka, Bangladesh Kanad Ray Amity University Rajasthan Jaipur, India

Claudia Mendoza-Barrera College of Physical and Mathematical Sciences Meritorious Autonomous University of Puebla Puebla, Mexico Anirban Bandyopadhyay National Institute for Materials Science Tsukuba, Japan Eduardo Lugo Faubert Lab School of Optometry University of Montreal Montreal, QC, Canada

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-981-99-1915-4 ISBN 978-981-99-1916-1 (eBook) https://doi.org/10.1007/978-981-99-1916-1 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Organization

Steering Committee Anirban-Bandyopadhyay, National Institute for Materials Science, Japan Anirban Dutta, The State University of New York at Buffalo, USA Chi-Sang Poon, Massachusetts Institute of Technology, USA J. E. Lugo, University of Montreal, Canada Jocelyn Faubert, University of Montreal, Canada Kanad Ray, Amity University Rajasthan, India Luigi M. Caligiuri, The University of Calabria, Italy Mufti Mahmud, Nottingham Trent University, UK M. Shamim Kaiser, Jahangirnagar University, Bangladesh Subrata Ghosh, Northeast Institute of Science and Technology, India Shamim Al Mamun, Jahangirnagar University, Bangladesh

General Chairs Martha A. Palomino-Ovando, FCFM-BUAP, Mexico Mufti Mahmud, Nottingham Trent University, UK Kanad Ray, Amity University Rajasthan, India

Programme Chairs Anirban Bandyopadhyay, NIMS, Japan J. Eduardo Lugo, University of Montreal, Canada M. Toledo-Solano, FCFM-BUAP, Mexico


Conference Secretary C. Mendoza-Barrera, FCFM-BUAP, Mexico M. Shamim Kaiser, Jahangirnagar University, Bangladesh

Workshop/Special Session Chairs Cosimo Ieracitano, University of Reggio Calabria, Italy V. N. Manjunath Aradhya, JSS University, India Noushath Shaffi, College of Applied Sciences, Oman

Tutorial Chairs M. Arifur Rahman, Nottingham Trent University, UK Tanu Wadhera, IIIT Una, India

Publication Chairs J. Eduardo Lugo, University of Montreal, Canada Aldo Yair Tenorio-Barajas, FCFM-BUAP, Mexico Siva Kumar Krishnan, IF-BUAP, Mexico

Publicity Chairs Shamim Al-Mamun, Jahangirnagar University, Bangladesh Abzetdin Adamov, ADA University, Azerbaijan Tianhua Chen, University of Huddersfield, UK Martha A. Palomino-Ovando, FCFM-BUAP, Mexico M. Toledo-Solano, FCFM-BUAP, Mexico C. Mendoza-Barrera, FCFM-BUAP, Mexico

Finance Chairs Martha A. Palomino-Ovando, FCFM-BUAP, Mexico


Registration Chairs Abraham N. Meza-Rocha, FCFM-BUAP, Mexico Areli Montes-Pérez, FCFM-BUAP, Mexico IraÍs Bautista-Guzmán, FCFM-BUAP, Mexico Maria Isabel Pedraza-Morales, FCFM-BUAP, Mexico

Local Organising Chairs José J. Gervacio-Arciniega, FCFM-BUAP, Mexico Severino Muñoz-Aguirre, IF-BUAP, Mexico

Website and Social Media Chairs Mónica Macías Peréz, FCFM-BUAP, Mexico Luis Fernando Hernández Sánchez, FCC-BUAP, Mexico

Technical Program Committee A. K. M. Fazlul Haque, DIU, Bangladesh Abraham N. Meza-Rocha, FCFM-BUAP, Mexico Abzetdin Adamov, ADA University, Azerbaijan Aldo Yair Tenorio-Barajas, FCFM-BUAP, Mexico Alexandra Deriabina, FCFM-BUAP, Mexico Ángel Daniel Santana-Vargas, HGEL-RD, Mexico Anirban Bandyopadhyay, NIMS, Japan Anita Garhwal, Sweden Antesar Ramadan M. Shabut, Leeds Trinity University, UK Areli Montes-Pérez, FCFM-BUAP, Mexico Argelia Pérez Pacheco, HGEL-UIyDT, Mexico Aron Laszka, University of Houston, USA Arumona Edward Arumona, TDTU, Vietnam Benito de Celis-Alonso, FCFM-BUAP, Mexico C. Mendoza-Barrera, FCFM-BUAP, Mexico Chee-Ming Ting, UTM, Malaysia Chi-Sang Poon, MIT, USA Cosimo Ieracitano, University of Reggio Calabria, Italy Eduardo Moreno-Barbosa, FCFM-BUAP, Mexico


Efrain Rubio-Rosas, DITCO-BUAP, Mexico Elena Hernandez-Caballero, FM-BUAP, Mexico Esra Gov, Adana Alparslan Turkes Science and Technology University, Turkey Farhana Sarker, University of Liberal Arts, Bangladesh Fida Hasan, RMIT University, Australia Hadri Hussain, UTM, Malaysia Hussain Nyeem, MIST, Bangladesh Irais Bautista-Guzmán, FCFM-BUAP, Mexico J. Eduardo Lugo, University of Montreal, Canada Jocelyn Faubert, University of Montreal, Canada Jose J. Gervacio-Arciniega, FCFM-BUAP, Mexico Kanad Ray, Amity University Rajasthan, India Kashayar Misaghian, Okinawa Institute of Science and Technology, Japan M. Murugappan, Kuwait College of Science and Technology, Kuwait M. Shamim Kaiser, Jahangirnagar University, Bangladesh M. Toledo-Solano, FCFM-BUAP, Mexico Manjunath Aradhya, JSS University, India Marcos Faundez-Zanuy, Escola Superior Politécnica Tecnocampus, Spain Maria Isabel Pedraza-Morales, FCFM-BUAP, Mexico Martha A. Palomino-Ovando, FCFM-BUAP, Mexico Marzia Hoque Tania, University of Oxford, UK Md. Abu Yousuf, JU, Bangladesh Md. Obaidur Rahman, DUET, Bangladesh Md. Sazzadur Rahman, JU, Bangladesh Md. Wahiduzzaman, UNSW, AUS Md. Abdul Alim, Khulna University, Bangladesh Md. Abu Layek, Jagannath University, Bangladesh Mehdi Sookhak, Texas A&M University-Corpus Christi, USA Mohammad Ali Moni, University of Queensland, Australia Mohammad Sajjad Ghaemi, NRC-Fields Mathematical Sciences Collaboration Centre, Canada Mohammad Shamsul Arefin, CUET, Bangladesh Mufti Mahmud, Nottingham Trent University, UK Muhammad Arif Jalil, UTM, Malaysia Muhammad Golam Kibria, University of Liberal Arts, Bangladesh Muhammad Nazrul Islam, MIST, Bangladesh Nabeel Mohammed, North South University, Bangladesh Nadia Mammone, University Mediterranea of Reggio Calabria, Italy Nasima Begum, University of Asia Pacific, Bangladesh Noushath Shaffi, College of Applied Sciences, Oman Omprakash Kaiwartya, Nottingham Trent University, UK Pushpendra Singh, NIMS, Japan Ramani Kannan, Universiti Teknologi PETRONAS, Malaysia Rashed Majumder, JU, Bangladesh Roberto Giovanni-Chavarría, UNAM-IE, Mexico


Rubén Conde Sánchez, FCFM-BUAP, Mexico S. M. Riazul Islam, Sejong University, South Korea Severino Muñoz-Aguirre, IF-BUAP, Mexico Shamim Al Mamun, Jahangirnagar University, Bangladesh Shariful Islam, Deakin University, Australia Sheikh Hussain Shaikh Salleh, Heal Ultra PLT, Malaysia Siva Kumar Krishnan, IF-BUAP, Mexico Subrata Ghosh, CSIR Northeast Institute of Science and Technology, India Tawfik Al-Hadhrami, Nottingham Trent University, UK Tianhua Chen, University of Huddersfield, UK Victor M. Altuzar, FCFM-BUAP, Mexico


Preface

The 2nd International Conference on Trends in Electronics and Health Informatics (TEHI 2022) took place from December 7 to 9, 2022 at the Meritorious Autonomous University of Puebla in Puebla, Mexico. TEHI 2022 focuses on experimental, theoretical, and applicable aspects of technology-driven innovations in Healthcare, Biomedicine, Artificial Intelligence, and Electronics.

Electronics and healthcare informatics have emerged in the healthcare domain providing an extremely wide variety of solutions using computational techniques. Healthcare informatics covers a spectrum of diverse topics that include the study of the design, development, and applications of computational techniques to improve healthcare. In academia, medical informatics research focuses on applications of computational techniques in healthcare and on designing medical devices based on embedded systems. Medical informatics also includes modern applications of neuro-informatics and cognitive informatics in the field of brain research. In technical fields such as computer engineering, software engineering, bio-inspired computing, theoretical computer science, data science, autonomic computing, and behavior informatics, along with electronics, researchers are working to provide reliable solutions for diagnosis and treatment. The Conference aims to provide an opportunity to gather all these researchers, scholars, and experts from academia and industry working in the streams of basic and applied sciences, engineering, and technology to share their research findings.

TEHI 2022 attracted 103 full papers from 12 countries in four tracks—Artificial Intelligence and Soft Computing, Healthcare Informatics, IoT and Data Analytics, and Electronics. The conference's objective is to bring together researchers, educators, and business professionals involved in related fields of research and development. This volume compiles the peer-reviewed and accepted papers presented at the meeting. The submitted papers underwent a single-blind review process, soliciting expert opinions from at least two independent reviewers, the track co-chair, and the respective track chair. The technical program committee selected 35 high-quality full papers from 11 countries for presentation at the conference, based on the recommendations of the reviewers and the track chairs. The event was hosted in a hybrid format and the response from the research community was remarkable.

For those with an interest in Electronics and Health Informatics, this volume will be a treasure of information. We would like to express our gratitude to the Organizing Committee and the Technical Committee members for their unconditional support, particularly the Chairs, the Co-chairs, and the Reviewers. TEHI 2022 could not have taken place without the tremendous work of the team and their gracious assistance. We are grateful to Mr. Aninda Bose, the production team, and other team members of Springer Nature for their continuous support in coordinating this volume's publication. Last but not the least, we thank all of our contributors and volunteers for their support during this challenging time to make TEHI 2022 a success.

Nottingham, UK
Puebla, Mexico
Dhaka, Bangladesh
Tsukuba, Japan
Jaipur, India
Montreal, Canada
January 2023

Mufti Mahmud
Claudia Mendoza-Barrera
M. Shamim Kaiser
Anirban Bandyopadhyay
Kanad Ray
Eduardo Lugo

Contents

Artificial Intelligence and Soft Computing

Experimental Study of High-Frequency Drill String Vibrations Under Different Conditions . . . 3
Vladimir Bakhtin, Mikhail Deryabin, Dmitry Kasyanov, Sergey Manakov, and Denis Shakurov

Flexible Systolic Hardware Architecture for Computing a Custom Lightweight CNN in CT Images Processing for Automated COVID-19 Diagnosis . . . 17
Paulo Aarón Aguirre-Alvarez, Javier Diaz-Carmona, and Moisés Arredondo-Velázquez

Dimensionality Reduction in Handwritten Digit Recognition . . . 35
Mayesha Bintha Mizan, Muhammad Sayyedul Awwab, Anika Tabassum, Kazi Shahriar, Mufti Mahmud, David J. Brown, and Muhammad Arifur Rahman

Obtaining Fractal Dimension for Gene Expression Time Series Using an Artificial Neural Network . . . 51
Marco Antonio Esperón Pintos, Jorge Velázquez Castro, and Benito de Celis Alonso

Grouping by Mixture of Normals for Breast Cancer in Two Groups, Benign and Malignant . . . 63
Gerardo Martínez Guzmán, María Beatriz Bernábe Loranca, Rubén Martínez Mancilla, Carmen Cerón Garnica, and Gerardo Villegas Cerón

A Smart Automation System for Controlling Environmental Parameters of Poultry Farms to Increase Poultry Production . . . 79
Md. Kaimujjaman, Md. Mahabub Hossain, and Mst. Afroza Khatun


A New Model Evaluation Framework for Tamil Handwritten Character Recognition . . . 93
B. R. Kavitha, Noushath Shaffi, Mufti Mahmud, Faizal Hajamohideen, and Priyalakshmi Narayanan

Integrated Linear Regression and Random Forest Framework for E-Commerce Price Prediction of Pre-owned Vehicle . . . 107
Amit Kumar Mishra, Saurav Mallik, Viney Sharma, Shweta Paliwal, and Kanad Ray

Personalized Recommender System for House Selection . . . 117
Suneeta Mohanty, Shweta Singh, and Prasant Kumar Pattnaik

Healthcare Informatics

Epileptic Seizure Detection from EEG Signal Using ANN-LSTM Model . . . 129
Redwanul Islam, Sourav Debnath, Reana Raen, Nayeemul Islam, Torikul Islam Palash, and S. K. Rahat Ali

Cognitive Assessment and Trading Performance Correlations . . . 143
J. Eduardo Lugo and Jocelyn Faubert

Molecular Docking Study of Oxido-Vanadium Complexes with Proteins Involved in Breast Cancer . . . 151
Lisset Noriega, María Eugenia Castro, Norma A. Caballero, Gabriel Merino, and Francisco J. Melendez

Multi-Level Stress Detection using Ensemble Filter-based Feature Selection Method . . . 161
Arham Reza, Pawan Kumar Singh, Mufti Mahmud, David J Brown, and Ram Sarkar

A Hybrid Transfer Learning and Segmentation Approach for the Detection of Acute Lymphoblastic Leukemia . . . 175
Ang Jia Hau, Nazia Hameed, Adam Walker, and Md. Mahmudul Hasan

Logistic Regression Approach to a Joint Classification and Feature Selection in Lung Cancer Screening Using CPRD Data . . . 191
Yuan Shen, Jaspreet Kaur, Mufti Mahmud, David J. Brown, Jun He, Muhammad Arifur Rahman, David R. Baldwin, Emma O’Dowd, and Richard B. Hubbard

HI Applications for ADHD Children: A Case for Enhanced Visual Representations Using Novel and Adapted Guidelines . . . 207
Sandesh Sanjeev Phalke and Abhishek Shrivastava


IoT and Data Analytics

Trimmed-TDL-Based Time-to-Digital Converter for Time-of-Flight Applications Implemented on Cyclone V FPGA . . . 229
Moisés Arredondo-Velázquez, Lucio Rebolledo-Herrera, Javier Hernandez-Lopez, and Eduardo Moreno-Barbosa

CO2 Monitoring System to Warn of Possible Risk of Spread of COVID-19 in Classrooms . . . 245
Yair Romero López, Ricardo Álvarez González, Rodrigo Lucio Maya Ramírez, and Alba Maribel Sánchez Gálvez

High-Power Analysis for Outage Probability and Average Symbol Error Probability over Non-identical κ-μ Double Shadowed Fading . . . 257
Puspraj Singh Chauhan, Sandeep Kumar, Ankit Jain, and Raghvendra Singh

Hjorth Parameters in Event-Related Potentials to Detect Minimal Hepatic Encephalopathy . . . 267
Luis Fernando Caporal-Montes de Oca, Ángel Daniel Santana-Vargas, Roberto Giovanni Ramírez-Chavarría, Khashayar Misaghian, Jesus Eduardo Lugo-Arce, and Argelia Pérez-Pacheco

The V-Band Substrate Integrated Waveguide Antenna for MM Wave Application . . . 281
Shailendra Kumar Sinha, Raghvendra Singh, and Himanshu Katiyar

The Influence of an Extended Optical Mode on the Performance of Microcavity Forced Oscillator . . . 289
H. Avalos-Sánchez, E. Y. Hernández-Méndez, E. Nieto-Ruiz, A. J. Carmona, M. A. Palomino-Ovando, M. Toledo-Solano, Khashayar Misaghian, Jocelyn Faubert, and J. Eduardo Lugo

Synthesis and Characterization of Fe3O4@SiO2 Core/shell Nanocomposite Films . . . 299
A. J. Carmona Carmona, H. Avalos-Sánchez, E. Y. Hernández-Méndez, M. A. Palomino-Ovando, K. Misaghian, J. E. Lugo, J. J. Gervacio-Arciniega, and M. Toledo-Solano

Optical and Structural Study of a Fibonacci Structure Manufactured by Porous Silicon and Porous SiO2 . . . 311
María R. Jiménez Vivanco, Raúl Herrera Becerra, Miller Toledo Solano, Khashayar Misaghian, and J. E. Lugo


Electronics and Communication

Amyloid-β Can Form Fractal Antenna-Like Networks Responsive to Electromagnetic Beating and Wireless Signaling . . . 323
Komal Saxena, Pushpendra Singh, Parama Dey, Marielle Aulikki Wälti, Pathik Sahoo, Subrata Ghosh, Soami Daya Krishnanda, Roland Riek, and Anirban Bandyopadhyay

How Does Microtubular Network Assists in Determining the Location of Daughter Nucleus: Electromagnetic Resonance as Key to 3D Geometric Engineering . . . 345
Pushpendra Singh, Komal Saxena, Parama Dey, Pathik Sahoo, Kanad Ray, and Anirban Bandyopadhyay

Computational Study of the Contribution of Nucleoside Conformations to 3D Structure of DNA . . . 373
J. A. Piceno, A. Deriabina, E. González, and V. Poltev

Computational Study of Absorption and Emission of Luteolin Molecule . . . 385
E. Delgado, A. Deriabina, G. D. Vazquez, T. Prutskij, E. Gonzalez, and V. Poltev

Efficiency of Molecular Mechanics as a Tool to Understand the Structural Diversity of Watson–Crick Duplexes . . . 393
Andrea Ruiz, Alexandra Deriabina, Eduardo Gonzalez, and Valeri Poltev

Conformational Changes of Drew–Dickerson Dodecamer in the Presence of Caffeine . . . 405
César Morgado, Alexandra Deriabina, Eduardo Gonzalez, and Valeri Poltev

An Adaptive Replacement Strategy LWIRR for Shared Last Level Cache L3 in Multi-core Processors . . . 415
Narottam Sahu, Banchhanidhi Dash, Prasant Kumar Pattnaik, and Anjan Bandyopadhyay

ZnO Nanoparticles Tagged Drug Delivery System . . . 427
S. Harinipriya and Kaushik A. Palicha

Design of a Development Board Based on the Microcontroller ATmega328P, Including a Symmetric Low-Noise Voltage Source . . . 449
Valentina Bastida Montiel and Marco Gustavo S. Estrada


Performance Assessment of N+ SiGe-Based Dielectrically Modulated Vertical Tunnel Field-Effect Transistors (DM-VTFET) for Lower Power Biomedical Application . . . 463
Shailendra Singh, Ankit Jain, and Balwinder Raj

Information Field Experimental Test in the Human Realm: An Approach Using Faraday Shielding, Physical Distance and Autonomic Balance Multiple Measurements . . . 475
Erico Azevedo, José Pissolato Filho, and Wanderley Luiz Tavares

Author Index . . . 499

About the Editors

Dr. Mufti Mahmud is an associate professor of Cognitive Computing at the Computer Science Department of Nottingham Trent University (NTU), UK. He has been the recipient of the top 2% cited scientists worldwide in computer science (2020), the NTU VC outstanding research award 2021, and the Marie-Curie postdoctoral fellowship. Dr. Mahmud is the coordinator of the Computer Science and Informatics research excellence framework unit of assessment at NTU and the deputy group leader of the Cognitive Computing and Brain Informatics and the Interactive Systems research groups. His research portfolio consists of GBP 3.3 million grant capture with expertise that includes brain informatics, computational intelligence, applied data analysis, and big data technologies focusing on healthcare applications. He has over 15 years of academic experience and over 200 peer-reviewed publications. Dr. Mahmud is the general chair of the Brain Informatics conference 2020, 2021, and 2022; Applied Intelligence and Informatics conference 2021 and 2022; Trends in Electronics and Health Informatics 2022; a chair of the IEEE CICARE symposium since 2017 and was the local organizing chair of the IEEE WCCI 2020. He will serve as one of the general chairs of the 31st edition of the ICONIP conference to be held in Auckland (NZ) in 2024. He is the section editor of the Cognitive Computation, the regional editor (Europe) of the Brain Informatics Journal, and an associate editor of the Frontiers in Neuroscience. During the year 2021–2022, Dr. Mahmud has been serving as the vice-chair of the Intelligent System Application and Brain Informatics Technical Committees of the IEEE Computational Intelligence Society (CIS), a member of the IEEE CIS Task Force on Intelligence Systems for Health, an advisor of the IEEE R8 Humanitarian Activities Subcommittee, the publications chair of the IEEE UK and Ireland Industry Applications Chapter, and the project liaison officer of the IEEE UK and Ireland SIGHT Committee, the secretary of the IEEE UK and Ireland CIS Chapter, and the social media and communication officer of the British Computer Society’s Nottingham and Derby Chapter.


Claudia Mendoza-Barrera is a scientific-researcher at the College of Physical and Mathematical Sciences of the Meritorious Autonomous University of Puebla (BUAP, 2016 to present). Her research focuses on the manufacture, characterization, and bioconjugation of biomaterials at micro and nanoscales through physical and chemical routes for biomedical applications, as well as the manufacture and characterization of biosensors of specific affinity. She was a visiting scholar at NESAC/BIO, Chemical Engineering, and Bioengineering Department, University of Washington (2004–2006, Seattle, USA). She received her Ph.D. (2002), M.Sc. (1996), and bachelor’s (1995) degrees in Physics in the areas of Biomaterials and Theoretical Physics, respectively, from the Department of Physics of the Center for Research and Advanced Studies of the National Polytechnic Institute (CINVESTAVIPN, Mexico) and BUAP (Mexico). She was a fellow of the Organization of American States for postdoctoral studies abroad (2004–2006) and the president and editor of the board of the Mexican Society of Science and Technology of Surfaces and Materials (2013–2014 and 2011–2012, respectively) and Mexican representative of the IUVSTA (2014–2016). She has been a member of the Mexican System of Researchers since 2003; she supervised over 30 doctoral, master’s, bachelor’s, and engineering thesis; she has more than 35 indexed articles and peer-reviewed publications as first author or responsible author and the presentation of more than 100 works in national and international congresses as invited talk, talk or poster. Dr. M. Shamim Kaiser is currently working as a professor at the Institute of Information Technology of Jahangirnagar University, Savar, Dhaka-1342, Bangladesh. He received his bachelor’s and master’s degrees in Applied Physics Electronics and Communication Engineering from the University of Dhaka, Bangladesh, in 2002 and 2004, respectively, and the Ph.D. degree in Telecommunication Engineering from the Asian Institute of Technology, Thailand, in 2010. His current research interests include data analytics, machine learning, wireless network and signal processing, cognitive radio network, big data and cyber-security, and renewable energy. He has authored more than 100 papers in different peer-reviewed journals and conferences. He is an associate editor of the IEEE Access Journal, guest editor of Brain Informatics Journal and Cognitive Computation Journal. Dr. Kaiser is a life member of Bangladesh Electronic Society, Bangladesh Physical Society. He is also a senior member of IEEE, USA, and IEICE, Japan, and an active volunteer of the IEEE Bangladesh Section. He is the founding chapter chair of the IEEE Bangladesh Section Computer Society Chapter. Dr. Kaiser organized various international conferences such as ICEEICT 2015–2018, IEEE HTC 2017, IEEE ICREST 2018, BI2020. Anirban Bandyopadhyay is a senior scientist in the National Institute for Materials Science (NIMS), Tsukuba, Japan. He received his Ph.D. from Indian Association for the Cultivation of Science (IACS), Kolkata, 2005, December, on supramolecular electronics. From 2005 to 2007, he was ICYS Research Fellow NIMS, Japan, and, 2007, is now a permanent scientist in NIMS, Japan. He has ten patents on building artificial organic brain, big data, molecular bot, cancer and Alzheimer drug, fourth circuit element, etc. From 2013 to 2014, he was a visiting scientist


in MIT, USA, on biorhythms. He worked in World Technology Network, as a WTN fellow, (2009–continued); he received Hitachi Science and Technology Award 2010, Inamori Foundation Award 2011–2012, Kurata Foundation Award, Inamori Foundation Fellow (2011), Sewa Society International SSS Fellow (2012), Japan; SSI Gold medal (2017). Kanad Ray (senior member, IEEE) received the M.Sc. degree in physics from Calcutta University and the Ph.D. degree in physics from Jadavpur University, West Bengal, India. He has been a professor of Physics and Electronics and Communication and is presently working as the head of the Department of Physics, Amity School of Applied Sciences, Amity University Rajasthan (AUR), Jaipur, India. His current research areas of interest include cognition, communication, electromagnetic field theory, antenna and wave propagation, microwave, computational biology, and applied physics. He has been serving as an editor for various Springer book series. He was an associate editor of the Journal of Integrative Neuroscience (the Netherlands: IOS Press). He has been visiting professor to UTM and UTeM, Malaysia, and a visiting scientist to NIMS, Japan. He has established MOU with UTeM Malaysia, NIMS Japan, and University of Montreal, Canada. He has visited several countries, such as the Netherlands, Turkey, China, Czechoslovakia, Russia, Portugal, Finland, Belgium, South Africa, Japan, Singapore, Thailand, Malaysia, for various academic missions. He has organized various conferences, such as SoCPROS, SoCTA, ICOEVCI, TCCE, as a general chair and steering committee member. Eduardo Lugo received his Ph.D. degree in physics from the Autonomous University of Morelos State (UAEMor), Mexico. His research involved experimental and theoretical studies of porous silicon nanostructures. He is a pioneer in making photonic structures based on nanostructures. He was a professor in the Department of Physics at the Autonomous University of Morelos State and later at the Center for Energy Research of the National Autonomous University of Mexico (UNAM). He was a postdoctoral fellow in the Electrical and Computer Engineering Department and the Center for Future Health at the University of Rochester, Rochester, NY, USA, and later at the Photonic Systems Group in the Electrical and Computer Engineering Department at McGill University, Montreal, QC, Canada. He was an associate researcher of the same institution as well. Currently, he is an associate researcher at Faubert Laboratory, School of Optometry, Université de Montréal, Quebec, Canada, and a visiting professor at two Mexican Universities (BUAP and UV). It has contributed with over 90 peer-reviewed publications (articles, conference proceedings, and book chapters) and 36 international patents and patent applications from photonics to human performance, and the NeuroTuner product development. He is a co-founder of Sage Sentinel Company.

Artificial Intelligence and Soft Computing

Experimental Study of High-Frequency Drill String Vibrations Under Different Conditions Vladimir Bakhtin, Mikhail Deryabin, Dmitry Kasyanov, Sergey Manakov, and Denis Shakurov

Abstract In this paper, the experience of measuring high-frequency drill string vibrations during drilling is presented. Vibration registration is carried out by a borehole noise recorder made specially for this purpose. Three accelerometers are built into its design; their measuring axes form an orthogonal coordinate system. Vibrations are recorded in the frequency band from several hertz to 25 kHz. Measurements are carried out during various drilling operations, i.e., rotary drilling, directional drilling, reaming, making a connection, and circulation. The drill string includes a mud motor. Measurements were performed during both horizontal and vertical drilling. The total record spans several days and its size is approximately 250 GB. The recorded data include oscillograms obtained separately from each of the three accelerometers and the time dependences of the temperature and internal pressure in the borehole. Since the main goal is to investigate high-frequency noise during drilling, frequencies below 1 kHz are partially suppressed by a high-pass filter during registration. Some peculiarities of the noise distribution in the frequency domain under different drilling conditions are presented in this article.

Keywords Drill string vibration · High-frequency drilling noise · Noise measurement while drilling

V. Bakhtin (B) · M. Deryabin · D. Kasyanov · S. Manakov Institute of Applied Physics, Nizhny Novgorod, Russia e-mail: [email protected] V. Bakhtin · M. Deryabin Nizhny Novgorod State University, Nizhny Novgorod, Russia D. Shakurov JSC NPF “Geofizika”, Ufa, Russia © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_1


1 Introduction

A drilling facility and its environment are subjected to significant vibration loads in the process of drilling. It is necessary to take those vibrations into account when designing drilling equipment, since they largely determine equipment failure [1, 2]. Those vibrations can also have a negative influence on the inner surface of the borehole. Information about the vibration level is also important for the design of borehole acoustic communication systems [3–5]: the spectrum and the level of the noise determine the maximum data transfer rate and the usable frequency range. Vibrations caused by the operation of the drilling equipment can be used to estimate the characteristics of the rocks surrounding the borehole [6, 7] and also characterize the operation of the drilling tool itself. Thus, the investigation of drill string vibrations is a highly topical problem. The overwhelming majority of investigations of vibrations during drilling are devoted to the low-frequency part of the vibration spectrum (below 500 Hz). The higher-frequency part of the spectrum has remained practically unexplored, although it obviously contains a significant amount of information about the physical processes occurring during drilling. The purpose of this paper is to present preliminary results of measurements of high-frequency drill string vibrations.

2 Borehole Noise Recorder

An autonomous borehole noise recorder (BNR) was designed and manufactured to measure the high-frequency components of the drill string vibrations. Figure 1 shows the appearance of the BNR case. The case is a cylinder with a diameter of 127 mm and a length of 738 mm, with an internal off-center hole for pumping drilling mud. There are two compartments for the electronic equipment in the body of the case. The BNR is equipped with a standard conical thrust thread for installation into any part of the drill string. The vibrations are measured using three accelerometers whose measuring axes form an orthogonal coordinate system. The accelerometers are miniature and have a natural resonance frequency of about 90 kHz. The electrical signal from the accelerometers is amplified by a charge amplifier with a tunable

Fig. 1 The appearance of the BNR case


transmission coefficient, which is automatically adjusted to the current input signal level. The charge amplifier cutoff frequency is 500 Hz. The input path self-noise level, scaled to acceleration, is of the order of 10⁻⁴ (m·s⁻²)²/Hz. Digitization of the analog vibration signal is performed by a 16-bit ADC whose sampling frequency is sufficient for correct recording of the noise frequency band up to 25 kHz. The data is recorded in a large-volume internal storage. Temperature and pressure sensors for the drilling mud are also built into the BNR. The transfer function of the tunable charge amplifier corresponds to a Butterworth high-pass filter [8]. In the frequency domain, it can be written as

$$H(f) = \frac{K}{\left(\dfrac{f_c}{f}\right)^{2} + j\sqrt{2}\,\dfrac{f_c}{f} - 1},$$

where K is the gain ratio, f_c is the filter cutoff frequency, and j is the imaginary unit. Below the cutoff frequency, the transfer function rolls off at 12 dB per octave. Because the transmitted signal amplitude decreases smoothly, vibrations can still be investigated at frequencies below f_c; the extension of this frequency window is limited by the signal-to-noise ratio. In other words, the actual lower frequency limit lies where the recorded signal amplitude is of the same order as the self-noise. The current data allow us to analyze vibrations down to frequencies of a few hertz. In the described experiment, the BNR is integrated into the bottom part of the drill string directly above the mud motor assembly. Drilling is carried out by means of a PDC bit. The measurements cover all drilling operations possible in this case: rotary drilling, directional drilling, reaming, making a connection, and circulation. The data include records obtained during both horizontal and vertical drilling; the total size is approximately 250 GB, spanning several days. Figure 2 shows photos of the BNR before and after borehole operation.
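As a concrete illustration, the sketch below evaluates the magnitude of the second-order Butterworth high-pass response given above and compares it with an equivalent digital filter designed with SciPy. Only the 500 Hz cutoff and the 12 dB-per-octave roll-off come from the text; the unity gain and the 50 kHz sampling rate are assumptions made for this sketch.

```python
# Sketch: magnitude response of the second-order Butterworth high-pass in the BNR
# input path. The 500 Hz cutoff comes from the text; unity gain (K = 1) and the
# 50 kHz sampling rate (covering the 25 kHz band) are assumptions.
import numpy as np
from scipy import signal

K, fc, fs = 1.0, 500.0, 50_000.0

f = np.logspace(0, np.log10(fs / 2), 2000)                 # 1 Hz ... 25 kHz
H = K / ((fc / f) ** 2 + 1j * np.sqrt(2) * (fc / f) - 1)   # analytic transfer function

# Equivalent digital filter that could be used to post-process recorded samples.
sos = signal.butter(2, fc, btype="highpass", fs=fs, output="sos")
_, Hd = signal.sosfreqz(sos, worN=f, fs=fs)

db = 20 * np.log10(np.abs(H))
octave = db[np.argmin(np.abs(f - 250))] - db[np.argmin(np.abs(f - 125))]
print(f"roll-off between 125 and 250 Hz: {octave:.1f} dB per octave")   # ~12 dB
print(f"analytic vs digital |H| at 5 kHz: {np.abs(H[np.argmin(np.abs(f - 5000))]):.3f}"
      f" vs {np.abs(Hd[np.argmin(np.abs(f - 5000))]):.3f}")
```

Well above the cutoff both magnitudes are close to unity, and well below it the analytic response falls by roughly 12 dB per octave, consistent with the filter description above.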

3 Measurement Results

Figures 3 and 4 show, as an example, the drilling mud temperature and pressure measurement protocols recorded during vertical drilling. The temperature in the borehole varied in the range of 20–40 °C. The mud pressure varies from 18 to 40 MPa depending on the drilling operation. The start and the end of the recording correspond to technological checks and to running the drill string in or pulling it out. Average statistical vibration characteristics are needed to estimate the loads on the drilling equipment and the borehole, and also to assess the maximum data transfer rate of the communication channel. One such characteristic is the spectral power density, which makes it possible to easily assess the noise energy in a given frequency band. As the duration of the analyzed realization


Fig. 2 Photos of the BNR before (leftward) and after (rightward) the drilling cycle

Fig. 3 Temperature change protocol during horizontal drilling investigation. Red lines—fragment of rotary drilling (a); green lines—fragment of directional drilling (b); blue lines—fragment of making a connection (c); magenta lines—fragment of reaming (d); cyan lines—fragment of circulation (e)

increases, the errors in the calculated statistical characteristics shrink to zero. This statement is true for ergodic random signals, such as the vibrations of the bottom part of the drill string in the process of drilling. Indeed, at small time scales the vibration spectrogram is highly changeable; however, as the analysis time increases, the average power density tends to a constant. Hence, the vibrations of the bottom part of


Fig. 4 Pressure change protocol during horizontal drilling investigation. Red lines—fragment of rotary drilling (a); green lines—fragment of directional drilling (b); blue lines—fragment of making a connection (c); magenta lines—fragment of reaming (d); cyan lines—fragment of circulation (e)

the drill string were recorded continuously for several days. This period includes multiple drilling cycles, which consist of various operations. Figure 5 shows the acceleration signal at three time scales, illustrating the changeability of the lower drill string vibrations. Because of the large amount of data, the following algorithm is used for plotting the figures (a sketch of the procedure is given below). The initial record is separated into small fragments. Then a vertical segment is plotted for each fragment: the end points of the segment correspond to the minimum and maximum values within the fragment, and its position on the horizontal axis is given by the average time of the fragment. The beginning and the end of the recording are excluded from the data when plotting Fig. 5, because they correspond to technological checks or to running the drill string in and pulling it out. When the time scale is of the order of hours (upper graph, Fig. 5), the acceleration signal looks stationary: quasiperiodic variations of the amplitude are observed. The reason is that the same operations follow one another in the process of drilling; generally, it can be described as a cycle of “rock breaking” and “removal of the remaining drill cuttings”. At a time scale of the order of minutes (middle graph, Fig. 5), no stationarity can be claimed; however, the average vibration level changes smoothly at this scale. This effect is probably connected with the inertia of the equipment while switching between operation modes. At the smallest time scale (lower graph, Fig. 5), the acceleration signal has the form of a superposition of radio impulses. In all probability, each impulse appears due to an impact interaction between the drill string and the borehole wall. Since the rock is inhomogeneous and the borehole wall is rough, the moments of interaction should be random. The power spectral density was estimated using a modified Welch's method. The essence of the standard Welch's method is that the original signal recording is divided into overlapping parts [9]. The spectrum of each part is calculated by the Fourier
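A minimal sketch of the fragment-wise min/max envelope plotting described above is given here. The fragment count, sampling rate, and synthetic test record are assumptions for illustration; the paper does not state the exact decimation parameters.

```python
# Sketch of the envelope plotting: the record is split into short fragments and each
# fragment is drawn as a vertical segment from its minimum to its maximum, placed at
# the fragment's mean time. Fragment count, sampling rate, and data are assumptions.
import numpy as np
import matplotlib.pyplot as plt

def minmax_envelope(t, x, n_fragments=2000):
    """Return (mean time, min, max) per fragment for fast overview plots."""
    idx = np.array_split(np.arange(x.size), n_fragments)
    t_mid = np.array([t[i].mean() for i in idx])
    lo = np.array([x[i].min() for i in idx])
    hi = np.array([x[i].max() for i in idx])
    return t_mid, lo, hi

fs = 50_000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 60.0, 1.0 / fs)                # one minute of synthetic "record"
x = np.random.randn(t.size) * (1.0 + 0.5 * np.sin(2 * np.pi * t / 20.0))

t_mid, lo, hi = minmax_envelope(t, x)
plt.vlines(t_mid, lo, hi, linewidth=0.5)        # one vertical segment per fragment
plt.xlabel("time, s")
plt.ylabel("acceleration, arb. units")
plt.savefig("envelope_overview.png", dpi=150)
```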


Fig. 5 Acceleration signal at three different time scales


Then averaging is performed over the resulting array. It is worth noting that the squares of the amplitude spectra are summed during this operation. Thus, a relatively high signal-to-noise ratio is achieved at the cost of frequency resolution. The main purpose of the investigation is to estimate the spectral power density during various drilling operations; therefore, the first step is dividing the initial record into fragments that correspond to different operations. Since the BNR is an autonomous device, its clock cannot be synchronized with world time. For this reason, manual matching between the data and the drilling protocol was performed. However, errors inevitably take place during the flagging process due to the large amount of data processed. Figure 6 shows acceleration signals for different drilling operations; the plotting algorithm is the same as for Fig. 5. Manual labeling is used to separate the fragments into specific drilling operations. At some moments of time, the vibration level differs significantly from the average level. The reason for these errors might be either labeling mistakes or rare events. Welch's method was modified to exclude those errors that impact the final result. The data is analyzed by means of the following algorithm. First, the record is divided into 1-min fragments. Then, the spectral power density of each fragment is calculated via the standard Welch's method; a Hamming window function with a length of 8192 samples is used for the analysis. After that, the spectra obtained are grouped into arrays according to the drilling operation they belong to, and anomalous elements are excluded from each array. In other words, the expression for the spectral power density calculation is as follows:

S(\omega) = \sum_i c_i \, \frac{\sum_j \big| F_{k_j}(\omega) \big|^2}{W_n \cdot N},

where i is the number of the 1-min fragment, c_i = 0 or 1 is the fragment include/exclude flag, j is the number of the window position in a fragment with corresponding spectrum F_{k_j}, W_n is the window normalizing factor, and N is the number of parts in a fragment. The separation into 1-min fragments is justified by the acceleration signal envelope, which varies on the same time scale (see Fig. 5). A spectrum is treated as anomalous if at least one frequency component does not lie between two levels defined by quantiles; the quantiles are estimated using only the data array of the current drilling operation. At the final stage, the remaining spectra are averaged. Figure 7 shows an example of using the cutoff algorithm with different parameters. The numbers to the right of each spectrogram indicate the range (in quantiles) within which an amplitude spectrum is treated as normal. As the confidence interval shrinks, the amount of excluded data rises. The results of using the modified Welch's method with the same parameters are presented in Fig. 8. The main difference is observed in the high-frequency domain, which is the domain of principal interest.


Fig. 6 Acceleration signals during different drilling operations, from top downward: circulation, making a connection, rotary drilling, directional drilling, reaming


Fig. 7 The comparison of using cutoff algorithm with different parameters. Each graph corresponds to spectrogram with some data excluded. Spectral power density is encoded by colors for signal fragment with length of about 1 min. White color relates to excluded data. Cutoff threshold is shown in the right side of each graph

As we calculated, the optimal cutoff thresholds are 1–99% (Fig. 7). With these thresholds, less than 10% of the data is excluded from the investigation. The proposed method is similar to median filtering [10]. Figures 9, 10, 11, and 12 show the vibration spectral power density of the drill string bottom part. Figures 9 and 10 correspond to the vertical borehole, and Figs. 11 and 12 to the horizontal borehole. The type of drilling operation is encoded by line color. Figures 9, 10, 11, and 12 show that (a) the vibration level during vertical drilling is lower than during horizontal drilling, (b) the vibration character is similar in the along-borehole and crosswise directions, and (c) the frequency spectra obtained for vibrations along the borehole include more discrete components.
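A rough sketch of the fragment-wise, quantile-gated spectral averaging described above is given below. The 1-min fragments, the 8192-sample Hamming window, and the quantile gates follow the text; the SciPy-based implementation, the assumption that the input signal has already been split per drilling operation, and all names are ours.

```python
import numpy as np
from scipy.signal import welch

def modified_welch(signal, fs, frag_len_s=60, nperseg=8192,
                   q_low=0.01, q_high=0.99):
    """Average per-fragment Welch spectra, discarding anomalous fragments."""
    frag_len = int(frag_len_s * fs)
    n_frag = len(signal) // frag_len
    spectra = []
    for i in range(n_frag):
        frag = signal[i * frag_len:(i + 1) * frag_len]
        f, pxx = welch(frag, fs=fs, window='hamming', nperseg=nperseg)
        spectra.append(pxx)
    spectra = np.asarray(spectra)
    # Per-frequency gates estimated over all fragments of the current operation
    lo = np.quantile(spectra, q_low, axis=0)
    hi = np.quantile(spectra, q_high, axis=0)
    # A fragment is kept only if every frequency component lies inside the gates
    keep = np.all((spectra >= lo) & (spectra <= hi), axis=1)
    return f, spectra[keep].mean(axis=0), keep

# usage: f, psd, kept = modified_welch(acceleration_record, fs=sample_rate)
```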


Fig. 8 The comparison of using modified Welch’s method with different data cutoff parameters

Fig. 9 Spectral power density of vibrations across the drill string during various operations of vertical drilling


Fig. 10 Spectral power density of vibrations along the drill string during various operations of vertical drilling

Fig. 11 Spectral power density of vibrations across the drill string during various operations of horizontal drilling


Fig. 12 Spectral power density of vibrations along the drill string during various operations of horizontal drilling

4 Conclusion Measurements of high-frequency drill string vibrations during various drilling operations were performed in the presented experiment. A modification of Welch's method that allows excluding erroneous values from the experimental data is proposed. The vibration level during vertical drilling is lower than during horizontal drilling. The vibration character is similar in the along-borehole and crosswise directions. The frequency spectra obtained for vibrations along the borehole include more discrete components. Acknowledgements The research was carried out with the support of the Ministry of Science and Higher Education of the Russian Federation, contract № 075-11-2021-040.

References 1. Yigit AS, Christoforou AP (2006) Stick-Slip and Bit-Bounce Interaction in Oil-Well Drillstrings. ASME J Energy Resour Technol 128(4):268–274 2. Albdiry MT, Almensory MF (2016) Failure analysis of drillstring in petroleum industry: a review. Eng Fail Anal 65:74–85 3. Gao L, Gardner WR, Robbins C, Johnson DH, Memarzadeh M (2008) Limits on data communication along the drillstring using acoustic waves. SPE Reservoir Eval Eng 11(1):141–146 4. Mostaghimi H, Pagtalunan JR, Moon B, Kim S, Park SS (2022) Dynamic drill-string modeling for acoustic telemetry. Int J Mech Sci 218:107043


5. Shah V, Gardner W, Johnson DH, Sinanovic S (2004) Design considerations for a new high data rate LWD acoustic telemetry system. In: SPE Asia Pacific oil and gas conference and exhibition 6. Myers G, Goldberg D, Rector J (2002) Drillstring vibration: a proxy for identifying lithologic boundaries while drilling. In: Proceedings of the ocean drilling program, scientific results 7. Chen G, Chen M, Hong G, Lu Y, Zhou B, Gao Y (2020) A new method of lithology classification based on convolutional neural network algorithm by utilizing drilling string vibration data. Energies 13(4):888 8. Horowitz P, Hill W (2015) The art of electronics 9. Marple Jr SL, Carey WM (1989) Digital spectral analysis with applications 10. Arce GR (2005) Nonlinear signal processing: a statistical approach

Flexible Systolic Hardware Architecture for Computing a Custom Lightweight CNN in CT Images Processing for Automated COVID-19 Diagnosis Paulo Aarón Aguirre-Alvarez, Javier Diaz-Carmona, and Moisés Arredondo-Velázquez

Abstract Millions of deaths worldwide have resulted from the COVID-19 pandemic; thus, the need for techniques that diagnose the disease at an early stage has arisen. Although RT-PCR is the standard test to diagnose SARS-COV-2 infection, factors such as the long waiting time for results and its relatively low accuracy have led to the need for new alternative diagnosis methods. The Convolutional Neural Network (CNN), a powerful and efficient deep learning algorithm, can be applied as an automated diagnosis tool by processing chest Computed Tomography (CT) scanning images of patients with suspected infection. Recent works have shown that low-complexity CNNs accompanied by image preprocessing are sufficient to diagnose COVID-19 with high precision. This fact allows the use of low-end hardware, such as Field Programmable Gate Arrays (FPGAs), to compute these compact models in the microsecond range. In this paper, a flexible hardware architecture to compute a lightweight custom CNN that classifies chest CT scanning images as COVID and non-COVID is proposed. This system is capable of classifying 23 CT images per second with an accuracy of up to 91% and has remarkable adaptability to different hyperparameters of the convolutional layer, as these are computed by a single systolic array-based convolver. Keywords Convolutional neural network (CNN) · Computed tomography (CT) · Field programmable gate arrays (FPGAs) · Hardware architecture · Systolic array

P. A. Aguirre-Alvarez (B) · J. Diaz-Carmona Electronics Engineering Department, National Institute of Mexico at Celaya, Celaya, Guanajuato, Mexico e-mail: [email protected] J. Diaz-Carmona e-mail: [email protected] M. Arredondo-Velázquez Faculty of Physical and Mathematical Sciences, Meritorious Autonomous University of Puebla, Puebla, Puebla, Mexico © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_2


1 Introduction The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-COV-2) causes a contagious disease for which, according to the World Health Organization (WHO), more than 557,917,904 cases had been confirmed up to July 2022, including more than 6,358,899 deaths. The hospital capacity, as well as the available medical equipment, may be insufficient compared to the number of cases associated with an outbreak spreading rapidly through the population. Individually, an early diagnosis of the disease notably increases the possibility that the patient survives without the need for intensive and sub-intensive care, due to the effectiveness of timely treatment, in addition to controlling the spread of the disease [1]. Although the diagnosis of COVID-19 can use criteria such as clinical symptoms and epidemiological history, there are two notable approaches: laboratory diagnosis, such as tests applied to nucleic acids, antigens, and serology (antibodies), and, on the other side, analyzing medical images of the patient's chest, such as X-rays and computed tomography (CT) [2–4]. In the first approach, Reverse Transcription Polymerase Chain Reaction (RT-PCR)-based testing, where nucleic acid samples are taken from the patient's airways, is accepted as the standard [1, 3, 5, 6]. However, this type of diagnosis can be seen as a time-consuming test (4–6 h) considering the transmission rate of the disease [7]; in addition, it has a low sensitivity. As mentioned in [2], according to some conservative estimates, the detection rate of this type of test ranges between 30 and 50%, although the study carried out in [8] determines a sensitivity of up to 71%. This scenario causes many false negative diagnoses [9, 10] and requires the test to be repeated several times to confirm the diagnosis [2]. The second approach uses chest CT scanning images as an effective alternative diagnosis technique that has shown a high sensitivity, even better than RT-PCR testing [1, 11]. The CT scan provides useful information to the radiologist about the patient's lung health and allows judging whether a patient is infected by viral pneumonia such as that caused by SARS-COV-2 or other viruses. However, it is not possible to determine which virus is causing it, so conventional CT image analysis cannot directly confirm a positive case of COVID-19; it is nevertheless considered useful in diagnosing COVID-19 during its outbreak because, when the images reveal viral pneumonia, there is a high probability that it is a case of COVID-19 due to the great similarity [2]. In order to diagnose the disease, the radiologist analyzes the presence of Ground-Glass Opacities (GGO), consolidation, and pleural effusion inside the CT scans [12, 13]. The resounding increase in infections due to an outbreak complicates the correct diagnosis because of the large number of images to be analyzed, which is why the need for an automatic diagnosis system arises; therefore, researchers have become interested in the use of artificial intelligence [1, 3]. CNNs are one of the key algorithms in the image processing field, with remarkable applications such as image classification, object recognition, and segmentation. These biologically inspired mathematical models are capable of learning numerical representations of digital images [14]. The capability of CNNs to recognize complex patterns, which can surpass that of humans, can be the solution to the


problem of distinguishing between pneumonia caused by SARS-COV-2 and other respiratory pathologies based on chest CT scanning images. Currently, several previous works have used state-of-the-art CNNs to automatically diagnose COVID-19 using binary classification of CT scanning images. Popular networks such as ResNet50, SqueezeNet, AlexNet, the VGG family, and Inception v3 can achieve high accuracies of 78.29–99.5% [15]. Similarly, custom models such as the proposals in [1, 16, 17] report notable accuracies from 85.03% to 93.53%. The complexity of most of the models belonging to both approaches implies a high computational weight, so CPU and even GPU platforms are commonly used to implement them. However, [11] has shown the possibility of effectively diagnosing COVID-19 with CNNs of low computational complexity combined with an image preprocessing stage. This fact allows the use of low-end hardware, such as field-programmable gate arrays (FPGAs), to compute these compact models in the microsecond range. If these devices were implemented in CT scanners, an immediate diagnosis could be performed. Some researchers have seen potential in FPGA technology applied to CNN computing due to its reconfigurability, low power consumption, and parallel processing capability [18]; nevertheless, its hardware resources may not be enough. Faced with this problem, the single-convolver hardware architecture (SCHA) design trend proposes to execute each layer of the network sequentially. The output produced by the current layer is stored for later reading during the execution of the next layer. This design strategy focuses on developing flexible components that operate on different hyperparameters [19]. Given that 90% of the operations present in a CNN are performed in the convolutional layers [19] and that the configuration of this type of layer varies along the CNN, the mapping of a convolver capable of adapting to different configurations is especially important. This work therefore addresses the design of a Hardware Architecture (HA) capable of executing convolution layers with different configurations and that, together with other submodules that execute other typical, simpler layer types, can process a complete CNN. This is valuable in two respects: (1) as a case study, a low-complexity CNN for automated diagnosis of COVID-19 based on CT images can be processed on this HA, which is mapped on a resource-limited device; (2) if the design is flexible and scalable enough, it can be used in other applications. In addition, it can be migrated to devices with more hardware resources to process more complex CNNs built from the supported layers. The main contributions of this work applied to the preliminary diagnosis of COVID-19 are as follows: • A custom lightweight CNN model with a low number of convolutional layers capable of classifying CT scanning images from the SARS-COV-2 CT dataset into COVID-19 and non-COVID-19 categories.


• A flexible hardware architecture capable of sequentially processing convolution, ReLU, and maxpooling layers, as well as a fully connected layer for the classification task, where the convolver consists of a systolic array that is adaptable to the different hyperparameters of a convolutional layer. The dataset used is described in the next section. In the third section, a brief explanation of the general structure of CNNs is given, with special emphasis on convolutional layers. The general operation of the systolic array is briefly presented in the fourth section. The proposed CNN model and the hardware architecture are presented in the fifth and sixth sections, respectively. Finally, the obtained results and the conclusions are described in the seventh and last sections, respectively.

2 SARS-COV-2 CT Scan Dataset Since the appearance of the SARS-COV-2 virus and its spread throughout the world, humanity has faced a long health emergency that has caused millions of deaths. Since early diagnosis increases the chance that the patient will survive without the need for intensive and sub-intensive care, researchers have searched for effective diagnostic methods. Based on recent studies in [20, 21], chest CT scanning image analysis may reveal COVID-19 infection through common features such as ground-glass opacities and consolidation. The "SARS-CoV-2 CT-scan dataset", a dataset of chest CT scanning images of COVID and non-COVID cases whose images were collected from hospitals of Sao Paulo, Brazil, was published in [22]. This dataset is composed of 1252 CT scans of 60 patients (28 women and 32 men) infected by the SARS-CoV-2 virus and 1230 CT scans of 60 patients (30 women and 30 men) not infected by SARS-CoV-2 but having other pulmonary diseases. Examples of these images are shown in Fig. 1. Fig. 1 Examples of chest CT-scans of COVID-19 infected and uninfected patients


3 Convolutional Neural Networks (CNNs) The CNNs are remarkable deep learning algorithms, which make possible computer vision applications such as face recognition, autonomous vehicles, self-service supermarkets, and intelligent medical treatment [23]. These models are composed of layers in which different operations are performed; the following subsections briefly explain a few of the most used layers. Convolutional Layer. This is the most representative layer of the CNNs. Each convolutional layer receives a feature map set F \in R^{M^{(l)} \times N^{(l)} \times CH^{(l)}} to be convolved with a kernel bank K \in R^{w^{(l)} \times w^{(l)} \times CH^{(l)} \times K^{(l)}}, producing new feature maps G \in R^{M^{(l+1)} \times N^{(l+1)} \times K^{(l)}} as G = F \otimes K. The convolution operation \otimes between these Input Feature Maps (IFMs) and kernels (known as filters), which produces a set of Output Feature Maps (OFMs), is given by

G(x, y, k) = \sum_{ch=1}^{CH^{(l)}} \sum_{j=1}^{w^{(l)}} \sum_{i=1}^{w^{(l)}} F(x + i - 1,\, y + j - 1,\, ch)\, K(i, j, ch, k),   (1)

where 1 ≤ x ≤ N^{(l)}, 1 ≤ y ≤ M^{(l)}, and 1 ≤ k ≤ K. The hyperparameters associated with each layer are as follows: the IFM rows (N) and columns (M), the number of filters (K), and the filter size (w). The index (l) defines the lth layer to which every hyperparameter belongs. The dimensions of the elements involved in any convolutional layer are illustrated in Fig. 2. Other related parameters are the Stride (S) and Padding (P). The stride determines how far the kernel shifts between applications; with S > 1 the kernel is not applied at every position, hence a dimension reduction of the OFMs is achieved. On the other hand, the kernel operation on the IFM edges is controlled by the P parameter, which lets the OFMs preserve the same dimensions as the IFMs after being processed by one layer. Depending on the CNN topology used, a convolutional layer can be associated with a pooling stage [24]. ReLU. This layer applies the rectified linear unit (ReLU) function to every input: every output value becomes 0 if its input value is negative; otherwise, the output equals the input [25]. Maxpooling Layer. In this type of layer, a nonlinear down-sampling operation is performed on the input. The input is partitioned into sub-regions, and the maximum value of each one is an output sample [26]. Fully Connected Layer (FC). This layer is commonly associated with the classification stage of a CNN. The neurons in this type of layer have full connections to all outputs from the previous layer; consequently, the number of learnable parameters is greater than in the convolutional layers. Nevertheless, the simplicity of their sequence of multiply-accumulate (MAC) operations requires less computational power [26].
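As an illustration of Eq. (1) and of the S and P parameters, a direct (unoptimized) evaluation of a convolutional layer could be sketched in NumPy as follows; the function and variable names are ours.

```python
import numpy as np

def conv_layer(F, K, stride=1, padding=0):
    """Direct evaluation of Eq. (1). F is M x N x CH, K is w x w x CH x Kf."""
    M, N, CH = F.shape
    w, _, _, Kf = K.shape
    Fp = np.pad(F, ((padding, padding), (padding, padding), (0, 0)))
    Mo = (M + 2 * padding - w) // stride + 1
    No = (N + 2 * padding - w) // stride + 1
    G = np.zeros((Mo, No, Kf))
    for y in range(Mo):
        for x in range(No):
            patch = Fp[y * stride:y * stride + w, x * stride:x * stride + w, :]
            for k in range(Kf):
                # Sum over the window and all input channels for filter k
                G[y, x, k] = np.sum(patch * K[:, :, :, k])
    return G
```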


Fig. 2 First convolution layer dimensions involved in a CNN

4 Systolic Array A Systolic Array (SA) is usually a matrix of interconnected Processing Elements (PEs), each composed of one MAC unit and internal registers. The SA operand inputs are supplied at the SA edges and are propagated through the internal PE registers after each MAC operation. The SA image processing procedure is illustrated in Fig. 3. The data flow, known as Time Multiplexed Stationary Outputs (TMSO), defines that the filter weights are shifted from top to bottom and the masks from left to right, so that the partial results are stored in each PE until the computation is completed and read from the accumulator to begin a new computation. For a given SA column, adjacent output image pixels of the same channel are generated by the PEs of each SA row. Image pixels of different OFMs are computed in each SA column [27].
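A functional (not cycle-accurate) sketch of the TMSO idea is given below: each PE accumulates one output pixel while flattened masks and filters stream past it. The skewed data injection and register propagation of a real array are abstracted away, and the names are illustrative.

```python
import numpy as np

def systolic_tmso(masks, filters):
    """Functional model of an output-stationary (TMSO) systolic pass.

    masks:   (R, w*w) flattened convolution windows, one per SA row
    filters: (C, w*w) flattened kernels, one per SA column
    Returns the (R, C) accumulator array: one output pixel per PE,
    row = window, column = filter.
    """
    R, L = masks.shape
    C, _ = filters.shape
    acc = np.zeros((R, C))
    # One mask/weight pair is multiplied per PE per step; partial sums stay put.
    for step in range(L):
        acc += np.outer(masks[:, step], filters[:, step])
    return acc
```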

5 Proposed CNN for COVID-19 or Non-COVID-19 Classification The proposed CNN model is presented in this section. Since the only information needed from the images is the patient's lungs, the rest of the image is considered irrelevant. To obtain considerable accuracy using a CNN with a reduced number of layers, irrelevant information was removed from the images using a preprocessing technique. First, the images were resized from an approximate size of 202 × 256 × 2 to 100 × 100 × 2. Then, a first binarized image was obtained through the adaptive thresholding technique. Later, a NOT operation was performed on this binary image in order to keep the pixels of the lung area as 1 and the pixels of unnecessary information as 0. However, areas where important information may exist are still omitted, so a morphological closing operation is performed using a squared structural element of size 3 × 3, obtaining a final binary mask. Multiplying each


Fig. 3 SA image processing procedure

pixel of the original resized image by its corresponding 1/0 value from the binary mask, a filtered image is obtained. This process is illustrated in Fig. 4. The proposed CNN model is depicted in Table 1, which has 45,442 learnable elements and only 2 convolutional layers. The input image pixel values are rescaled to a range from 0 to 1 in the image input layer.

Fig. 4 Dataset image preprocessing procedure
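A possible OpenCV sketch of this preprocessing chain is shown below, assuming an 8-bit grayscale input; the adaptive-threshold block size and offset are not specified in the text and are therefore guesses.

```python
import cv2
import numpy as np

def preprocess_ct(img):
    """Lung-masking preprocessing sketch (threshold block size and offset are guesses)."""
    img = cv2.resize(img, (100, 100))                        # resize to 100 x 100
    binary = cv2.adaptiveThreshold(img, 255,
                                   cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)  # adaptive thresholding
    mask = cv2.bitwise_not(binary)                            # NOT: lung area becomes 1s
    kernel = np.ones((3, 3), np.uint8)                        # 3 x 3 structural element
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # morphological closing
    return img * (mask // 255)                                # apply the binary mask
```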


Table 1 Proposed CNN model (Weights and Bias are the learnable parameters; S and P denote stride and padding)

| Layer type | Activations | Weights | Bias | S | P |
| Image input | 100 × 100 × 1 | – | – | – | – |
| Convolution | 100 × 100 × 16 | 7 × 7 × 1 × 16 | 1 × 1 × 16 | 1 | 3 |
| ReLU | 100 × 100 × 16 | – | – | – | – |
| Max pooling | 50 × 50 × 16 | – | – | 2 | 0 |
| Convolution | 50 × 50 × 32 | 3 × 3 × 16 × 32 | 1 × 1 × 32 | 1 | 1 |
| ReLU | 50 × 50 × 32 | – | – | – | – |
| Max pooling | 25 × 25 × 32 | – | – | 2 | 0 |
| Fully connected | 1 × 1 × 2 | 2 × 20,000 | 2 × 1 | – | – |
| Softmax | 1 × 1 × 2 | – | – | – | – |
| Class output | 1 × 1 × 2 | – | – | – | – |
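For reference, the model of Table 1 can be expressed in Keras as sketched below (the original network was built and trained in MATLAB; this re-expression is ours). The layer configuration reproduces the 45,442 learnable parameters mentioned above, and the training settings follow those quoted in Sect. 7.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Mirrors Table 1: 800 + 4640 + 40,002 = 45,442 learnable parameters.
model = tf.keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(100, 100, 1)),  # rescale pixels to [0, 1]
    layers.Conv2D(16, 7, strides=1, padding='same'),          # 7x7x1x16 weights, P = 3
    layers.ReLU(),
    layers.MaxPooling2D(pool_size=2, strides=2),              # 50 x 50 x 16
    layers.Conv2D(32, 3, strides=1, padding='same'),          # 3x3x16x32 weights, P = 1
    layers.ReLU(),
    layers.MaxPooling2D(pool_size=2, strides=2),              # 25 x 25 x 32
    layers.Flatten(),                                         # 20,000 features
    layers.Dense(2, activation='softmax'),                    # COVID / non-COVID
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),       # Adam, initial LR 0.001
              loss='categorical_crossentropy', metrics=['accuracy'])
```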

6 Proposed Hardware Architecture The proposed hardware architecture (HA) is described in this section. Its outstanding feature is its adaptability to the following hyperparameters of a convolutional layer: IFM size (height, width, and channels), number of filters, stride, padding, and, one of the most important, the filter size. This proposal belongs to the single-convolver HA design trend, which is characterized by sequentially computing, layer by layer, the target CNN model composed of N_L layers. This HA supports CNN models built only by sequences of convolution, ReLU, and maxpooling layers, as well as an FC layer for the classification task. The following subsections provide a general explanation of the processing of a target CNN and a detailed description of some key components.

6.1 General Description of the Processing Sequence Before running the CNN, the following data is pre-stored in separate on-chip ROMs: convolutional layer weights, FC layer weights (one ROM per class), biases belonging to the convolutional and FC layers (stored together), and the configuration data corresponding to each layer of the CNN. The proposed hardware architecture is depicted as a block diagram in Fig. 5, where each hardware module and data flow are represented as a block and an arrow, respectively. The thin black arrows represent a single n-bit datum, the thick white arrows correspond to nine n-bit data, and the thick blue arrows represent a 2D array of n-bit data. Note that the previously mentioned ROMs are not included in this diagram. Each module has Local Control Units (LCUs) of different hierarchies, which control several subprocesses involved in the processing of each layer of the target CNN. The


Fig. 5 Proposed hardware general block diagram

adaptability of this HA to different hyperparameters relies on the flexibility of its components and a configurable control logic. During processing, the configuration module provides the current layer's hyperparameters to most architecture modules through a vector of bits called "configuration data". With this encoded data, each module configures its control logic. On the other hand, the Main Control Unit (MCU) activates the LCUs of the modules involved in the processing of a specific layer. After processing a given layer, the MCU resets the recently used modules and initializes the modules involved in the next layer. In parallel, the configuration module provides the configuration data for the next layer. The borders of these two modules are drawn with dashes and dots to represent that all the remaining modules are connected to them. Before the CNN execution, an M × N input image is loaded into the input RAM from outside through an interface. As mentioned above, this HA processes sequences of basic layers. The processing of each layer type is briefly explained below. Convolutional layer. The modules involved in this type of layer are (a) the programmable line buffer (PLB), (b) the flexible convolver, (c) the bias module, (d) the FIFO array, and (e) the CNV RAM block. The computation of a convolutional layer is done one input channel at a time. The term "convolution iteration" refers to the process in which an entire IFM belonging to an input channel is processed with K 2D filters of the same channel. During a convolution iteration, the PLB sequentially generates 2D convolution masks from an IFM and sends them to the flexible convolver. The PLB (detailed in Sect. 6.2) supports different IFM sizes, strides, and paddings. Besides, it is capable of generating convolution masks of variable size, as exemplified in Fig. 5, where 3 × 3, 5 × 5, and 7 × 7 masks are generated, represented in blue, green, and purple, respectively. During the processing of a convolutional layer, only if it is the first one, the IFM data is read from the input RAM; otherwise, it is read from the MXP RAM block. The correct data enters the PLB through a multiplexer placed before it. All the mathematical operations involved in a convolution iteration are performed by the flexible convolver. This module uses intermediate data buffers to temporarily store the generated masks and pre-stored filters, so that they are convolved in a flexible systolic array, as explained in Sect. 6.2. The most notable feature of the


flexible systolic array is that it performs the convolution operation supporting different filter sizes. During a convolution iteration, partial pixel results from multiple OFMs are produced simultaneously and sent in parallel to the bias module. If the partial results belong to the last convolution iteration of the layer, the output of this module is the bias value added to the partial results; otherwise, its output is the same input data. This operation is performed immediately, that is, the partial results are available, added or not to the bias, as they arrive. Output data from the bias module is captured and organized by the FIFO array to be stored in the CNV RAM block. The FIFO array consists of three FIFO blocks; each FIFO block has three FIFOs and an address generator. This is done to temporarily store the results produced simultaneously during a convolution iteration, which can be up to nine every four clock cycles. The FIFO array writes up to three partial results in parallel to the CNV RAM block. During the current convolution iteration, every written partial result is the sum of the previously stored value and the current partial result obtained from the bias module. The CNV RAM block internally contains three dual-port RAMs. The LCUs inside the FIFO array allow writing and reading addresses to be generated in such a way that the current partial result (obtained from the FIFO array) and the previous result (RAM output) are available at the same time to be added, and the updated result can be written. The ReLU layer computation is performed by three comparators at the output of the RAM block. Then, when the maxpooling module requests data from the CNV RAM block to put into the PLB, only positive numbers are entered. Maxpooling Layer. During the computation of the maxpooling layer, the PLB reconfigures itself to simultaneously produce three 2 × 2 masks (shown in Fig. 5 highlighted in red). This strategy takes advantage of the small mask size used in maximum pooling compared to the other sizes supported by the PLB, as well as the fact that the input data comes from three separate RAMs. The maxpooling module generates the addresses for reading data from the CNV RAM block. These masks are read by the maxpooling module, and the maximum value of each one is determined with an internal set of comparators. Up to three results are written simultaneously to the MXP RAM block. After processing this layer, depending on the number of layers in the network, a convolution layer or an FC layer will be processed. FC Layer. A second LCU within the convolver generates the necessary addresses to obtain data from the MXP RAM block and weights from the Class ROMs, respectively. Then, using multiplexers, some PEs belonging to the SA multiply-accumulate these operands sequentially. At the end of this process, the bias is added to the results by the bias module to obtain the total values of each class. Finally, the classifier module sequentially compares the results and determines the class of the input image.
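A behavioral sketch of this channel-at-a-time scheduling is given below: partial OFMs are accumulated across convolution iterations, and the bias is added only after the last one. The Python model and its names are ours; conv2d_single stands for whatever single-channel convolver is available (in the HA, the systolic array).

```python
import numpy as np

def conv_layer_by_iterations(ifm, kernels, bias, conv2d_single):
    """Channel-at-a-time accumulation, one "convolution iteration" per channel.

    ifm:     M x N x CH input feature maps
    kernels: w x w x CH x K filter bank
    bias:    length-K vector
    conv2d_single(plane, k2d): single-channel 2-D convolution
    """
    CH, K = ifm.shape[2], kernels.shape[3]
    partial = None
    for ch in range(CH):
        ofm_ch = np.stack([conv2d_single(ifm[:, :, ch], kernels[:, :, ch, k])
                           for k in range(K)], axis=-1)
        # Accumulation plays the role of the CNV RAM read-add-write cycle
        partial = ofm_ch if partial is None else partial + ofm_ch
    return partial + bias          # bias added only after the last iteration
```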


Fig. 6 Programmable line buffer strategy for generating 2 × 2, 3 × 3, 5 × 5, or 7 × 7 masks

6.2 Detailed Description of the Key Architecture Modules Configuration module. This component consists of an on-chip ROM and an address generator controlled by the MCU. The configuration data has a length of 46 bits. The hyperparameters, as well as their lengths in bits, are as follows: number of filters (8 bits), padding (4 bits), stride (4 bits), mask size (4 bits), IFM height (8 bits), IFM width (8 bits), IFM channels (8 bits), and layer type (2 bits). Programmable Line Buffer. The PLB strategy to generate convolution masks of different sizes, as well as a set of 2 × 2 maxpooling masks, is presented in Fig. 6. These masks are formed by the outputs of the registers (D-type flip-flops) within the dotted boxes in purple (7 × 7), green (5 × 5), blue (3 × 3), and red (2 × 2); moreover, this architecture is scalable to any mask/filter size. During convolution layer computation, IFM data coming from the Input RAM or the MXP RAM block enter the PLB, and the selection inputs of the red multiplexers are set to 0. IFM data can access the R1,1 register through the black multiplexer depending on the value of the Padd_val signal. Then, to generate a 7 × 7 mask, the data is transferred at each clock cycle through registers and Programmable Shift Registers (PSRs) following the direction indicated by the black arrows. However, to generate smaller masks, the blue multiplexers are configured (based on the configuration data) to skip the unnecessary registers. After a certain latency, a convolution mask is completed, and it can then be transferred in parallel to a set of registers placed before the systolic array, named the mask buffer (MaB). In general, when a new convolution layer is processed, not only the w parameter changes but also the IFM size, whose value in deeper layers is smaller than in earlier ones. The flexibility feature is achieved in this aspect through the PSRs, which are composed of a set of shift registers (whose lengths are powers of two) that can be connected in cascade


or skipped according to the IFM width. The shown PSR has a maximum length of 31 registers, but this system is scalable to the desired length. A component named the synchronizer is included in the PLB, which notifies other modules when the generated mask(s) is/are valid to be processed. This is done according to the values of P and S. When P > 0, the size of the IFM increases; then, depending on the position of the input pixel (column and row) within the augmented IFM, it may not be read from memory, and a zero value is transferred to the PLB instead. During maxpooling layer computation, the PLB generates three 2 × 2 masks simultaneously. This is because the red multiplexers are configured to pass data from each RAM of the CNV RAM block; also, some blue multiplexers pass data from some registers of the second register column following the red arrows. Systolic Array. The flexible SA diagram is shown in Fig. 7. This component is mainly used to perform all the mathematical operations of the convolutional layers. However, some of the PEs in the first row of the matrix are also used in the computation of an FC layer. This is because a single convolution between a flattened filter and a flattened mask is quite similar to the sequence of MAC operations performed between each element of the final flattened feature maps and the weights belonging to a class. Besides, the reuse of multipliers within the PEs constitutes a strategy to compute an FC layer without consuming more of these commonly scarce hardware resources within an FPGA. During the computation of the convolution layer, the convolution masks are received through shift registers (mask buffers) as the PLB generates them, and they are transferred in parallel to each register; hence, the data transfer is made immediately. Previously, the convolution weights are loaded in sets of shift registers named Weight Buffers (WeBs) to begin their processing once valid convolution windows are available. Before the weights are loaded, and according to the defined w value, the WeB length is adjusted by multiplexers in order to achieve weight loading in the positions nearest to the PEs. The TMSO data flow is performed by the SA, and its LCU allows the simultaneous reading of the PE results by the FIFO array for each column

Fig. 7 Flexible systolic array capable of convolving filters with sizes of 3 × 3 and 5 × 5, as well as of computing a fully connected layer


when available; the corresponding bias values can be added to these results by the bias module placed before the FIFO array. The number of PE rows depends on the square of the minimum w value to be processed, due to the use of a feedback path for transferring the weights back to the PE input so they can be reused with the subsequent convolution masks. When using a greater w value, after a weight has been transferred to all the PEs in one column, it must go back to the position number given by the subtraction of the square of w and the number of PE rows. For example, when w = 5, the weight value must go back to the 16th position of the WeB register nearest to the first row of PEs. The function of the vertical shift is to sequentially provide the filter coefficients for each one of the loaded masks; in this way, the window parallelism, given by the number of mask buffers, is exploited. The number of PE columns allows filter parallelism to be used: the greater the number of columns, the more filters are processed in parallel. During the computation of the FC layer, a secondary LCU configures the red multiplexers to simultaneously use the same flattened feature map data from the MXP RAM block in MAC operations. Similarly, the purple multiplexers are configured to pass the weights belonging to each class directly from each Class ROM. This strategy is intended to speed up the FC layer computation because there are few opportunities to apply operation parallelism, since the flattened feature maps are the only reusable data.
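As an illustration of the 46-bit configuration word described at the beginning of this subsection, the fields could be packed as sketched below; the field widths follow the text, while the field order and the layer-type encoding are our assumptions.

```python
FIELDS = [            # (name, width in bits); widths from Sect. 6.2, order assumed
    ('layer_type', 2), ('ifm_channels', 8), ('ifm_width', 8), ('ifm_height', 8),
    ('mask_size', 4), ('stride', 4), ('padding', 4), ('num_filters', 8),
]

def pack_config(**values):
    """Pack the layer hyperparameters into a 46-bit configuration word."""
    word, shift = 0, 0
    for name, width in FIELDS:
        value = values[name]
        assert 0 <= value < (1 << width), f'{name} does not fit in {width} bits'
        word |= value << shift
        shift += width
    return word           # shift equals 46 here

def unpack_config(word):
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out

# e.g. the first convolutional layer of Table 1 (layer_type code is hypothetical):
cfg = pack_config(layer_type=0, ifm_channels=1, ifm_width=100, ifm_height=100,
                  mask_size=7, stride=1, padding=3, num_filters=16)
```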

7 Experimentation and Results The preprocessed dataset was divided into training, validation, and testing sets in percentages of 70, 20, and 10, respectively; then the custom CNN model was trained in MATLAB. Although the total dataset is relatively small, this scenario is common in medical image datasets and is closely related to privacy issues. The training parameters included a maximum of 200 epochs, with a shuffle after each epoch, a mini-batch size of 300, a piecewise learning rate schedule, a learning rate drop period of 50, a learning rate drop factor of 0.9, the Adam training algorithm, and an initial learning rate of 0.001. The process took 24 min and 32 s using a GeForce 940MX NVIDIA GPU. The final accuracy reported by the MATLAB software at the end of the training using the validation set was 93.16%. Additionally, using MATLAB, the model classified every image belonging to the test set, which figures as unknown images for the CNN. These inferences were compared to the test set labels in order to quantify the true positive, true negative, false positive, and false negative predictions. From these data, the accuracy, precision, recall, and F1 score were calculated, obtaining values of 0.9113, 0.8815, 0.952, and 0.9154, respectively. The obtained accuracy is compared to other reported custom CNN models in Table 2. The proposed HA was implemented in VHDL using Quartus Prime Lite 20.1. The CNN results were obtained through ModelSIM simulation using as reference the FPGA Cyclone V SE 5CSXFC6D6F31C6N available in the ALTERA DE10-Standard board. The employed methodology is not limited to a particular device; the


Table 2 Comparison with previous custom CNN models

| | Polsinelli [1] | Ghani [11] | Hafiz [16] | Wang [17] | This work |
| Conv. layers | 3 | 1 | 10 | 6 | 2 |
| Accuracy (%) | 85.03 | 95.3 | 93.56 | 89.5 | 91.13 |

design can be migrated to any FPGA that meets the hardware resource requirements of the desired application. The number of n-bit registers (REG_ARCH) required in the most representative components of the proposed architecture (PLB, SA, WeBs, MaBs, bias module, and classifier), as well as the number of multipliers (MUL) employed in the SA architecture (MUL_SA), are given by Eqs. 2 and 3, respectively. The registers consumed by the LCUs and address generators are not included.

REG_{ARCH} = w_{MAX}^2 + w_{MAX} \, \frac{w_{MAX}^2 - 1}{2} \cdot \frac{2^{a+1} - 1 + w_{MAX}}{2} + 2 w_{MIN}^2 w_{MAX}^2 + 3 w_{MIN}^4 + w_{MIN}^2 + 2,   (2)

MUL_{SA} = w_{MIN}^4,   (3)

where w_MIN and w_MAX are the minimum and maximum filter sizes, respectively. The length of the cascaded flip-flops with the greatest number of positions in the PSR, a, is defined as

a = \big\lceil \log_2 (N + 2P - w_{MAX}) \big\rceil.   (4)

Based on the proposed CNN configuration, the REG_ARCH value is equal to 7575. On the other hand, the obtained processing time (PT) is one of the most interesting results, since this value represents the potential of the HA for future applications in real-time image processing. The ideal maximum processing time of a convolutional layer (IMPT_C) is a function of the number of clock cycles required for the PLB mask generation, the pixel and weight transfers to the MaBs and WeBs, respectively, as well as the MAC operations and the shift operations along the data processing. The IMPT_C value is given by

IMPT_C = \frac{1}{f_{OP}} \Big( 3 + CH \Big( 3 + \Big\lceil \frac{K}{w_{MIN}^2} \Big\rceil \big( 2 + K^2 + w^2 \big) + 5 + 2(M + 2P)(N + 2P) + 2 \big( G_M w^2 + R + w^2 - 1 + w_{MIN}^2 \big) \Big) \Big),   (5)

where f_OP is the working frequency, G_M is the number of groups of w_MIN^2 masks to be processed, truncated to the lower integer, and R is the remaining number of windows not being part of G_M. On the other hand, the maximum processing time of a maxpooling layer (MPT_M) and the maximum processing time of an FC layer (MPT_F) are calculated in Eqs. 6 and 7.

MPT_M = \frac{1}{f_{OP}} \, \frac{CH}{w_{MIN}^2} \big( 2 + 3 (3 + 2 M N) \big),   (6)

MPT_F = \frac{1}{f_{OP}} \big( 2 + CH (4 + M N) + C \big),   (7)

where M and N in Eq. 6 refer to the width and height of the IFMs before the flatten operation, while C is the number of classes. In this work, based on [28], which counts the multiplications and sums as separate operations, the numbers of mega-operations performed in a convolutional layer (MOp_C) and in an FC layer (MOp_F) are given by Eqs. 8 and 9, respectively. The number of operations involved in the processing of a CNN gives an idea of the computational weight that the device supports. In addition, the architecture's throughput (Thrpt) is an indicator of how efficiently the workload is computed; it is defined as the ratio of the operations to the processing time.

MOp_C = \frac{(CH)(K)(2 w^2 + 1) \left( \frac{M + 2P - w}{S} + 1 \right) \left( \frac{N + 2P - w}{S} + 1 \right)}{1 \times 10^6},   (8)

MOp_F = \frac{2 (M)(N)(CH)(K)}{1 \times 10^6}.   (9)
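Equations 8 and 9 can be checked directly against the proposed CNN configuration; the short sketch below (with names of our choosing) reproduces the MOp column of Table 3.

```python
def mop_conv(M, N, CH, K, w, S=1, P=0):
    """Mega-operations of a convolutional layer, Eq. (8)."""
    out_rows = (M + 2 * P - w) // S + 1
    out_cols = (N + 2 * P - w) // S + 1
    return CH * K * (2 * w ** 2 + 1) * out_rows * out_cols / 1e6

def mop_fc(M, N, CH, K):
    """Mega-operations of a fully connected layer, Eq. (9)."""
    return 2 * M * N * CH * K / 1e6

# Layers of Table 1 (values match the MOp column of Table 3):
conv1 = mop_conv(M=100, N=100, CH=1, K=16, w=7, S=1, P=3)   # 15.84 MOp
conv2 = mop_conv(M=50, N=50, CH=16, K=32, w=3, S=1, P=1)    # 24.32 MOp
fc = mop_fc(M=25, N=25, CH=32, K=2)                         # 0.08 MOp
thrpt_conv1 = conv1 / 5.18119   # MOp per ms equals GOp/s, about 3.06 GOp/s
```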

The IMPT, the processing time obtained from simulation (PTS) at 50 MHz, the MOp, and the throughput of each layer are presented in Table 3. Based on the total PTS, the proposed architecture is capable of processing 23 frames per second. In order to provide a better perspective of the results, the proposal is compared with other works in Table 4. Since the presented CNN is a custom model, the computational weight of each work is expressed as the sum of the operations performed in each convolutional layer; this is computed through Eq. 8 according to the configuration of each CNN, while the rest of the evaluation criteria are reported by the authors. ws represents the supported filter size. A good estimation of the expected processing time for the proposed CNN is provided by the defined equations. Some differences between the IMPT_C and PTS can be observed; this is due to the parallel processing for transferring results to memory and for mask and result generation.

Table 3 Performance in each layer of the proposed CNN model

| Layer | IMPT (µs) | PTS (µs) | MOp | Throughput (GOp/s) |
| Convolution 1 | 5275.28 | 5181.19 | 15.84 | 3.0572 |
| Maxpooling 1 | 2400.44 | 2400.44 | – | – |
| Convolution 2 | 33,197.82 | 33,220.86 | 24.32 | 0.732 |
| Maxpooling 2 | 1200.88 | 1200.88 | – | – |
| FC | 402.64 | 402.64 | 0.08 | 0.1986 |
| Total | 42,477.06 | 42,406.01 | 40.24 | – |


Table 4 Comparison between the proposed HA and reported works

| Work | CNN/Dataset | MUL | ws | Freq (MHz) | Conv (MOp) | Thrpt (GOp/s) | PT (ms) |
| Ghani [11] | Custom for SARS-COV-2 | 9 | 3 | 100 | 0.0608 | – | 0.046 |
| Shan [29] | Custom for MNIST | 571 | 5 | 50 | 0.4112 | 1.196 | – |
| Wu [30] | VGG-16 for ImageNet | 220 | 3 | 200 | 32.398 × 10³ | 105.6 | 442.7 |
| This work | Custom for SARS-COV-2 | 81 | 3, 5, 7 | 50 | 40.16 | 3.0572 | 42.47 |

The processing for a w value close or equal to w_MIN implies a faster result generation; hence, the result transferring time is the dominant one. The accuracy of the presented custom CNN model exceeds the 71% that can be expected from an RT-PCR test, in addition to those achieved by the works [1, 7], despite having fewer convolutional layers. Due to the low number of convolutional layers, the number of learnable parameters is considerably lower than in state-of-the-art (S-o-A) CNNs [15], favoring its implementation in hardware with scarce on-chip memory. The accuracy and simplicity obtained by Ghani in [11], which are closely related to the preprocessing technique used, are higher than those of the proposed CNN. However, the most notable contribution of the present paper relies on an HA capable of supporting different hyperparameters and, consequently, different CNN models (built from convolution, maxpooling, ReLU, and FC layers). Thus, the proposed CNN represents a case study that sustains the use of this HA in different applications. Since the CNN proposed by Ghani [11] involves very few operations and its HA is very simple, its PT is more attributable to the low complexity of its CNN than to its HA. On the other hand, the required number of multipliers in Shan's architecture is considerably greater than that of this work, but their throughput is lower. Lastly, Wu [30] proposes an HA with notable results, but, like the other works, only one filter size is supported, which contributes to the reported high performance. The results obtained by the proposed HA could be improved if the HA were scaled to a device with more resources. On the other hand, the proposed HA allows exploring different CNNs for different applications without being restricted to S-o-A CNNs; as this case study shows, these are not the only option.

8 Conclusions The implementation of lightweight CNNs in low-end hardware for the classification of CT images represents an immediate, high-accuracy method for the diagnosis of COVID-19. The proposed hardware architecture adopts an SCHA approach in which, considering the limited resources of FPGAs, it sequentially processes the layers of the


CNN through the flexibility of its components. This hardware proposal provides great adaptability to different hyperparameters of the convolution layers, which allows exploring different configurations of this layer to achieve better feature extraction. In addition, its design is scalable. The accuracy of the proposed CNN model is up to 91% and the HA processes up to 23 CT images per second.

References 1. Polsinelli M, Cinque L, Placidi G (2020) A light CNN for detecting COVID-19 from CT scans of the chest. Pattern Recogn Lett 140:95–100. https://doi.org/10.1016/j.patrec.2020.10.001 2. Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, ... Xu B (2021) A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). Eur Radiol 31(8):6096–6104 (2021). https://doi.org/10.1007/s00330-021-07715-1 3. Ozsahin I, Sekeroglu B, Musa MS, Mustapha MT, Uzun Ozsahin D (2020) Review on diagnosis of COVID-19 from chest CT images using artificial intelligence. Comput Math Methods Med. https://doi.org/10.1155/2020/9756518 4. Shi F, Wang J, Shi J, Wu Z, Wang Q, Tang Z, ... Shen D (2020) Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. In: IEEE reviews in biomedical engineering, vol 14, pp 4–15. https://doi.org/10.1109/RBME.2020.298 7975 5. Castiglione A, Vijayakumar P, Nappi M, Sadiq S, Umer M (2021) COVID-19: automatic detection of the novel coronavirus disease from CT images using an optimized convolutional neural network. IEEE Trans Industr Inf 17(9):6480–6488. https://doi.org/10.1109/TII.2021. 3057524 6. Yu F, Du L, Ojcius DM, Pan C, Jiang S (2020) Measures for diagnosing and treating infections by a novel coronavirus responsible for a pneumonia outbreak originating in Wuhan, China. Microbes Infection 22(2):74–79. https://doi.org/10.1016/j.micinf.2020.01.003 7. Zhao J, Zhang Y, He X, Xie P (2020) Covid-ct-dataset: act scan dataset about covid-19, 490. arXiv:2003.13865. https://doi.org/10.48550/arXiv.2003.13865 8. Fang Y, Zhang H, Xie J, Lin M, Ying L, Pang P, Ji W (2020) Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. DOI: https://doi.org/10.1148%2Fradiol.202 0200432 9. Yan T, Wong PK, Ren H, Wang H, Wang J, Li Y (2020) Automatic distinction between COVID19 and common pneumonia using multi-scale convolutional neural network on chest CT scans. Chaos Solitons Fract 140:110153. https://doi.org/10.1016/j.chaos.2020.110153 10. Mishra NK, Singh P, Joshi SD (2021) Automated detection of COVID-19 from CT scan using convolutional neural network. Biocybernetics Biomed Eng 41(2):572–588. https://doi.org/10. 1016/j.bbe.2021.04.006 11. Ghani A, Aina A, See CH, Yu H, Keates S (2022) Accelerated diagnosis of novel coronavirus (COVID-19)—computer vision with convolutional neural networks (CNNs). Electronics 11(7):1148. https://doi.org/10.3390/electronics11071148 12. Singh VK, Kolekar MH (2022) Deep learning empowered COVID-19 diagnosis using chest CT scan images for collaborative edge-cloud computing platform. Multimed Tools Appl 81(1):3– 30. https://doi.org/10.1007/s11042-021-11158-7 13. Wang D, Hu B, Hu C, Zhu F, Liu X, Zhang J, ... Peng Z (2020) Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in Wuhan, China. Jama 323(11):1061–1069. DOI: https://doi.org/10.1001/jama.2020.1585 14. LeCun Y, Bengio Y, Hinton G (2015) Deep Learn Nat 521(7553), 436–444. https://doi.org/10. 1038/nature14539


15. Fouladi S, Ebadi MJ, Safaei AA, Bajuri MY, Ahmadian A (2021) Efficient deep neural networks for classification of COVID-19 based on CT images: virtualization via software defined radio. Comput Commun 176:234–248. https://doi.org/10.1016/j.comcom.2021.06.011. Epub 2021 Jun 16. PMID: 34149118; PMCID: PMC8205564 16. Hafiz KN, Haque KF (2022) Convolutional neural network (CNN) in COVID-19 detection: a case study with chest CT scan images. https://doi.org/10.36227/techrxiv.19646535.v2 17. Wang S, Kang B, Ma J et al (2021) A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19). Eur Radiol 31:6096–6104. DOI: https://doi.org/10.1007/ s00330-021-07715-1 18. Tao Y, Ma R, Shyu ML, Chen SC (2020) Challenges in energy-efficient deep neural network training with fpga. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 400–401 19. Arredondo-Velazquez M, Diaz-Carmona J, Barranco-Gutierrez AI, Torres- C (2020) Review of prominent strategies for mapping CNNs onto embedded systems. IEEE Lat Am Trans 18(05):971–982. https://doi.org/10.1109/TLA.2020.9082927 20. Ng MY, Lee EY, Yang J, Yang F, Li X, Wang H, ... Kuo MD (2020) Imaging profile of the COVID-19 infection: radiologic findings and literature review. Radiol: Cardiothorac Imaging 2(1) 21. Kong W, Agarwal PP (2020) Chest imaging appearance of COVID-19 infection. Radiol: Cardiothorac Imaging 2(1). https://doi.org/10.1148/ryct.2020200028 22. Soares E, Angelov P, Biaso S, Froes MH, Abe DK (2020) SARS-CoV-2 CT-scan dataset: a large dataset of real patients CT scans for SARS-CoV-2 identification. MedRxiv. https://doi. org/10.1101/2020.04.24.20078584 23. Li Z, Liu F, Yang W, Peng S, Zhou J (2021) A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans Neural Netw Learn Syst. https://doi.org/10. 1109/TNNLS.2021.3084827 24. Arredondo-Velázquez M, Diaz-Carmona J, Torres-Huitzil C, Padilla-Medina A, Prado-Olivarez J (2020) A streaming architecture for convolutional neural networks based on layer operations chaining. J Real-Time Image Proc 17(5):1715–1733. https://doi.org/10.1007/s11554-019-009 38-y 25. He J, Li L, Xu J, Zheng C (2018) ReLU deep neural networks and linear finite elements. arXiv: 1807.03973. https://doi.org/10.4208/jcm.1901-m2018-0160 26. Li D, Chen X, Becchi M, Zong Z (2016) Evaluating the energy efficiency of deep convolutional neural networks on CPUs and GPUs. In: 2016 IEEE international conferences on big data and cloud computing (BDCloud), social computing and networking (SocialCom), sustainable computing and communications (SustainCom)(BDCloud-SocialCom-SustainCom). IEEE, pp 477–484. https://doi.org/10.1109/BDCloud-SocialCom-SustainCom.2016.76 27. Samajdar A, Zhu Y, Whatmough P, Mattina M, Krishna T (2018) Scale-sim: systolic CNN accelerator simulator. arXiv:1811.02883. https://doi.org/10.48550/arXiv.1811.02883 28. Cavigelli L, Benini L (2016) Origami: A 803-GOp/s/W convolutional network accelerator. IEEE Trans Circuits Syst Video Technol 27(11):2461–2475. https://doi.org/10.1109/TCSVT. 2016.2592330 29. Shan D, Cong G, Lu W (2020) A CNN accelerator on FPGA with a flexible structure. In: 2020 5th international conference on computational intelligence and applications (ICCIA). https:// doi.org/10.1109/ICCIA49625.2020.00047 30. Wu D, Song J, Zhuang H (2021) A new accelerator for convolutional neural network. In: 2021 40th Chinese control conference (CCC). https://doi.org/10.23919/CCC52363.2021.9549407

Dimensionality Reduction in Handwritten Digit Recognition Mayesha Bintha Mizan , Muhammad Sayyedul Awwab , Anika Tabassum , Kazi Shahriar , Mufti Mahmud , David J. Brown , and Muhammad Arifur Rahman

Abstract For visualization, humans are normally limited to two or three dimensions, while a computing node can extend this significantly. However, any increase in the number of dimensions usually introduces an extra computational burden, and it becomes more challenging to extract the exact information. Therefore, dimensionality reduction methods are an increasingly important area of study to help identify methods to mitigate challenges associated with high-dimensional feature sets. Handwritten digit recognition is one of the most relevant fields of study due to the variety of issues faced, such as the age of texts, the professional context and norms in which the text is written, and individual differences in writing styles. Research on handwritten digit recognition using various algorithms has been conducted in a variety of languages. In the Bangla character set, there are ten digits. Due to the geometry, complicated forms, and similarities between the individual numerals, individual characters are difficult to identify. Also, there are limited open datasets available to researchers for conducting Bangla digit recognition. This work discusses dimensionality reduction techniques applied to the Bangla handwritten digit dataset NumtaDB. Principal Component Analysis (PCA), Neighborhood Component Analysis (NCA), and Linear Discriminant Analysis (LDA) algorithms are examined as feature extraction techniques. The CNN is a deep learning technique that classifies the input automatically; over the years, CNNs have become well established for image classification in computer vision and are now being used in other domains too. The numeric digits are then classified using a CNN utilizing the lower-dimension vectors acquired. These models can recognize most of the digits successfully, with a satisfactory level of performance for identifying different digits. M. B. Mizan (B) · M. S. Awwab · A. Tabassum · K. Shahriar Department of Information and Communication Technology, Bangladesh University of Professionals, Dhaka, Bangladesh e-mail: [email protected] M. Mahmud · D. J. Brown · M. A. Rahman Department of Computer Science, Nottingham Trent University, Nottingham NG11 8NS, UK e-mail: [email protected] M. Mahmud · D. J. Brown CIRC and MTIF, Nottingham Trent University, Nottingham NG11 8NS, UK © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_3


Keywords Dimensionality reduction · Bangla handwritten digit recognition · CNN

1 Introduction Machine learning has been reshaping technological developments over the last couple of decades and playing a crucial role in a diverse range of domains, e.g., healthcare [23, 25, 26, 31], computational biology [3, 34, 38], image processing [12, 16, 45], epidemiological studies [40], language translation [11, 37], text processing [1, 27], and social inclusion [24, 35, 36]. One major challenge common to these application domains is the processing of high-dimensional, high-volume data. High-speed computational nodes and technological developments in parallel computation have flourished; however, for machine learning models applied in daily life, where computational ability is moderate or low, it remains important to reduce the dimensionality of features while keeping performance at a satisfactory level. Humans can often only visualize two or three dimensions, and any rise in the number of dimensions frequently makes visualization more challenging. As a result, machine learning researchers typically employ dimensionality reduction techniques to circumvent the challenges associated with big feature sets. While dimensionality reduction has been used with English datasets, very little work has been carried out with Bangla handwritten digit datasets; however, classification-based work related to Bangla handwritten digit recognition using CNN, Deep CNN, etc., has been carried out. Our aim is to test CNN and dimensionality reduction methods with datasets containing Bangla handwritten digits to investigate comparative performance. This research examines the impact of three pioneering dimensionality reduction techniques, specifically PCA, NCA, and LDA, on ML algorithms. The primary contributions of this study are: (1) making use of all classes of handwritten digits in Bangla; (2) the application of feature extraction dimensionality reduction methods; (3) the use of a CNN model to categorize handwritten Bangla digits.
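As an illustration of the feature-extraction step examined in this work, the three reduction techniques can be applied to flattened digit images with scikit-learn as sketched below; the component counts and names are ours, not the configuration used in the experiments.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NeighborhoodComponentsAnalysis

# X: flattened grayscale digit images (n_samples, n_pixels); y: digit labels 0-9.
def reduce_dimensions(X, y, method='pca', n_components=64):
    """Project images onto a lower-dimensional feature space before classification."""
    if method == 'pca':
        return PCA(n_components=n_components).fit_transform(X)
    if method == 'lda':   # LDA is limited to (n_classes - 1) = 9 components
        return LinearDiscriminantAnalysis(
            n_components=min(n_components, 9)).fit_transform(X, y)
    if method == 'nca':
        return NeighborhoodComponentsAnalysis(
            n_components=n_components).fit_transform(X, y)
    raise ValueError(method)
```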

2 Literature Review There is an existing body of research on Bangla handwritten digit recognition produced over several years, and a substantial body of research on dimensionality reduction. Traditional dimensionality reduction methods were developed based on intuitive criteria such as variance preservation (Principal Component Analysis, PCA) or distance preservation (classical multidimensional scaling, CMDS) [30]. In 2017, UCI lab data was utilized to identify trends using artificial neural networks (ANN), decision trees (DT), support vector machines (SVM), and Naive Bayes techniques [42]. Zhu et al. combine PCA and the K-means algorithm to anticipate the onset of diabetes [44], using the former for dimensionality reduction before using K-means to cluster the features; the combination of these two strategies yields an improved precision rate.
The potential of applying PCA clustering techniques on brain tumor image data was explored by Kaya et al. [14], where an excellent performance rate is achieved by combining PCA and K-means techniques. Hu et al. employ an integrated PCA-SVM technique to improve digital image recognition [20]. Dimensionality reduction is achieved with the PCA-Firefly technique according to an approach developed by Bhattacharya et al. [8]. Bhattacharya et al. also presented a PCA-Firefly-based deep learning system for early detection of diabetic retinopathy [17]: the PCA-Firefly algorithm selects the essential traits, and deep neural networks then categorize the raw diabetic retinopathy data; in comparison with other machine learning algorithms, the proposed approach produced good classification results. Block PCA was improved in 2011 by Zheng et al., guaranteeing that the transform outputs have the most significant variance where demanded [15]. In 2017, Zhou et al. investigated how Deep Belief Network characteristics could be used to solve data processing problems in industrial control systems [39]; their effort was intentionally focused on resolving dimensionality reduction challenges involving missing values. In another study, published in July 2020 by Zhao et al., sparse PCA was improved by making it adaptive, so that a similar sparsity pattern could be obtained across all principal elements [19]. Seghouane et al. proposed a new technique that incorporates the self-paced learning structure into probabilistic principal component analysis to solve the problem of sensitivity to outliers [28]. U. Bhattacharya et al. collaborated in the collection of 12,938 Bangla handwritten digits written by 556 people and created a dataset at the Indian Statistical Institute's CVPR Unit [9]. Basu et al. advanced Bangla handwritten digit recognition using classifier combination in 2005 [6]; for classification, an MLP classifier and DS techniques were used, achieving an accuracy of 95.1%. Pal et al. developed a further model for Bangla handwritten digit recognition in 2006, utilizing the concept of water overflowing from a reservoir and collecting topological and structural characteristics of the digits [29]. Xu et al. (2008) created a hierarchical Bayesian network that uses database photographic images as the network input and categorizes them from the bottom to the top [43]; with a dataset containing 2,000 handwritten input data points, a recognition accuracy of 87.5% was achieved, which is judged to be average. In 2009, Liu et al. explored gradient direction feature extraction with the ISI dataset and achieved 99.4% accuracy [22]. Using a nonlinear SVM classifier (2013), Surinta et al. achieved 96.8% accuracy with a self-constructed dataset [41]. Akhand et al. conducted two separate experiments on Bangla handwritten digit recognition. One of the most widely used methods in digit recognition systems is CNN, and in recent years the majority of contemporary research has been built on various versions of CNN, which offer additional accuracy. In 2015, Akhand et al. used CNN with their own dataset and achieved 97.93% accuracy [32]. Shortly afterwards, these researchers used three different CNN models with the ISI dataset and achieved 98.80% accuracy [2]. The best accuracy was, however, achieved by Rahman et al.
They introduced the VGG-11M CNN architecture and tested it on three separate datasets, achieving 99.80%, 99.66%, and 99.25% accuracy with the ISI, CMATERdb, and NumtaDB datasets, respectively [33]. They utilized more than one hundred and twenty thousand input data points consisting of Bangla numerals.

3 Methodology This section outlines the techniques used in our research, including data preparation using grayscaling, normalization, and dimensionality reduction, and the architecture and workflow of the Convolutional Neural Network (CNN) model used for classification.

3.1 Dataset Preprocessing In the first step of data preprocessing, the NumtaDB dataset was downloaded from Kaggle, and OpenCV was used to read the images and convert the RGB color images to grayscale. Pixel values range from 0 to 255, and feeding such large numeric values directly into a neural network makes computation more difficult, so the values are normalized to fall between 0 and 1. The dataset was then split into train and test subsets: 80% of the data was used for training and 20% for testing the model. Finally, the dimensionality reduction algorithms PCA, LDA, and NCA were applied to the data using the sklearn library with a random seed of None. Figure 1 shows the steps of dataset preprocessing; a minimal code sketch of these steps is given after the figure caption below.

Fig. 1 Data preprocessing steps, starting with data collection and grayscaling, followed by normalization, and ending with the application of the dimensionality reduction algorithms (PCA, NCA, LDA)
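A minimal sketch of this preprocessing pipeline, under assumed conventions, is shown below. The folder name and the CSV label file with its "filename" and "digit" columns are hypothetical placeholders standing in for the NumtaDB layout; OpenCV, NumPy, pandas, and scikit-learn are used as described in the text.

```python
# Sketch of the preprocessing steps: read, grayscale, normalize, split.
import cv2
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

labels = pd.read_csv("training-a.csv")                  # assumed label file
images, targets = [], []
for _, row in labels.iterrows():
    img = cv2.imread("training-a/" + row["filename"])   # color image (BGR in OpenCV)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # grayscaling: one channel
    gray = gray.astype("float32") / 255.0               # normalize pixels to [0, 1]
    images.append(gray.flatten())
    targets.append(row["digit"])

X = np.array(images)
y = np.array(targets)
# 80/20 train-test split, as described in the paper
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```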

Fig. 2 a RGB color images used as raw input and b the same images converted to grayscale

A. Grayscaling: Grayscaling is the process of converting a color picture from a color space such as RGB, CMYK, or HSV to shades of gray. Grayscale images have only one color channel, whereas RGB images have three. Figure 2a displays the original photos and Fig. 2b their grayscale versions. Images in the dataset come in various color schemes: digits appear in white, black, green, blue, orange, and other hues. This is problematic because visually similar digits rendered in different colors have different RGB values, which can lead to classification errors; converting the images to grayscale removes this source of confusion.
B. Normalization: In most image data, pixel values are integers ranging from 0 to 255. In neural networks that work with moderate weight values, large integer inputs may hinder or slow down the learning process, so it makes sense to normalize the pixel values to the range 0 to 1. Images with pixel values between 0 and 1 remain valid and can be displayed correctly. To do this, every pixel value is divided by the largest possible value, 255, regardless of the pixel value range of the image.
C. Dimensionality Reduction: The difficulty of visualizing the training set and then working on it increases with the number of features, and most of these features are often correlated and hence redundant. In this case, using dimensionality reduction techniques to reduce the number of variables in a classification or regression dataset can enhance the fit of a prediction model. Dimensionality reduction is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It is a data preprocessing technique applied before modeling, typically after data cleaning and scaling but before a predictive model is trained. The following two approaches illustrate two ways of applying the dimensionality reduction methodology:
D. Feature Selection: Feature selection is the process of choosing the crucial features of a dataset and eliminating the unneeded ones in order to build an accurate model; in other words, it is a method for determining which features in a dataset are the most important.
E. Feature Extraction: Feature extraction transforms a multidimensional space into a space with fewer dimensions. This approach is helpful when we want to keep all of the information yet process it with fewer variables. Our study focuses on three of these algorithms:
• Principal Component Analysis (PCA)
• Linear Discriminant Analysis (LDA)
• Neighborhood Component Analysis (NCA)

I. Principal Component Analysis (PCA)
Data standardization is the first step in the PCA process. The covariance matrix of the dataset's features is then computed, its eigenvalues and eigenvectors are determined, and the eigenvalues with their corresponding eigenvectors are sorted. A matrix is built from the eigenvectors of the chosen k largest eigenvalues, and the original data matrix is then transformed. To implement PCA, sklearn.decomposition.PCA from the scikit-learn library was used. Because the input shape of the CNN model is (32, 32, 1), the parameter n_components was set to 1,024 so that the processed data could be reshaped to (32, 32, 1). When data is subjected to PCA, most of the variance in the data is accounted for by a collection of attributes (principal components, i.e., directions in the feature space). PCA performs linear dimensionality reduction by projecting the data onto a lower-dimensional space using Singular Value Decomposition; the input data is centered but not scaled for each feature prior to applying the SVD. Depending on the structure of the input data and the number of components to extract, it employs either the LAPACK implementation of the full SVD or a randomized truncated SVD, and it may also use the ARPACK-based truncated SVD from scipy.sparse.linalg. Images in this dataset have a 180 × 180 dimension, or 32,400 components, while the LeNet-5 architecture requires a 32 × 32 input shape; therefore, we decided to use PCA to keep 1,024 of the components.
II. Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis (LDA) seeks the feature directions that best separate the classes. Unlike PCA, LDA is a supervised method that makes use of known class labels. The d-dimensional mean vectors are determined first; the scatter matrices are then computed and their eigenvalues and eigenvectors obtained. The linear discriminants for the new feature subspace are selected, and the instances are projected onto this new subspace [4]. To implement LDA, sklearn.discriminant_analysis.LinearDiscriminantAnalysis from the scikit-learn library was used. This is a classifier with a linear decision boundary that uses Bayes' rule to fit class-conditional densities to the data; the model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix. Using the transform method, the fitted model can reduce the input's dimensionality by projecting it onto the most discriminative directions. In the case of LDA, the number of components was set to 4, since with 10 classes only a small number of components is allowed, and the result was then zero-padded to form a 32 × 32 input shape instead of a 2 × 2 one.
III. Neighborhood Component Analysis (NCA)
Neighborhood Components Analysis (NCA) is a distance metric learning technique that seeks to improve the accuracy of nearest-neighbor classification compared with the standard Euclidean distance. The technique optimizes a stochastic variant of the leave-one-out k-nearest neighbors (KNN) score on the training set. It can likewise learn a low-dimensional linear projection of the data for visualization and fast classification. The input data is projected onto a linear subspace consisting of the directions that optimize the NCA objective [18]. The most suitable dimensionality can be set using the parameter n_components. NCA finds the feature space that gives the highest accuracy for a stochastic nearest-neighbor rule and, like LDA, is a supervised approach. Images in this dataset have a 180 × 180 resolution, or 32,400 components, while the LeNet-5 architecture requires a 32 × 32 input shape, so the number of components in this instance of NCA was fixed at 1,024.
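The following sketch shows how these three feature-extraction settings could be applied with scikit-learn. X_train and y_train are the flattened training images and labels from the preprocessing sketch above; the component counts (1,024 for PCA and NCA, 4 for LDA with zero-padding) follow the text, while everything else is an assumption for illustration.

```python
# Sketch of the three dimensionality reduction settings described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NeighborhoodComponentsAnalysis

# PCA: unsupervised, assumes at least 1,024 training samples are available
pca = PCA(n_components=1024)
X_train_pca = pca.fit_transform(X_train).reshape(-1, 32, 32, 1)

# LDA: supervised, 4 components, then zero-padded to a 32 x 32 input
lda = LinearDiscriminantAnalysis(n_components=4)
X_train_lda4 = lda.fit_transform(X_train, y_train)
padded = np.zeros((X_train_lda4.shape[0], 1024))
padded[:, :4] = X_train_lda4
X_train_lda = padded.reshape(-1, 32, 32, 1)

# NCA: supervised distance metric learning, 1,024 components
nca = NeighborhoodComponentsAnalysis(n_components=1024)
X_train_nca = nca.fit_transform(X_train, y_train).reshape(-1, 32, 32, 1)
```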

3.2 CNN Architecture We utilized LeNet-5 as the CNN architecture for our research. LeNet is a family of Convolutional Neural Networks (CNNs), often recognized as the first practical convolutional neural networks, and these networks are successful at classifying small, grayscale, single-channel pictures. Three distinct networks comprise LeNet:
• LeNet-1, a straightforward five-layer CNN.
• LeNet-4, which has six layers and is an improvement over LeNet-1.
• LeNet-5, the most widely used version, with seven layers and an improvement over LeNet-4.
The most well-known of the three networks is LeNet-5, an improvement over LeNet-4 with seven trainable layers. The network takes a (32 × 32) image tensor as input and has a total of 61,706 trainable parameters. The LeNet-5 architecture is shown in Fig. 3, where C stands for a convolutional layer, S for a subsampling or pooling layer, and F for a fully connected layer. The network accepts a (32 × 32 × 1) image tensor as input. After the first convolution layer this becomes 28 × 28 × 6 and, after the first pooling layer, 14 × 14 × 6. After the second convolution layer it becomes a 10 × 10 × 16 feature map and, after the second pooling layer, 5 × 5 × 16. The output is then flattened; after the first fully connected layer it becomes 120 × 1, after the second fully connected layer 84 × 1, and after the third fully connected layer 10 × 1, which is passed through a softmax activation function whose output contains the predictions of the network.

Fig. 3 LeNet-5 architecture (2 convolutional layers, 2 pooling layers, and 3 fully connected layers)
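A LeNet-5-style model with exactly the layer sizes walked through above can be sketched in Keras as follows. The activation functions, pooling type, and optimizer are assumptions (the paper does not state them), but the layer dimensions match the description, and this variant (full connectivity in the second convolution, parameter-free pooling) has 61,706 trainable parameters, the figure quoted above.

```python
# LeNet-5-style CNN for the ten Bangla digit classes.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(6, kernel_size=5, activation="tanh",
                  input_shape=(32, 32, 1)),              # -> 28 x 28 x 6
    layers.AveragePooling2D(pool_size=2),                # -> 14 x 14 x 6
    layers.Conv2D(16, kernel_size=5, activation="tanh"), # -> 10 x 10 x 16
    layers.AveragePooling2D(pool_size=2),                # -> 5 x 5 x 16
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                # F5: 120 x 1
    layers.Dense(84, activation="tanh"),                 # F6: 84 x 1
    layers.Dense(10, activation="softmax"),              # output: 10 x 1
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```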

4 Dataset This study used the NumtaDB: Bangla Handwritten Digits dataset. The dataset is made up of six separate datasets gathered from diverse sources and at different times; each of them, however, was thoroughly checked against the same criterion, verifying that all digits could be read by a single human with no prior knowledge [7]. The sources are designated from "a" to "f", and various subsets of the training and testing sets exist depending on the data source (training-a, testing-a, etc.). To ensure that handwriting from the same subject does not appear in both sets, all datasets have been separated into training and testing sets. This dataset includes more than 85,000 images of handwritten Bangla digits.

5 Result Analysis This section analyzes the experimental findings obtained with the three algorithms introduced in the previous section; the effectiveness of these three algorithms on a range of datasets is also addressed. A personal computer with 16 GB of RAM and an Intel Core i9 CPU was used to run the experimental model in a Jupyter Notebook.

5.1 Performance Evaluation To validate and test our models, we use the NumtaDB dataset, which contains almost 85,000 Bangla digits. Our CNN model achieved 73.38% accuracy with PCA, 47.25% with LDA, and 86.48% with NCA. The training and validation accuracy and loss versus epoch plots of the models are shown in Figs. 4, 5 and 6:

Fig. 4 CNN model (PCA) training and validation a Accuracy and b Loss

Fig. 5 CNN model (LDA) training and validation a Accuracy and b Loss

Fig. 6 CNN model (NCA) training and validation a Accuracy and b Loss

Fig. 7 Confusion matrix for PCA, LDA and NCA using CNN

The plots of training and validation accuracy and loss show that the CNN model (PCA) and the CNN model (NCA) perform better than the CNN model (LDA).
I. Confusion matrix
We use the confusion_matrix function of the tensorflow.math library, which creates a confusion matrix: a matrix in which the rows correspond to the real labels and the columns correspond to the predicted labels. The algorithms' confusion matrices are shown in Fig. 7.
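As an illustration, the confusion matrix for one of the models could be built as follows; model, X_test_pca, and y_test are assumed to be the trained CNN and the held-out PCA-reduced test data and labels from the earlier sketches.

```python
# Confusion matrix with tf.math.confusion_matrix (rows: true, columns: predicted).
import numpy as np
import tensorflow as tf

y_pred = np.argmax(model.predict(X_test_pca), axis=1)   # predicted digit per image
cm = tf.math.confusion_matrix(labels=y_test, predictions=y_pred, num_classes=10)
print(cm.numpy())
```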

5.2 Result Analysis with Other Datasets and Existing Work We validated the consistency of our models using a variety of publicly available Bangla handwritten digit datasets. The models were tested with the Ekush dataset [13] and the BanglaLekha-Isolated dataset [5], both of which include ten digit classes; the Ekush collection contains approximately 30,700 images. In Table 1, we compare our models with existing models, and Fig. 8 compares the performance of our models on the Ekush and BanglaLekha-Isolated datasets in terms of accuracy. Given that NumtaDB is a dataset with a large number of images, applying dimensionality reduction together with CNN showed that, even though dimensionality reduction decreased the number of features in the data, the accuracy of the CNN model (NCA) and the CNN model (PCA) did not degrade significantly. CNN performs better with a large number of features, but training on data with a large number of features is computationally expensive and time-consuming. Consequently, reducing the number of features through dimensionality reduction means that classification algorithms such as CNN, SVM, and KNN take less time to train on a large amount of data, so more data can be trained in a fixed duration. Because there were fewer features, the accuracy of our models was slightly lower than that of other existing models, but it was still very close when NCA was used as the dimensionality reduction strategy. Figure 9 shows a comparison between our models and other models.

Table 1 Comparative analysis of existing models with our models

Author | Year | Method | Dataset | Accuracy
Bhattacharya et al. [9] | 2005 | MQDF, MLP | Self-constructed dataset (12,938 images) | 95.80%
Basu et al. [6] | 2005 | MLP classifier and DS technique | ISI (6,000 samples), CMATERdb | ISI: 95.1%, CMATERdb: 96.67%
Pal et al. [29] | 2006 | Concept of water overflow from the reservoir | Self-constructed dataset (12,000 samples) | 92.8%
Xu et al. [43] | 2008 | A hierarchical Bayesian network | Self-constructed dataset (2,000 samples) | 87.5%
Das et al. [10] | 2012 | Multilayer Perceptron (MLP) | Self-constructed dataset | 99.45%
Surinta et al. [41] | 2013 | Pixel-based methods, SVM | Self-constructed dataset (10,920 samples) | 96.8%
Khan et al. [21] | 2014 | A sparse representation classifier | CMATERdb 3.1.1 | 94%
Akhand et al. [2] | 2016 | Three different CNNs | ISI | 98.80%
Our model | 2022 | CNN model (PCA) | NumtaDB dataset (19,700 samples) | 73.38%
Our model | 2022 | CNN model (LDA) | NumtaDB dataset (19,700 samples) | 47.25%
Our model | 2022 | CNN model (NCA) | NumtaDB dataset (19,700 samples) | 86.48%

Fig. 8 Comparison of dimensionality reduction algorithms on the NumtaDB, Ekush, and BanglaLekha-Isolated datasets

Fig. 9 Comparison chart of our models with previously studied models

6 Conclusion In this study, CNN was used with several dimensionality reduction methods to classify Bangla handwritten digits. We also applied our models to two other datasets and compared them to gain a thorough understanding of their relative performance. Our models performed well; however, due to structural similarities, they wrongly classified certain digits. We found during our research that we needed to limit the number of features for LDA to just 4, and therefore did not achieve a favorable outcome from CNN in that case. We were unable to run all of the NumtaDB data through our models due to computers with limited configuration: out of a total of 85,000+ images, we used only 19,703. Images in this dataset have a 180 × 180-pixel size, or 32,400 components, while the required CNN input shape for the LeNet-5 architecture is 32 × 32. As a result, we decided to use PCA and NCA to retain 1,024 of the components; for LDA, the number of features was set to 4, because only 10 classes were available, and the result was then zero-padded to produce a 32 × 32 input instead of 2 × 2. CNN is used to train on this reduced dataset. The findings show that our model's performance using NCA is superior to that of PCA and LDA. In the future, the models can be applied to all the images of NumtaDB, and other dimensionality reduction methods as well as other classification algorithms or CNN topologies will be utilized for feature extraction and classification to improve performance.

References 1. Adiba FI, Islam T, Kaiser MS, Mahmud M, Rahman MA (2020) Effect of corpora on classification of fake news using Naive Bayes classifier. Int J Autom Artif Intell Mach Learn 1(1):80–92. https://researchlakejournals.com/index.php/AAIML/article/view/45, number: 1 2. Akhand MAH, Ahmed M, Rahman MH (2016) Multiple convolutional neural network training for bangla handwritten numeral recognition. In: 2016 international conference on computer and communication engineering (ICCCE), pp 311–315 3. Angermueller C, Pärnamaa T, Parts L, Stegle O (2016) Deep learning for computational biology. Mol Syst Biol 12(7):878 4. Avella JCG (2021) Using linear discriminant analysis (lda) for data explore: Step by step. https:// apsl.tech/en/blog/using-linear-discriminant-analysis-lda-data-explore-step-step/. Aaccessed 18.08.2021 5. BanglaLekha-Isolated: Banglalekha-isolated-numerals | kaggle. https://www.kaggle.com/ ipythonx/banglalekhaisolatednumerals (accessed: 26.11.2021) 6. Basu S, Sarkar R, Das N, Kundu M, Nasipuri M, Basu DK (2006) Handwritten bangla digit recognition using classifier combination through ds technique. In: International conference on pattern recognition and machine intelligence. Springer, pp 236–241 7. Bengali.AI: Numtadb: Bengali handwritten digits | kaggle. https://www.kaggle.com/ BengaliAI/numta. Accessed from 18 Aug 2021 8. Bhattacharya S, Somayaji S, Reddy P, Kaluri R, Singh S, Gadekallu T, Alazab M, Tariq U (2020) A novel pca-firefly based xgboost classification model for intrusion detection in networks using GPU. Electronics 9:219. https://doi.org/10.3390/electronics9020219

9. Bhattacharya U, Chaudhuri BB (2005) Databases for research on recognition of handwritten characters of Indian scripts, pp 789–793. https://doi.org/10.1109/ICDAR.2005.84 10. Das N, Sarkar R, Basu S, Kundu M, Nasipuri M, Basu DK (2012) A genetic algorithm based region sampling for selection of local features in handwritten digit recognition application. Applied soft computing. https://doi.org/10.1016/j.asoc.2011.11.030 11. Das S, Yasmin MR, Arefin M, Taher KA, Uddin MN, Rahman MA (2021) Mixed BanglaEnglish spoken digit classification using convolutional neural network. In: Mahmud M, Kaiser MS, Kasabov N, Iftekharuddin K, Zhong N (eds) Applied intelligence and informatics. Communications in computer and information science. Springer International Publishing, Cham, pp 371–383 . https://doi.org/10.1007/978-3-030-82269-9_29 12. Das TR, Hasan S, Sarwar SM, Das JK, Rahman MA (2021) Facial spoof detection using support vector machine. In: Kaiser MS, Bandyopadhyay A, Mahmud M, Ray K (eds) Proceedings of TCCE. Advances in intelligent systems and computing. Springer, Singapore, pp 615–625. https://doi.org/10.1007/978-981-33-4673-4_50 13. Ekush (2021) Ekush: Bangla handwritten data - numerals | kaggle. https://www.kaggle.com/ ipythonx/ekush-bangla-handwritten-data-numerals. Accessed from 26 Nov 2021 14. Ersöz Kaya I, Çakmak Pehlivanl A, Sekizkarde¸s E, Ibrikci T (2017) Pca based clustering for brain tumor segmentation of t1w mri images. Comput Methods Prog Biomed. https://doi.org/ 10.1016/j.cmpb.2016.11.011 15. Feng CM, Gao YL, Liu JX, Zheng CH, Li SJ, Wang D (2016) A simple review of sparse principal components analysis, vol 9772, pp 374–383. https://doi.org/10.1007/978-3-319-42294-7_33 16. Ferdous H, Siraj T, Setu SJ, Anwar MM, Rahman MA (2021) Machine Learning Approach Towards Satellite Image Classification. In: Kaiser MS, Bandyopadhyay A, Mahmud M, Ray K (eds) Proceedings of TCCE. Advances in intelligent systems and computing. Springer, Singapore, pp 627–637. https://doi.org/10.1007/978-981-33-4673-4_51 17. Gadekallu TR, Khare N, Bhattacharya S, Singh S, Maddikunta PKR, Ra IH, Alazab M (2020) Early detection of diabetic retinopathy using pca-firefly based deep learning model. Electronics 9(2). https://doi.org/10.3390/electronics9020274, https://www.mdpi.com/2079-9292/9/2/274 18. Goldberger J, Hinton GE, Roweis S, Salakhutdinov RR (2004) Neighbourhood components analysis. In: Saul L, Weiss Y, Bottou L (eds) Advances in neural information processing systems, vol. 17. MIT Press. https://proceedings.neurips.cc/paper/2004/file/ 42fe880812925e520249e808937738d2-Paper.pdf 19. Han X, Peng J, Cui A, Zhao F (2020) Sparse principal component analysis via fractional function regularity. Math Prob Eng 2020:1–10. https://doi.org/10.1155/2020/7874140 20. Hu L, Cui J (2019) Digital image recognition based on fractional-order-pca-svm coupling algorithm. Measurement 145. https://doi.org/10.1016/j.measurement.2019.02.006 21. Khan HA, Helal A, Ahmed K (2014) Handwritten bangla digit recognition using sparse representation classifier. https://doi.org/10.1109/ICIEV.2014.6850817 22. Liu CL, Suen CY (2009) A new benchmark on the recognition of handwritten bangla and farsi numeral characters. Pattern Recognit 42(12):3287–3295. https://doi.org/10.1016/j. patcog.2008.10.007, https://www.sciencedirect.com/science/article/pii/S0031320308004457, new Frontiers in Handwriting Recognition 23. 
Mahmud M, Kaiser MS, Rahman MM, Rahman MA, Shabut A, Al-Mamun S, Hussain A (2018) A brain-inspired trust management model to assure security in a cloud based IoT framework for neuroscience applications. Cognit Comput 10(5):864–873 24. Mahmud M, Kaiser MS, Rahman MA (2022) Towards Explainable and Privacy-Preserving Artificial Intelligence for Personalisation in Autism Spectrum Disorder. In: Antona M, Stephanidis C (eds) Universal access in human-computer interaction. User and context diversity. Lecture notes in computer science. Springer International Publishing, Cham, pp 356–370. https://doi.org/10.1007/978-3-031-05039-8_26 25. Nasrin F, Ahmed NI, Rahman MA (2021) Auditory attention state decoding for the quiet and hypothetical environment: a comparison between bLSTM and SVM. In: Kaiser MS, Bandyopadhyay A, Mahmud M, Ray K (eds) Proceedings of TCCE. Advances in intelligent systems and computing. Springer, Singapore, pp 291–301. https://doi.org/10.1007/978-981-33-46734_23

26. Natarajan P, Frenzel JC, Smaltz DH (2017) Demystifying big data and machine learning for healthcare. CRC Press 27. Nawar A, Toma NT, Al Mamun S, Kaiser MS, Mahmud M, Rahman MA (2021) Cross-content recommendation between movie and book using machine learning. In: 2021 IEEE 15th international conference on application of information and communication technologies (AICT), pp 1–6. https://doi.org/10.1109/AICT52784.2021.9620432 28. Ogbuanya CE (2021) Improved dimensionality reduction of various datasets using novel multiplicative factoring principal component analysis (mpca) abs/2009.12179 29. Pal U, Chaudhuri BB, Belaid A (2006) A complete system for bangla handwritten numeral recognition. IETE J Res 52(1):27–34 30. Peluffo D, Lee J, Verleysen M (2014) Recent methods for dimensionality reduction: a brief comparative analysis 31. Rahman MA, Brown DJ, Mahmud M, Shopland N, Haym N, Sumich A, Turabee ZB, Standen B, Downes D, Xing Y et al (2022) Biofeedback towards machine learning driven self-guided virtual reality exposure therapy based on arousal state detection from multimodal data 32. Rahman MM, Akhand MAH, Islam S, Shill PC, Rahman MMH (2015) Bangla handwritten character recognition using convolutional neural network. MECS. https://doi.org/10.5815/ ijigsp.2015.08.05 33. Rahman MM, Islam MS, Sassi R, Aktaruzzaman M (2019) Convolutional neural networks performance comparison for handwritten bengali numerals recognition. SN applied sciences. https://doi.org/10.1007/s42452-019-1682-y 34. Rahman MA (2018) Gaussian process in computational biology: covariance functions for transcriptomics. PhD University of Sheffield. https://etheses.whiterose.ac.uk/19460/ 35. Rahman MA, Brown DJ, Shopland N, Burton A, Mahmud M (2022) Explainable multimodal machine learning for engagement analysis by continuous performance test. In: Antona M, Stephanidis C (eds) Universal access in human-computer interaction. User and context diversity. Lecture notes in computer science. Springer International Publishing, Cham, pp 386–399. https://doi.org/10.1007/978-3-031-05039-8_28 36. Rahman MA, Brown DJ, Shopland N, Harris MC, Turabee ZB, Heym N, Sumich A, Standen B, Downes D, Xing Y, Thomas C, Haddick S, Premkumar P, Nastase S, Burton A, Lewis J, Mahmud M (2022) Towards machine learning driven self-guided virtual reality exposure therapy based on arousal state detection from multimodal data. In: Mahmud M, He J, Vassanelli S, van Zundert A, Zhong N (eds) Brain informatics. Springer International Publishing, Cham, pp 195–209 37. Rahman MA, Scurtu V (2008) Performance maximization for question classification by subset tree kernel using support vector machines. In: 2008 11th international conference on computer and information technology, pp 230–235. https://doi.org/10.1109/ICCITECHN.2008.4802979 38. Rakib AB, Rumky EA, Ashraf AJ, Hillas MM, Rahman MA (2021) Mental healthcare chatbot using sequence-to-sequence learning and bilstm. In: Mahmud M, Kaiser MS, Vassanelli S, Dai Q, Zhong N (eds) Brain informatics. Springer International Publishing, Cham, pp 378–387 39. Ruangkanokmas P, Achalakul T, Akkarajitsakul K (2016) Deep belief networks with feature selection for sentiment classification, pp 9–14. https://doi.org/10.1109/ISMS.2016.9 40. Sadik R, Reza ML, Al Noman A, Al Mamun S, Kaiser MS, Rahman MA (2020) Covid-19 pandemic: a comparative prediction using machine learning. Int J Autom Artif Intell Mach Learn 1(1):1–16 41. 
Surinta O, Schomaker L, Wiering M (2013) A comparison of feature and pixel-based methods for recognizing handwritten bangla digits. IEEE 42. Untoro M, Praseptiawan M, Widianingsih M, Ashari I, Afriansyah A (2020) Oktafianto: evaluation of decision tree, k-nn, naive bayes and SVM with mwmote on UCI dataset. J Phys: Conf Ser 1477:032005. https://doi.org/10.1088/1742-6596/1477/3/032005 43. Xu JW, Xu J, Lu Y (2008) Handwritten bangla digit recognition using hierarchical bayesian network. IEEE 44. Zhu C, Idemudia CU, Feng W (2019) Improved logistic regression model for diabetes prediction by integrating PCA and k-means techniques. Informatics in Medicine Unlocked

45. Zuo C, Qian J, Feng S, Yin W, Li Y, Fan P, Han J, Qian K, Chen Q (2022) Deep learning in optical metrology: a review. Light: Sci Appl 11(1):1–54

Obtaining Fractal Dimension for Gene Expression Time Series Using an Artificial Neural Network Marco Antonio Esperón Pintos, Jorge Velázquez Castro, and Benito de Celis Alonso

Abstract In this work, a stochastic dynamic model of a minimal gene regulatory network is used to simulate the characteristic dynamics of protein concentration. By changing the model's parameters [11], it is possible to simulate different cellular conditions, which are used to train artificial neural networks that recognize the cellular state of disease or health from the information in the protein concentration series [4]. In particular, the Hurst exponent and the fractal dimension of the signal are analyzed. The Hurst exponent is relevant in diagnosis because it determines the autocorrelation of the time series and allows different states of cellular health to be labeled. Some studies have shown that bacteria such as E. coli and fungi such as S. cerevisiae exhibit a healthy cellular state when the time series of transcription factors have a Hurst exponent greater than 0.5 [3]. This research evaluates the efficiency and feasibility of using an artificial neural network to diagnose cellular states by means of the dynamics of protein concentrations.

Keywords Gene expression · Stochastic processes · Artificial neural networks · Complex systems

The author MAEP was supported by the National Council of Science and Technology (CONACYT) to carry out this work, through a scholarship for postgraduate studies.

M. A. E. Pintos (B)
Programa de Maestría en Ciencias Física Aplicada, Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla, 72001 Puebla, Mexico
e-mail: [email protected]

J. V. Castro · B. de Celis Alonso
Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla, 72001 Puebla, Mexico
e-mail: [email protected]

B. de Celis Alonso
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_4

1 Introduction Cells are part of a complex system that is affected by physical parameters such as temperature and osmotic pressure, and they also sense nutrients and harmful chemicals. For example, when cells sense sugar, they produce proteins that transport it into the cell, and when these proteins are damaged, the cell produces repair proteins. Thus, the cell continuously monitors its environment and calculates the required amount of each type of protein [1]. This information-processing function, which determines each protein's production rate, is carried out by transcription networks. It is of utmost importance to understand these dynamics, as they indicate the state of cellular health and the correct functioning of the cells. The cell uses proteins called transcription factors (TFs) to obtain information from the environment. Transcription factors [1, 3] are designed to transit rapidly between active and inactive molecular states, at a rate modulated by a specific environmental signal, and each active transcription factor can bind to DNA to regulate the rate at which particular genes are read. The translation and production of proteins is known as gene expression, i.e., the process of protein synthesis triggered by the activation of a specific gene (see Fig. 1). We can conclude that gene expression is the process by which genotype information gives rise to the phenotype (observable characteristics). Therefore, in order to understand gene expression dynamics, it is necessary to investigate the statistical properties of experimentally measured data (time series) (see Fig. 2). Having efficient methods for analyzing signals (generated as time series of protein concentrations) [2] is of great utility for diagnosing certain diseases, such as cancer, which are expressed at the cellular level. According to several investigations of gene expression [2], the time series of protein concentrations show fractal behaviour, i.e., they are invariant to changes in scale. These time series have particular statistical properties such as the Hurst exponent and the fractal dimension [8]. The Hurst exponent tells us the level of autocorrelation of the time series, and it also indicates the state of cellular health: several investigations [3] have shown that healthy cells show a Hurst exponent greater than 0.5 in their gene expression time series, i.e., the time series in healthy cells are more autocorrelated than those in diseased cells. In this work, a stochastic dynamic model of a minimal gene regulatory network will be emulated computationally to simulate the characteristic dynamics of protein concentration. By changing the model parameters, it will be possible to simulate different cellular conditions that will subsequently be useful for training an artificial neural network that recognizes the properties of the series in the corresponding cellular state. In particular, the fractal dimension of the signal will be analyzed. This research will allow us to evaluate the efficiency and feasibility of using an artificial neural network that diagnoses cellular states by means of the dynamics of protein concentrations.

Fig. 1 Gene expression regulatory network [5]

Fig. 2 Time series of an E. coli TF

2 Materials and Methods According to [3], the four most important features of variation in gene expression dynamics include:
• The inherent stochasticity of biochemical processes that depend on large numbers of molecules.
• The differences in the internal states of cells.
• Subtle environmental differences.
• Genetic mutations.
Because stochastic processes are essential for the analysis and understanding of gene expression dynamics, this topic is addressed in the following section.

2.1 Protein Concentration Time Series According to [2], a time series is a set of data measured at certain points in time and ordered chronologically. From a probabilistic point of view, a time series is a succession of random variables indexed by an increasing time parameter. In this paper we consider protein concentration time series as the output of biological systems that are affected by random quantities and cannot be explained by deterministic mathematical formalisms; they can therefore be represented mathematically as stochastic processes and described as a family of functions (see Fig. 2), where each function represents an experimental realization. Thus a stochastic process ξ(t) can be written as

\xi(t) = [\xi_1(t), \xi_2(t), \ldots, \xi_n(t)] \qquad (1)

To define a stochastic process it is necessary to assign a probability distribution or probability density to each realization of the ensemble under study; each probability function tells us about the occurrence of the different realizations. Let us denote these probability distribution functions as

F(x; t) = [F_1, F_2, \ldots, F_n] = P[\xi(t) \le x] \qquad (2)

where P[ξ(t) ≤ x] is the probability that the process is equal to or less than a given point x; consequently F(∞; t) = 1. From the probability distribution function we can obtain a probability density function (PDF) at time t as

f(x; t) = \frac{\partial F(x; t)}{\partial x} \qquad (3)

According to [3], time series of gene expression have a fractal character. We can say that ξ(t) is a conventional time series if ξ(t) maps R¹ to R. On the other hand, ξ(t) is a fractal time series if it can be considered as a function whose image lies in R^(1+d) for 0 < d < 1; however, only the integer part belonging to R¹ is visible in any graph of ξ ∈ R^(1+d), and it is the fractional part of ξ(t) that distinguishes it from a conventional series with respect to its Probability Distribution Function (PDF) and Autocorrelation Function (ACF) [6]. The autocorrelation function allows us to assign a numerical value to the dependence or relationship between one point of the process and another at a later time; that is, this quantity captures deterministic information that is hidden in noisy signals. We denote it as

B(t_1, t_2) = \langle \xi(t_1)\,\xi(t_2) \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 \, f(x_1, x_2; t_1, t_2) \, dx_1 \, dx_2 \qquad (4)

where the brackets enclosing the product of the stochastic process at different times represent an average. The average of a stochastic process is defined as

\mu(t) = \langle \xi(t) \rangle = \int_{-\infty}^{\infty} x \, f(x; t) \, dx \qquad (5)

In this article we are interested in knowing how a stochastic process (time series) is synthetically generated. A stationary conventional time series y(t) can be taken as the solution of a stochastic differential equation under a stationary white noise excitation w(t),

\sum_{i=0}^{p} a_i \, \frac{d^{\,p-i} y(t)}{dt^{\,p-i}} = w(t) \qquad (6)

Let us denote by g(t) the Green's function (impulse response) of the linear stochastic differential equation and write y(t) as

y(t) = \int_{0}^{t} g(t-\tau) \, w(\tau) \, d\tau = g * w \qquad (7)

The convolution of g and w is denoted by g * w, which, as mentioned, is a solution of the stochastic differential equation; g(t) can take different forms depending on the signal, that is, on the model to be analyzed. Similarly, a fractal time series is a solution of a fractional stochastic differential equation. Let ν > 0; then for t > 0 we denote by D_t^{-ν} the Riemann-Liouville integral operator of order ν, defined as

D_t^{-\nu} f(t) = \frac{1}{\Gamma(\nu)} \int_{0}^{t} (t-u)^{\nu-1} f(u) \, du \qquad (8)

where Γ is the Gamma function and D_t^{-ν} f(t) is a solution of a fractional stochastic differential equation. To simplify the notation, let us write the fractional differential equation as

\sum_{i=0}^{p} a_{p-i} \, D^{\nu_i} f(t) = w(t) \qquad (9)

with the a_i constants and ν_p, ν_(p-1), ..., ν_0 a decreasing sequence of positive numbers. A fractal time series resulting from fractional Brownian motion (fBm) can emulate a minimal stochastic model of protein concentration in cells [7]. Substituting ν = H + 0.5 into Eq. (8), with 0 < H < 1 where H is the Hurst parameter, fBm is defined using the Riemann-Liouville integral operator as

B_H(t) = {}_{0}D_t^{-(H+\frac{1}{2})} B'(t) = \frac{1}{\Gamma(H+\frac{1}{2})} \int_{0}^{t} (t-u)^{H-\frac{1}{2}} \, dB(u) \qquad (10)

The Hurst exponent is a parameter bounded between 0 and 1, and its value gives us an idea of the correlation that each time series has.
• H = 0.5 (white noise): a completely random and independent process, with no correlation between signal increments.
• 0.5 < H ≤ 1 (black noise): time series showing persistent or correlated processes (one period of growth is followed by another analogous period), with a "smooth" appearance.
• 0 ≤ H < 0.5 (pink noise): corresponds to anti-persistent or anti-correlated behavior in the time series (one period of growth is followed by a period of decline), characterized by a higher high-frequency content.
We can express the fractal dimension as

D = 2 - H \qquad (11)

The further the Hurst exponent exceeds 0.5, the more self-similar the series is (see Fig. 3); the self-similarity of the stationary process is a concept closely related to fractal time series [9].

Fig. 3 Time series of fractional Brownian motion with different Hurst exponents

One of the tools used to synthetically generate the data (time series) of the minimal gene expression model is the "fbm" library, which is indexed in the Python PyPI repository; its "fbm" method allows time series with different Hurst exponents to be generated. When generating the time series, an algorithm that randomly selects Hurst exponents without repetition was implemented to create a database that simulates different cell states, i.e., Hurst exponents greater than 0.5 for healthy cells and less than 0.5 for diseased cells. The values of each time series were sorted into lists and labeled with their respective Hurst parameters in order to subsequently train three neural networks with different configurations. Four lists of time series were created; each list contains 1000 series, and each series represents an experimental realization consisting of 500 points. Of the four lists, one contains the time series, the next contains the series in a frequency space [10], and in the remaining two lists the data were normalized in the time and frequency domains, respectively. It is important to note that each data set is independent of the others, i.e., each of the four data sets was randomly selected at a different time. Once the labeled data were obtained, we proceeded to implement a dense neural network, a convolutional neural network (CNN), and an LSTM using machine learning. Each of the three neural networks was tested with the four lists mentioned above, giving 12 different configurations. The programming of the algorithms was performed with the help of Keras version 2.8.0, a programming interface that runs on top of TensorFlow and is focused on machine learning and deep learning, facilitating the programming of artificial neural networks. The steps carried out when training each of the neural networks were as follows (a code sketch of the data generation is given after this list):
• Separate test and training data. Taking into account the overall amount of data, the separation was 10% and 90%, respectively.
• Train the model and perform tests with different numbers of training epochs.
• Evaluate the error between the test data and the data predicted by the model.
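A minimal sketch of the data-generation step is shown below, using the FBM class of the PyPI "fbm" package. The sampling range for the Hurst exponents and the simple uniform sampling (rather than sampling without repetition) are assumptions made for illustration.

```python
# Generate 1000 fBm realizations of 500 points each, labeled by Hurst exponent.
import numpy as np
from fbm import FBM   # PyPI "fbm" package

rng = np.random.default_rng()
series_list, hurst_labels = [], []
for _ in range(1000):
    h = rng.uniform(0.05, 0.95)                       # assumed range for H
    f = FBM(n=499, hurst=h, length=1, method="daviesharte")
    series_list.append(f.fbm())                       # realization of 500 points
    hurst_labels.append(h)

X = np.array(series_list)                             # time-domain list
y = np.array(hurst_labels)                            # Hurst-exponent labels
X_freq = np.abs(np.fft.rfft(X, axis=1))               # frequency-domain list
X_norm = (X - X.min(axis=1, keepdims=True)) / np.ptp(X, axis=1, keepdims=True)
```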

3 Results The dense neural network and the LSTM that were used have 8 neurons in the input layer, 8 neurons in the intermediate layer, and one neuron in the output layer. For the convolutional neural network (CNN), 4 one-dimensional filters (windows) of 10 inputs were used, which are subsequently linked to a dense network of 8 neurons in the intermediate layer and 1 neuron in the output layer.

Table 1 Dense and LSTM neural network configuration, respectively

Class | Input layer neurons | First layer neurons | Activation function | Output layer neurons
Dense | 8 | 8 | ReLU | 1
LSTM | 8 | 8 | ReLU | 1

Table 2 Convolutional Neural Network Configuration

Class | Filters or windows | Filters input | Activation function | Linked neural networks | First layer neurons | Output layer neurons | First layer activation function | Output layer activation function
CNN | 4 | 10 | ReLU | Dense | 8 | 1 | ReLU | Sigmoid

Table 3 Cost function results for the neural networks used after training with a certain number of epochs. The last column specifies the domain of the time series

Class | Cost function (MAE) | Epochs | Time series domain
Dense | 0.2 | 100 | Time
Dense | 0.13 | 100 | Frequency
LSTM | 0.061 | 20 | Time
CNN | 0.03 | 15 | Time
CNN | 0.35 | 15 | Frequency
CNN | 0.073 | 200 | Time
CNN | 0.38 | 200 | Frequency

The configuration of the three neural networks used is shown in Tables 1 and 2. The number of neurons assigned in the algorithms was determined after performing several simulations: it was observed that for configurations with fewer, and also with more, than eight neurons the "cost function versus epochs" plots were similar to those obtained with eight neurons. These plots were also the criterion for deciding whether or not it was necessary to train the networks with more epochs; no further epochs were assigned once the "cost function versus epochs" plots remained constant. The cost function used was the mean absolute error (MAE), and the results obtained are shown in Table 3 and Figs. 4, 5 and 6.
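A sketch of the CNN configuration of Table 2 in Keras is given below, using the X and y arrays assumed in the earlier data-generation sketch. The Flatten layer, the Adam optimizer, and the use of validation_split in place of a separate 10% test set are assumptions; the filter count, window size, dense layers, sigmoid output, and MAE cost function follow the tables.

```python
# CNN regressor for the Hurst exponent, per Table 2: 4 filters of width 10,
# a dense layer of 8 neurons, and a single sigmoid output trained with MAE.
from tensorflow import keras
from tensorflow.keras import layers

cnn = keras.Sequential([
    layers.Conv1D(filters=4, kernel_size=10, activation="relu",
                  input_shape=(500, 1)),     # 500-point realizations
    layers.Flatten(),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # H is bounded between 0 and 1
])
cnn.compile(optimizer="adam", loss="mae")
# 15 epochs, as reported for the time-domain configuration in Table 3
history = cnn.fit(X[..., None], y, validation_split=0.1, epochs=15)
```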

Fig. 4 Plots obtained for a Dense neural network

Fig. 5 Plots obtained for a LSTM neural network

4 Conclusions It can be concluded that artificial neural networks are efficient in analyzing time series of gene expression and predicting the Hurst exponent. Considering the mean absolute error of each one, it can be observed that the most efficient is the convolutional neural network (CNN) when trained with the time series in the time domain (see Fig. 6).

Fig. 6 Plots obtained for a Convolutional neural network

Therefore, it appears to be the best candidate for future predictions, for which more than one intermediate layer could be used (deep learning); this can improve the precision in situations where measurement noise is a problem. The convolutional network used, despite its simplicity, is able to determine the Hurst exponent and the fractal dimension of the gene expression time series. This opens the possibility of fast cellular diagnosis for the early detection of diseases, such as atherosclerosis and cancer, that are expressed at the cellular level.

References 1. Alon U (2019) An introduction to systems biology: design principles of biological circuits, 2nd edn. CRC Press, Boca Raton, Fla 2. Falk M (2006) A first course on time series analysis. University of Wurzburg 3. Ghorbani M, Jonckheere EA, Bogdan P (2018) Gene expression is not random: scaling, long-range cross-dependence, and fractal characteristics of gene regulatory networks. Front Physiol 9:1446 4. Kirichenko L, Bulakh V, Radivilova T (2020) Machine learning classification of multifractional brownian motion realizations. In: CMIS 5. Latchman DS (1996) Inhibitory transcription factors. Int J Biochem Cell Biol 28(9):965–974 6. Li M (2010) Fractal time series-a tutorial review. Math Prob Eng 2010:1–26 7. Paxson V (1997) Fast, approximate synthesis of fractional Gaussian noise for generating self-similar network traffic. ACM SIGCOMM Comput Commun Rev 27(5):5–18

8. Plazas Nossa L, Ávila Angulo MA, Moncada Méndez G (2014) Estimación del exponente de Hurst y dimensión fractal para el análisis de series de tiempo de absorbancia UV-VIS. Ciencia e Ingeniería Neogranadina 24(2):133 9. Røine TB, Holter EK (2018) Properties of the gold price: an investigation using fractional Brownian motion and supervised machine learning techniques. Master's thesis 10. Åström KJ, Murray RM (2008) Feedback systems: an introduction for scientists and engineers. Princeton University Press, Princeton 11. Xiao W, Zhang W, Xu W (2011) Parameter estimation for fractional Ornstein-Uhlenbeck processes at discrete observation. Appl Math Model 35(9):4196–4207

Grouping by Mixture of Normals for Breast Cancer in Two Groups, Benign and Malignant Gerardo Martínez Guzmán, María Beatriz Bernábe Loranca, Rubén Martínez Mancilla, Carmen Cerón Garnica, and Gerardo Villegas Cerón

Abstract The diagnosis of cancer cells from biopsies is mainly based on the analysis of morphological changes of the nuclear structure, such as an increase in nuclear size, which probably occurs due to deregulation of the cell cycle as well as of cell growth. An increase in nuclear size is observed in biopsies of patients with both benign and malignant diagnoses. The radius_mean variable (the mean of the distances from the center to points on the perimeter), related to the increase in nuclear size in patients with benign and malignant diagnoses, is studied in this work. An analysis of this variable with the unsupervised learning algorithm Expectation–Maximization (EM) shows that it behaves as a mixture of normals with two components. The algorithm is able to discriminate the data into two groups (malignant and benign); the model shows 97.8% agreement for benign cases and 66.5% for malignant cases. Keywords Estimators · Standard error · Breast cancer · Mixture of normals · Likelihood function

G. M. Guzmán · M. B. B. Loranca · R. M. Mancilla · C. C. Garnica (B) · G. V. Cerón
Benemérita Universidad Autónoma de Puebla, Av. San Claudio y 14, CU, Col. San Manuel, 72570 Puebla, Mexico
e-mail: [email protected]

G. M. Guzmán
e-mail: [email protected]

M. B. B. Loranca
e-mail: [email protected]

R. M. Mancilla
e-mail: [email protected]

G. V. Cerón
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_5

1 Introduction Breast cancer is the cancer that most affects women in terms of incidence, accounting for between 27 and 30% of diagnoses, and it ranks second in the number of deaths, behind lung cancer. In the United States of America, 268,670 breast cancer cases were detected and 41,400 deaths were due to breast cancer in 2018; a decade earlier, 194,280 cases were detected and 40,610 deaths were due to breast cancer, an increase of 38.29% in cases but only 1.95% in deaths, reflecting the effort made in recent years to detect breast cancer early [1]. The prognosis of survival is better when breast cancer is detected early and has not yet metastasized. In 2018, it was estimated that the majority of cases (62%) detected before metastasis had a 5-year survival of 99%, whereas those that had already metastasized had 90%, and 83% at 10 years. There are cellular changes that reflect the onset or progression of breast cancer. In this case, the focus was the cell nucleus, an organelle measuring between 10 and 20 μm. The size of the nucleus directly impacts migration, and different nuclear proteins play an important role in the control and regulation of nuclear size and structure in cancer. Changes in the nuclear structure affect metastasis by mediating cell migration; if the regulation channels or nuclear proteins, such as the protein Emerin, are diminished, this results in a decrease in the size of the nucleus that leads to deregulation and therefore to cancer [2]. Finite mixture distributions have been employed for modeling heterogeneous data, since in several cases a single statistical distribution is not enough to explain the distribution of the data, and a combination of distributions is necessary. That is to say, mixtures are used to model data that, in many experimental situations, can be interpreted as coming from two or more sub-populations. Obtaining these components necessarily leads to the estimation of the parameters and of the proportions in which each component contributes to the overall distribution, see [3, 4]. This concept leads to the grouping of the set of observations into groups with some common characteristics. The EM (Expectation–Maximization) algorithm, see [5, 6], is a commonly used iterative tool for maximum likelihood estimation of mixture distributions. The idea is to introduce a multinomial indicator variable that identifies the group membership of each observation in the data set. The EM algorithm was introduced by Arthur Dempster, Nan Laird and Donald Rubin in a 1977 publication of the Royal Statistical Society [1].

2 Data Analysis The data available for the study is a sample of 569 women, created by Dr. William H. Wolberg, a physician at the UW Health University Hospital in Madison, Wisconsin, USA [7]. Each woman in the sample is identified with an ID, the diagnosis of whether the tumor is benign or malignant, and the value of the radius_mean variable, among others (Figs. 1 and 2). Of the total sample, 357 cases are benign and 212 are malignant, see [8]. Some data from the database are shown in Table 1 [9, 10]. The histogram of a sample provides an estimate of the shape of the density function; it is not the density itself, but from a non-parametric point of view it can be seen as a reasonable estimate of it. Then, if the radius_mean variable is considered, taking into account that of the 569 total observations 357 indicate the absence of cancer cells while 212 show the presence of cancer cells, the frequency histogram of the variable together with the diagnosis has the form of a mixture of two normals, as shown in Fig. 3. Observing the frequency histogram of the sample, a mixture-of-normals type behavior is obtained, so the work is developed considering that the

Fig. 1 Radius mean: 7.76

Fig. 2 Radius mean: 20.6

Fig. 3 Mixture of normals

Table 1

ID | Diagnosis: Benign (B), Malignant (M) | Radius_mean
8612399 | M | 18.46
86135501 | M | 14.48
86135502 | M | 19.02
861597 | B | 12.36
861598 | B | 14.64
861648 | B | 14.62
861799 | M | 15.37
861853 | B | 13.27
862009 | B | 13.45
862028 | M | 15.06
86208 | M | 20.26
86211 | B | 12.18
862261 | B | 9.787
862485 | B | 11.6
862548 | M | 14.42
862717 | M | 13.61
862722 | B | 6.981
862965 | B | 12.18
862980 | B | 9.876
862989 | B | 10.49

Observing the frequency histogram of the sample (Fig. 3), a mixture-of-normals type behavior appears. The work is therefore developed considering that the radius_mean variable has a completely specified probability distribution, namely a mixture of two normal distributions, whose parameters will be estimated through the Expectation-Maximization (EM) statistical method, with the bootstrap technique applied to calculate the standard error of these estimators. Once the estimates are available, it will be analyzed to what extent the radius_mean variable can provide information to predict whether the diagnosis of breast cancer is benign or malignant, since the algorithm is able to discriminate the data into groups.
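As an illustration only (not the authors' code), the bimodal shape of radius_mean can be inspected from the copy of the Wisconsin Diagnostic dataset bundled with scikit-learn, where the feature is named "mean radius"; the library and plotting choices below are ours.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
radius_mean = data.data[:, list(data.feature_names).index("mean radius")]
# In this copy of the data, target 1 = benign (357 cases) and 0 = malignant (212 cases).
print("benign:", int((data.target == 1).sum()), "malignant:", int((data.target == 0).sum()))

plt.hist(radius_mean, bins=30, edgecolor="white")
plt.xlabel("radius_mean")
plt.ylabel("frequency")
plt.title("Histogram of radius_mean: the shape suggests a mixture of two normals")
plt.show()
```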

2.1 EM Algorithm

For the development of the EM algorithm, a parametric formulation of the model is provided. Hereafter, $Y = (Y_1, Y_2, \ldots, Y_n)$ denotes a random sample of size $n$, where $Y_i$ is a $p$-dimensional random vector with density function $f(y_i)$, $y_i \in \mathbb{R}^p$. Thus $y = (y_1, y_2, \ldots, y_n)$ represents an observed sample of $Y$.

Definition 1. If the density function of a random variable $Y_i$ is of the form
$$ f(y_i \mid \psi) = \sum_{k=1}^{g} \pi_k f_k(y_i \mid \theta_k), \qquad y_i \in \mathbb{R}^p, $$
it is said to possess a finite mixture distribution with $g$ components $f_k(y_i \mid \theta_k)$, $k = 1, 2, \ldots, g$, and parameter $\psi = (\pi_1, \ldots, \pi_g, \theta_1, \ldots, \theta_g)$. For the mixture to be a density function, the weights $\pi_1, \ldots, \pi_g$ must fulfill the conditions
$$ 0 \le \pi_k \le 1, \quad k = 1, \ldots, g, \qquad \sum_{k=1}^{g} \pi_k = 1. $$

Note that, under the previous condition, one of the weights is redundant (it can be expressed in terms of the others).

Definition 2. Let $y = (y_1, y_2, \ldots, y_n)$ be independent observations of a random variable whose density function $f(y \mid \psi)$ is a mixture; then the function
$$ L(\psi \mid y) = \prod_{i=1}^{n} f(y_i \mid \psi) = \prod_{i=1}^{n} \sum_{k=1}^{g} \pi_k f_k(y_i \mid \theta_k) $$
is called the likelihood function of the mixture.


Taking the natural logarithm of $L(\psi \mid y)$, the log-likelihood function is obtained:
$$ l(\psi \mid y) = \log L(\psi \mid y) = \log \prod_{i=1}^{n} \sum_{k=1}^{g} \pi_k f_k(y_i \mid \theta_k) = \sum_{i=1}^{n} \log\left( \sum_{k=1}^{g} \pi_k f_k(y_i \mid \theta_k) \right). $$

To calculate the maximum likelihood estimator $\hat{\psi}$ it is common to work with the logarithm of the likelihood function, since under certain regularity conditions the function and its logarithm attain their maximum at the same point. Therefore, the likelihood equation must be solved:
$$ \frac{\partial}{\partial \psi} \sum_{i=1}^{n} \log\left( \sum_{k=1}^{g} \pi_k f_k(y_i \mid \theta_k) \right) = 0. $$
Due to the presence of the logarithm of a sum, solving this equation directly is difficult, so another procedure is required that allows the maximization of the log-likelihood function. This procedure was introduced by Dempster et al. [1] as a mechanism to manage missing information; it consists of defining a new expectation that eases the maximization, in such a way that the parameters that maximize this expectation in each iteration converge to the parameters that maximize the likelihood function.

Let $y = (y_1, y_2, \ldots, y_n)$ be an observed sample of size $n$, called the vector of incomplete data, corresponding to a realization of $Y$ with density function $f(y \mid \psi)$, where $\psi$ is the parameter vector to estimate. Now consider the latent variable $Z = (Z_1, Z_2, \ldots, Z_n)$, which represents the unobserved data and whose realization is $z = (z_1, z_2, \ldots, z_n)$. Then the random vector $X = (Y, Z)$ is called the vector of complete data, and its realization is $x_1 = (y_1, z_1), x_2 = (y_2, z_2), \ldots, x_n = (y_n, z_n)$, in such a way that each realization $y_i$ always corresponds to a $z_i$. In this context, it can be assumed that $Z_i$ is a binary $g$-dimensional indicator variable whose $j$th element $Z_{ij}$ indicates whether the observation $y_i$ belongs to the $j$th component of the mixture, where $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, g$. Thus, $Z_{ij}$ can be defined as
$$ Z_{ij} = z_{ij} = \begin{cases} 1 & \text{if } y_i \text{ comes from the } j\text{th component}, \\ 0 & \text{otherwise}. \end{cases} $$

Given the categorical nature of the variable $Z_i$, which indicates the membership of the sample points to one component or another of the mixture, it can be interpreted that


the weights $\pi_k$ are the prior probabilities that observation $y_i$ belongs to population $k$, which suggests that $Z_i$ follows a multinomial distribution of a single trial over $g$ categories with probabilities $\pi = (\pi_1, \pi_2, \ldots, \pi_g)$, that is to say,
$$ P(Z_i = z_i) = \binom{1}{z_{i1}, z_{i2}, \ldots, z_{ig}} \pi_1^{z_{i1}} \pi_2^{z_{i2}} \cdots \pi_g^{z_{ig}} = \prod_{k=1}^{g} \pi_k^{z_{ik}}, $$
where
$$ \sum_{k=1}^{g} z_{ik} = 1, \qquad \sum_{k=1}^{g} \sum_{i=1}^{n} z_{ik} = n. $$
Then, $P(z_{ik} = 1) = \pi_k$, $k = 1, 2, \ldots, g$.

2.2 Gaussian Mixture Model

Information about the missing data can be obtained from the observations in the sample. For this, $h(z \mid y, \psi)$ is defined as the density of the missing data conditioned on the observed sample. Thus, by Bayes' theorem,
$$ h(z \mid y, \psi) = P(z_{ik} = 1 \mid Y_i = y_i) = \frac{P(z_{ik} = 1)\, P(Y_i = y_i \mid z_{ik} = 1)}{P(Y_i = y_i)} = \frac{\pi_k f_k(y_i \mid \theta_k)}{\sum_{j=1}^{g} \pi_j f_j(y_i \mid \theta_j)}. $$

Taking the above into account, a new expectation in $\psi$ is defined, related to the likelihood function, that uses the conditional distribution $h(z \mid y, \psi)$. The EM procedure is iterative: it begins with an initial value of the parameters $\psi^{(0)}$, and in each iteration the parameters are updated. As is known, the repeated maximization of this new function converges to the maximum of the likelihood function itself. This new function is
$$ E(\psi \mid \psi^{(0)}) = E\left[ l(\psi \mid y, z) \mid Y = y, \psi^{(0)} \right] = E\left[ \sum_{i=1}^{n} \sum_{k=1}^{g} z_{ik} \log\left( \pi_k f_k(y_i \mid \theta_k) \right) \,\middle|\, Y = y, \psi^{(0)} \right] $$


$$ = \sum_{i=1}^{n} \sum_{k=1}^{g} E\left[ z_{ik} \mid Y_i = y_i, \psi^{(0)} \right] \left[ \log \pi_k + \log f_k(y_i \mid \theta_k) \right]. $$

However,
$$ E\left[ z_{ik} \mid Y_i = y_i, \psi^{(0)} \right] = P(z_{ik} = 1 \mid Y_i = y_i, \psi^{(0)}) = \left. \frac{f_k(Y_i = y_i \mid z_{ik} = 1)\, P(z_{ik} = 1)}{P(Y_i = y_i)} \right|_{\psi^{(0)}} = \left. \frac{\pi_k f_k(y_i \mid \theta_k)}{\sum_{j=1}^{g} \pi_j f_j(y_i \mid \theta_j)} \right|_{\psi^{(0)}} = \hat{\tau}_{ik}^{(0)}. $$

Therefore,
$$ E(\psi \mid \psi^{(0)}) = \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} \left[ \log \pi_k + \log f_k(y_i \mid \theta_k) \right] = \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} \log \pi_k + \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} \log f_k(y_i \mid \theta_k). $$

After the previous calculation, the function $E$ is maximized with respect to $\psi$. This maximization is carried out in two parts, given that $\pi_k$ appears only in the first summand and $\theta_k$ only in the second. For the first summand, Lagrange multipliers are used:
$$ \frac{\partial}{\partial \pi_k} \left[ \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} \log \pi_k + \lambda \left( \sum_{k=1}^{g} \pi_k - 1 \right) \right] = 0 $$
$$ \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \frac{1}{\pi_k} + \lambda = 0 \quad \Longrightarrow \quad \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} = -\lambda \pi_k. $$
Summing over $k$ on both sides of the last equality,
$$ n = \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} = \sum_{k=1}^{g} (-\lambda \pi_k) = -\lambda, $$
which implies that
$$ \hat{\pi}_k^{(1)} = \frac{1}{n} \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)}. $$

For the maximization of the second summand with respect to $\theta_k$, the density functions $f_k(y_i \mid \theta_k)$ are in this case Gaussian densities, so
$$ \log f_k(y_i \mid \theta_k) = \log \phi(y_i \mid \mu_k, \sigma_k^2) = -\frac{1}{2}\log\left(2\pi \sigma_k^2\right) - \frac{1}{2}\left( \frac{y_i - \mu_k}{\sigma_k} \right)^2 = -\frac{1}{2}\log(2\pi) - \log \sigma_k - \frac{1}{2}\left( \frac{y_i - \mu_k}{\sigma_k} \right)^2. $$

Differentiating with respect to $\mu_k$,
$$ \frac{\partial}{\partial \mu_k} \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} \left[ -\frac{1}{2}\log(2\pi) - \log \sigma_k - \frac{1}{2}\left( \frac{y_i - \mu_k}{\sigma_k} \right)^2 \right] = 0 $$
$$ \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \frac{y_i - \mu_k}{\sigma_k^2} = 0 \quad \Longrightarrow \quad \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} y_i = \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \mu_k $$
$$ \hat{\mu}_k^{(1)} = \frac{\sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} y_i}{\sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)}}. $$

To obtain the estimator of $\sigma_k^2$,
$$ \frac{\partial}{\partial \sigma_k^2} \sum_{i=1}^{n} \sum_{k=1}^{g} \hat{\tau}_{ik}^{(0)} \left[ -\frac{1}{2}\log(2\pi) - \frac{\log \sigma_k^2}{2} - \frac{1}{2}\left( \frac{y_i - \mu_k}{\sigma_k} \right)^2 \right] = 0 $$
$$ -\sum_{i=1}^{n} \frac{\hat{\tau}_{ik}^{(0)}}{2\sigma_k^2} + \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \frac{(y_i - \mu_k)^2}{2\left(\sigma_k^2\right)^2} = 0 \quad \Longrightarrow \quad \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \frac{(y_i - \mu_k)^2}{\sigma_k^2} = \sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} $$
$$ \sigma_k^2 = \frac{\sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \left( y_i - \mu_k \right)^2}{\sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)}}. $$


Using the estimate of $\mu_k$, an estimate of $\sigma_k$ is obtained:
$$ \hat{\sigma}_k^{(1)} = \left( \frac{\sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)} \left( y_i - \hat{\mu}_k^{(1)} \right)^2}{\sum_{i=1}^{n} \hat{\tau}_{ik}^{(0)}} \right)^{1/2}. $$

2.3 Initial Values

The initial values from which the algorithm starts to iterate, and which are used in several situations, are taken by dividing the sample into $g$ partitions and computing the mean of the observations in each one. These values are denoted by $\hat{\mu}_1^{(0)}, \hat{\mu}_2^{(0)}, \ldots, \hat{\mu}_g^{(0)}$. Regarding the weights, all are taken equal: $\pi_1^{(0)} = \pi_2^{(0)} = \cdots = \pi_g^{(0)} = 1/g$. Other ways to choose the initial values exist; see, for example, [5].

2.4 Stop Criterion

To stop the iterations, the relative difference
$$ \frac{\left| l(\psi^{(t+1)} \mid y) - l(\psi^{(t)} \mid y) \right|}{\left| l(\psi^{(t)} \mid y) \right|} $$
is considered, which is preferred because it is dimensionless. The process stops when the value of this difference is less than $10^{-6}$.
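For reference, a minimal sketch of these EM updates for a univariate two-component Gaussian mixture is given below; function and variable names are ours, and the commented example anticipates the initial values of Sect. 3.

```python
import numpy as np

def normal_pdf(y, mean, sd):
    return np.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def em_two_normals(y, mu, sigma, pi, tol=1e-6, max_iter=500):
    """EM for a univariate mixture of two normals, following the updates above."""
    y = np.asarray(y, dtype=float)
    mu, sigma, pi = (np.asarray(v, dtype=float) for v in (mu, sigma, pi))
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: responsibilities tau_ik = pi_k f_k(y_i) / sum_j pi_j f_j(y_i)
        dens = np.column_stack([pi[k] * normal_pdf(y, mu[k], sigma[k]) for k in range(2)])
        tau = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of pi_k, mu_k and sigma_k derived from the E function
        nk = tau.sum(axis=0)
        pi = nk / len(y)
        mu = (tau * y[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((tau * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
        # Stop criterion: relative change of the log-likelihood below tol
        ll = np.log(dens.sum(axis=1)).sum()
        if np.isfinite(prev_ll) and abs(ll - prev_ll) / abs(prev_ll) < tol:
            break
        prev_ll = ll
    return pi, mu, sigma, tau

# Example with the initial values of Sect. 3 (radius_mean as the data vector):
# pi, mu, sigma, tau = em_two_normals(radius_mean, mu=[11.46, 16.80],
#                                     sigma=[1.33, 2.97], pi=[0.5, 0.5])
```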

3 Implementation of the Algorithm

To test whether the algorithm is able to discriminate, by means of the radius_mean variable, whether the tumor is malignant or benign, a mixture of normals with two components must be considered. For this, the database was first sorted from lowest to highest and divided into two equal groups: since the number of records is $n = 569$, one group contains the first $n_1 = 285$ records and the second the $n_2 = 284$ remaining records. The initial values, determined from the values of the radius_mean variable, are
$$ \pi_1^{(0)} = \pi_2^{(0)} = 1/2, $$

$$ \hat{\mu}_1^{(0)} = \frac{1}{n_1} \sum_{s=1}^{n_1} y_s = 11.46, \qquad \hat{\sigma}_1^{(0)} = \left( \frac{1}{n_1 - 1} \sum_{s=1}^{n_1} \left( y_s - \hat{\mu}_1^{(0)} \right)^2 \right)^{1/2} = 1.33, $$
$$ \hat{\mu}_2^{(0)} = \frac{1}{n_2} \sum_{s=n_1+1}^{n} y_s = 16.80, \qquad \hat{\sigma}_2^{(0)} = \left( \frac{1}{n_2 - 1} \sum_{s=n_1+1}^{n} \left( y_s - \hat{\mu}_2^{(0)} \right)^2 \right)^{1/2} = 2.97. $$

The algorithm then iterates and stops exactly at the iteration where
$$ \frac{\left| l(\psi^{(t+1)} \mid y) - l(\psi^{(t)} \mid y) \right|}{\left| l(\psi^{(t)} \mid y) \right|} < 10^{-6}, $$
obtaining the values of the estimators shown in Table 2 [9, 10].

Table 2 Estimators

θk    Value of θ̂k
π1    0.68
π2    0.32
μ1    12.40
μ2    17.90
σ1    1.86
σ2    3.25

With these values, the form of the mixture of normals is obtained, and with the values of the last iteration the components can be calculated; some records are shown in Table 3 [9, 10]. It can be seen that differences exist: for example, in records 2, 7, 10, 15 and 16 the mixture of normals classifies the observation into component one (benign), whereas the diagnosis classifies these records as malignant. In Fig. 4, the form of the mixture of normals and its two components are shown: component 1 (benign) and component 2 (malignant).

3.1 Calculation of the Standard Error through the Bootstrap Method

The traditional approach to statistical inference is based on idealized models and assumptions. Often, the expressions for precision measures, such as the standard error, are based on asymptotic theory and are not available for small samples, see [8]. A modern alternative to the traditional approach is the bootstrap method, introduced by Efron (1979), see [11, 12]. The bootstrap is a computationally intensive resampling method that is widely applicable and allows the treatment of more realistic models.


Table 3 Mixture-of-normals component assigned to each record (component 1: benign, component 2: malignant)

ID         Diagnosis   radius_mean   Component
8612399    M           18.46         2
86135501   M           14.48         1
86135502   M           19.02         2
861597     B           12.36         1
861598     B           14.64         1
861648     B           14.62         1
861799     M           15.37         1
861853     B           13.27         1
862009     B           13.45         1
862028     M           15.06         1
86208      M           20.26         2
86211      B           12.18         1
862261     B           9.787         1
862485     B           11.6          1
862548     M           14.42         1
862717     M           13.61         1
862722     B           6.981         1
862965     B           12.18         1
862980     B           9.876         1
862989     B           10.49         1

Fig. 4 Form of mixture of normals and its two components


The bootstrap algorithm to estimate the standard error of an estimator $\hat{\theta} = s(X)$ of a parameter $\theta$ can be carried out through the following steps:

1. From the initial sample $X_1, X_2, \ldots, X_n$, generate $B$ independent bootstrap samples,
$$ X^{*(1)} \sim X_1^{*(1)}, \ldots, X_n^{*(1)}; \quad X^{*(2)} \sim X_1^{*(2)}, \ldots, X_n^{*(2)}; \quad \ldots; \quad X^{*(B)} \sim X_1^{*(B)}, \ldots, X_n^{*(B)}. $$
2. Evaluate
$$ \hat{\theta}^{*(b)} = s\left( X^{*(b)} \right), \quad b = 1, \ldots, B. $$
3. Estimate the standard error $se(\hat{\theta})$ by the standard deviation of the $B$ replications,
$$ \hat{se}_{boot}\left( \hat{\theta} \right) = \left( \frac{1}{B - 1} \sum_{b=1}^{B} \left( \hat{\theta}^{*(b)} - \hat{\theta}^{*(\cdot)} \right)^2 \right)^{1/2}, \qquad \text{where} \quad \hat{\theta}^{*(\cdot)} = \frac{1}{B} \sum_{b=1}^{B} \hat{\theta}^{*(b)}. $$
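A minimal sketch of this resampling scheme is shown below; the statistic passed in (a re-fit of the EM estimates on each resample) and the fixed random seed are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def bootstrap_se(x, statistic, B=1000, seed=0):
    """Nonparametric bootstrap estimate of the standard error of statistic(x)."""
    x = np.asarray(x)
    rng = np.random.default_rng(seed)
    n = len(x)
    # Steps 1-2: draw B resamples with replacement and evaluate the statistic on each
    reps = np.array([statistic(x[rng.integers(0, n, size=n)]) for _ in range(B)])
    # Step 3: standard deviation of the B replications (with the 1/(B-1) factor)
    return reps.std(axis=0, ddof=1)

# Illustration (assumed): bootstrap the EM estimates (pi, mu, sigma) of the mixture
# by re-fitting the two-component model on every resample of radius_mean.
# se = bootstrap_se(radius_mean,
#                   lambda xb: np.concatenate(em_two_normals(xb, [11.46, 16.80],
#                                                            [1.33, 2.97], [0.5, 0.5])[:3]))
```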

Using the bootstrap algorithm with 1000 resamples, the standard errors of each one of the parameters are obtained, as shown in Table 4 [9, 10].

Table 4 Standard errors

θk    Value of θ̂k    se_boot(θ̂)
π1    0.68           0.05
π2    0.32           0.05
μ1    12.40          0.15
μ2    17.90          0.68
σ1    1.86           0.10
σ2    3.25           0.45


4 Conclusions

Of the total of 569 observations, 357 (62.7%) indicate the absence of cancer cells, while 212 (37.3%) show the presence of cancer cells. With the mixture-of-normals model, it was found that, of the total observations, 420 (73.8%) indicate an absence of cancer cells, while 149 (26.2%) show the presence of cancer cells [13]. Comparing the groups found through the mixture of normals with the diagnoses, for benign tumors there is a coincidence in 349 cases, which represent 97.8% of the benign cases, a difference of 8 cases; for the malignant cases there is a coincidence in 141 cases, which represent 66.5% of the malignant cases, a difference of 71 cases. It can be concluded that, for benign tumors, the model gives a prediction with good acceptance, whereas for malignant tumors the prediction is not very good. In this last case, it should be taken into account that the study of Dr. Wolberg [14] indicates that "the percentage is unusually large; the data set does not represent in this case a typical distribution of medical analysis. In general, there will be a considerable amount of cases that represent negative tumors versus a small amount of cases that represent positive (malignant) tumors." This statement can explain the difference found in the malignant cases. This kind of method can help and complement histopathological studies of benign breast tumors; being mathematically precise, it can provide additional certainty to support pathologists in the histological diagnosis.

References

1. Dempster A, Laird N, Rubin D (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B (Methodological) 39(1):1–38
2. Denais C, Lammerding J (2014) Nuclear mechanics in cancer. Adv Exp Med Biol 773:435–470. https://doi.org/10.1007/978-1-4899-8032-8_20
3. Mengerse KL, Robert CP, Titterington DM (2011) Mixtures: estimation and applications. Wiley series in probability and mathematical statistics. Wiley, West Sussex
4. Schlattmann P (2009) Medical applications of finite mixture models. Springer, Heidelberg
5. McCulloch CE (1998) Review of "EM algorithm and extensions." J Am Stat Assoc 93:403–404
6. McLachlan G, Peel D (2000) Finite mixture models. Wiley series in probability and statistics. Wiley, New York
7. Levine R, Casella G (2001) Implementation of the Monte Carlo EM algorithm. J Comput Graph Stat 10:422–439
8. Jamshidian M, Jennrich RI (2000) Standard errors for EM estimation. J R Stat Soc Ser B (Statistical Methodology) 62(2):257–270
9. Wolberg WH, Street WN, Mangasarian OL (2019) Breast Cancer Wisconsin (Diagnostic) data set. In: Dua D, Graff C (eds) UCI machine learning repository. University of California, School of Information and Computer Science, Irvine, CA. http://archive.ics.uci.edu/ml


10. Martínez Mancilla RI (2021) Aplicación del método bootstrap para la obtención del error estándar, en una mezcla de normales obtenida mediante maximización de la esperanza (EM). Benemérita Universidad Autónoma de Puebla. https://repositorioinstitucional.buap.mx/bitstream/handle/20.500.12371/12661/20210319123653-3609-TL.pdf?sequence=2
11. Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7:1–26
12. Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Chapman & Hall, New York
13. Finch SJ, Mendel NR, Thode HC Jr (1989) Probabilistic measures of adequacy of a numerical search for a global maximum. J Am Stat Assoc 84(408):1020–1023. https://doi.org/10.1080/01621459.1989.10478867
14. Hanahan D, Weinberg RA (2011) Hallmarks of cancer: the next generation. Cell 144(5):646–674. https://doi.org/10.1016/j.cell.2011.02.013
15. https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)

A Smart Automation System for Controlling Environmental Parameters of Poultry Farms to Increase Poultry Production Md. Kaimujjaman, Md. Mahabub Hossain, and Mst. Afroza Khatun

Abstract Agriculture and poultry must be regarded as the backbone of economic growth in any developing country such as Bangladesh. Furthermore, agricultural progress and economic prosperity are inextricably linked. Technology advancements and new technical developments have ushered in a new era of real-time animal health monitoring. This study focuses on a sensor-based solution for minimal-cost, capital-saving, value-oriented, and productive chicken farm management in order to increase the value of the broiler farm economy index (BFEI). The goal of this research was to see whether an intelligent system based on an embedded framework could be utilized to monitor chicken farms and adjust environmental conditions using smart devices and technology. This study also investigated how different temperatures (ranging from 25 to 33 °C) affected the broiler performance efficiency factor (BPEF), livability, and feed conversion ratio (FCR). It was found that the group reared at higher temperatures had a greater broiler performance efficiency factor, higher livability, and a lower feed conversion ratio. Keywords Intelligent farm · Embedded framework · Broiler farm economy index (BFEI)

Md. Kaimujjaman · Md. M. Hossain (B) Department of Electronics and Communication Engineering, Hajee Mohammad Danesh Science and Technology University, Dinajpur 5200, Bangladesh e-mail: [email protected] Mst. A. Khatun Department of Dairy and Poultry Science, Hajee Mohammad Danesh Science and Technology University, Dinajpur 5200, Bangladesh © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_6


1 Introduction

Chickens are the most popular poultry species farmed in large quantities. To provide food, including meat and eggs, over 50 billion chickens are produced every year. Poultry contributes greatly to poverty reduction and improved food security in developing nations as a source of high-quality protein. For many poor and middle-class families, chicken products constitute their sole affordable source of animal nutrition, and for sick and undernourished children under five years old, eggs provide a convenient source of high-quality protein [1]. Chickens are affected by high ambient temperatures in poultry farming, especially when there is high relative humidity and a sluggish air speed above the birds [2, 3]. Severe heat stress reduces production efficiency and raises flock death rates. It reduces growth rates, egg output, meat quality, and egg quality, for example smaller eggs and thinner shells, all of which can lower commercial value, resulting in considerable yield and economic losses [4]. Climate anomalies, such as unexpected temperature and humidity increases, have become more common in recent years, harming not only the proper growth of chickens but also the quality of their meat and the size of their eggs [5]. Therefore, the development of alternative techniques is attracting attention as a remedy for poultry owners' concern over higher mortality rates under heat-stressed conditions in the summer season. Incorporating intelligent sensor technology into poultry sheds for continuous monitoring of temperature and relative humidity, and turning on the automated ventilation system at threshold conditions, would be a promising solution for reducing heat stress in poultry.

In this project study, it is proposed that an intelligent system can be designed and implemented to track and manipulate environmental factors such as temperature and humidity. The sensors carry out the monitoring, and the data is kept in a digital storage unit. The automated system compares these data against threshold values suitable for broiler chicks' growing conditions. The system uses relay switches that autonomously alter the environmental factors if they are unfavorable for the poultry.

The paper is organized as follows: Sect. 1 discusses the general ideas and importance of the study; Sect. 2 describes relevant works and the best environmental parameters; the proposed design principles, the block diagram of the working process, and the system working flowcharts are discussed in Sect. 3; finally, Sects. 4 and 5 present the outcomes and concluding comments on the performance of the system.


2 Related Work

2.1 Environmental States in Poultry Farms

Poultry growers in hot and humid countries such as Bangladesh face the difficulty of high ambient temperatures during summer, which has a major impact on the production performance of commercial poultry. If a balance between body heat loss and body heat production is not achieved, birds become "heat stressed"; this affects all types of poultry at all ages. As a result, an intelligent system can offer an environment that is appropriately regulated and controlled to avoid limiting bird performance. The most fundamental technique of regulating the poultry environment, according to Mutai et al. [6], is to maintain optimal temperatures in these facilities by altering circulation and warming rates. A relative humidity (RH) of more than 70% is unfavorable and should be avoided by using ventilation in buildings [7]. RH levels below 50% are also undesirable because more dust and airborne microorganisms are produced, although this is not a regular occurrence. High RH paired with high temperatures might cause discomfort in birds during the summer months [8]. Birds eat more feed to maintain a normal body temperature when exposed to cooler temperatures, and feed used for heating is not turned into meat [9]. When temperatures rise too high, energy is wasted as birds struggle to cool down; in these circumstances, ascites (a metabolic illness that causes performance decline) and mortality in broilers become more common. According to a recent study, when two separate temperature ranges (26 and 32 °C) were applied to different groups of broilers during growth, the group exposed to the higher temperature performed better and used less feed [10]. The conventional systems for controlling chicken farms have a number of drawbacks, including low energy efficiency and high power usage. In today's poultry farming, smart control technologies such as ZigBee, Arduino, Raspberry Pi, and the integration of wireless sensors, General Packet Radio Service (GPRS), and Multiple Input Multiple Output (MIMO) systems have been adopted [11–15].

2.2 Best Selection of Environmental Parameters for Broilers

After surveying different reports on broiler chicken farming, the threshold conditions for the poultry farm environmental parameters were selected as shown in Table 1 [16]. During the brooding period of broiler chickens (age < 7 days), a temperature of 30–33 °C is required for the best production. Due to extreme temperatures, broilers older than 21 days may experience heat stress. As a result, the temperature of the poultry farm is decreased by 2–3 °C every week as the chickens grow older. If broiler fecal matter is not changed every two weeks, the relative humidity (RH) in the poultry environment might reach 70%, which is not ideal for the best farming.

Table 1 Parameter selection

Age (days)   Temperature (°C)   Humidity (%)
0–6          30–33              50–60
7–14         28–31              50–65
15–21        26–29              60–75
>21          25–28              65–80

3 System Design and Descriptions

The microcontroller (MC) is the heart of the proposed system, as shown in Fig. 1. The MC collects data from the temperature and humidity (TH) sensors (DHT22), stores the data through a micro-SD card module, analyzes the data, and switches relays on/off according to the TH sensor values. The MC also keeps track of time through a real-time clock module (DS3231) and displays the system actions on a 16×2 liquid crystal display (LCD) to provide interaction between the human and machine worlds.

3.1 IDE and Microcontroller

IDE stands for Integrated Development Environment, a software environment that provides a programming facility for microcontroller boards using the C or C++ programming languages. The majority of Internet of Things-related works rely on

Fig. 1 Block diagram of proposed system


microcontroller technology [17, 18]. In this project, we used an Arduino Mega development board (country of origin: China) based on the ATmega2560. The features of this development board are given below:
• ATmega2560: a high-performance, low-power AVR 8-bit microcontroller.
• EEPROM: the ATmega2560 features 4 KB (4096 bytes) of EEPROM, a memory which is not erased when powered off.
• Digital and analog pins: the Mega 2560 has 54 digital pins, of which 15 support PWM (pulse width modulation), and 16 analog input pins, the most of any Arduino board.
• Serial ports: connection to several devices through the four hardware serial ports (UARTs).

3.2 Sensors and Peripherals

The TH sensor is a digital-output temperature and relative humidity sensor. It measures temperature with a thermistor and humidity with a capacitive humidity sensor exposed to the surrounding air. A total of seven TH sensors are installed in the system. There are several ways to store data, but SD cards and micro-SD cards are among the most popular and easiest; the SD card module in the data storage unit allows the system to read from and write to the card. The real-time clock keeps track of the current time and date; it is integrated into every electronic device that needs to keep track of time and contains a 3.3 V lithium battery. On first use, the module requires the time and date to be set; once set, if the module loses power, it keeps track of time with the help of the onboard battery. A real-time clock (RTC) module can thus provide the exact time and date to the microcontroller.

3.3 Hardware and Software Setup The microcontroller is used to interface all peripheral modules, such as sensor units, data storage units, and control units. Temperature and relative humidity were measured using TH sensors in the sensing unit. One TH sensor, 2–3 heat bulbs, and one exhaust fan were precisely placed in each broiler cell. The data storage device gathers information from sensors and saves it on a micro-SD card through an SD card module. Relays, analog switches, heat bulbs, cooling fans, exhaust fans, and, most significantly, the microcontroller make up the control unit. This completes the hardware configuration.


The software setup is achieved by uploading the software program to the MC's CPU; the program's flowchart is depicted in Fig. 2. Following the upload, the microcontroller collects the values of the TH sensors at regular intervals, compares them with the environmental factors required based on the age of the broilers, and takes action via relay switches to ensure a healthy atmosphere for the production farm.

3.4 Data Acquisition

Two complete broiler sheds were constructed, one in the summer and the other in the winter. The size of each shed was 4 × 4 sq. ft. For commercial broiler production, each shed requires 28–35 days. The sensors were enclosed in cells and situated at chicken level (approximately 0.4–0.6 m above ground level). TH sensors, an analog temperature meter, and a hygrometer were used to measure temperature and relative humidity. For all TH sensors, data was collected at 5-min intervals. The average body weight and the amount of feed consumed each week are also included in the data collection. Data was downloaded from the micro-SD card to the Origin program through a USB connection for analysis.

The flowchart shown in Fig. 2 represents the main functionality of the program in the microcontroller. The system reads the environmental temperature and relative humidity from the TH sensors and stores them on the micro-SD card. The system also checks the threshold values of the different parameters and takes action by switching heat bulbs, cooling fans, and exhaust fans ON/OFF at regular intervals.
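The firmware itself is an Arduino (C/C++) sketch; the Python sketch below only illustrates the flowchart logic using the thresholds of Table 1, and read_dht22, set_relay, log_to_sd and current_age_in_days are hypothetical placeholder functions, not part of the authors' implementation.

```python
import time

# Age-dependent thresholds from Table 1: (min_temp, max_temp, min_rh, max_rh)
THRESHOLDS = [
    (range(0, 7),   (30, 33, 50, 60)),
    (range(7, 15),  (28, 31, 50, 65)),
    (range(15, 22), (26, 29, 60, 75)),
]
DEFAULT = (25, 28, 65, 80)            # broilers older than 21 days

def thresholds_for(age_days):
    for ages, limits in THRESHOLDS:
        if age_days in ages:
            return limits
    return DEFAULT

def control_step(age_days, cell_id):
    t_min, t_max, rh_min, rh_max = thresholds_for(age_days)
    temperature, humidity = read_dht22(cell_id)     # hypothetical sensor read
    log_to_sd(cell_id, temperature, humidity)       # hypothetical SD-card logger
    set_relay("heat_bulb", cell_id, on=temperature < t_min)
    set_relay("cooling_fan", cell_id, on=temperature > t_max)
    set_relay("exhaust_fan", cell_id, on=humidity > rh_max)

# Main loop: sample all six smart-system cells every 5 minutes.
# while True:
#     for cell in range(1, 7):
#         control_step(current_age_in_days(), cell)
#     time.sleep(300)
```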

4 Result and Discussion

4.1 Proposed Smart System Performance Versus Conventional System

The conventional system has one TH sensor (DHT_7), while the smart system has six TH sensors (DHT_1 to DHT_6). The microcontroller logs the temperature and humidity at a constant interval of 5 min, and the data is stored on an SD card. The data is then downloaded into data analysis software to create temperature and humidity graphs, which are used to analyze the performance of the smart system. The most crucial period for chicken farming is the first week, which lasts up to 7 days for broiler chicks; this period is often referred to as the brooding phase. The majority of chicken deaths occur at this stage as a result of unsuitable conditions. It can be observed from Fig. 3 that the controlled shed temperature under the designed smart system lies between 30 and 33.5 °C, whereas the temperature of the conventionally controlled shed fluctuates between 28.6 and 34.6 °C. From Fig. 4, the relative humidity in the conventional system fluctuates between 62 and 68%


Fig. 2 Smart system working flowchart

and the relative humidity in the smart-controlled cells (cells 1, 2 and 6) lies between 69 and 83%, which is permissible, although a higher relative humidity is observed in cells 3, 4 and 5 of the smart system. The rise of the RH above 90% in cells 4, 5 and 6 is not an indication of humidity but rather of a high level of the air pollutant NH3 [19, 20]. In the 2nd week, when the broiler chickens are 8–14 days old, the environmental temperature requires modification; for that reason, the upper and lower temperature thresholds are decreased by 2 °C. From Fig. 5, it is seen that the smart system temperature lies between 28 and 31 °C; on this day the environmental temperature reaches its maximum, near 33 °C, at midday. From Fig. 6, RH is between 70 and 80% in the conventional subsystem and 80 and 90% in the smart subsystem. In week 3, up to the age of 21 days, the upper and lower temperature thresholds are again decreased by 2 °C. From Fig. 7, it is seen that the temperature lies between 27.4 and 28.6 °C on day 9, whereas the environmental temperature fluctuates between 29.8 and 34 °C. From Fig. 8, RH is between 63 and 73% in the environment and 80 and 90% in the smart subsystem. It has been observed over all the observation weeks that the temperature in the cells could be controlled by the


Fig. 3 Temperatures in shed’s cells in a day in the 1st Week

Fig. 4 Humidity in shed’s cells in a day in the 1st Week

proposed smart system, whereas the relative humidity could be controlled only to a small degree. A significant improvement in RH control can be achieved by regularly changing the fecal matter in the broiler sheds every two weeks. Thus, we can conclude that the smart system does its job properly by maintaining the optimum environmental parameters for chicken farming.


Fig. 5 Temperatures in shed’s cells in a day in the 2nd Week

Fig. 6 Humidity in shed’s cells in a day in the 2nd Week

4.2 Measurement of the Performance Efficiency of Broilers in Both Systems

Livability
Livability = (Number of broilers sold × 100)/Number of broilers at the beginning   (1)

The average life expectancy of a modern broiler is nearly seven weeks. Chicken flocks have a mortality rate of nearly 5–6% on average, which can rise to 20% if broilers are not properly maintained. A higher mortality rate is a major hindrance to commercial broiler chicken expansion. During broiler production in the smart and conventional systems, it was observed that the livability rate is 93.3% in the smart system and 83.3%


Fig. 7 Temperatures in shed’s cells in a day in the 3rd Week

Fig. 8 Humidity in shed’s cells in a day in the 3rd Week

in the conventional system. Figure 9 indicates that the smart system has a 10% higher livability rate than the conventional system; in other words, the smart system has a lower mortality rate than the conventional system.


Fig. 9 Livability versus mortality pie diagram for broilers in smart and conventional systems

Feed Efficiency or Feed Conversion Ratio
FCR = Total quantity of feed consumed per broiler (kg)/Mean body weight gain (kg)   (2)

For poultry farming, the conventional FCR ranges from 1.7 to 2.0. To investigate how the smart system (SS) affects the FCR, five different types of feed were given to the broilers (Nourish, Sun, A1, Aman and ACI). These feeds were also given to the broilers grown in the conventional system (NS). From Fig. 10, the FCR values in the smart system range from 1.22 to 1.52 for the different feeds, whereas the FCR in the conventional system ranges from 1.29 to 1.92, which indicates a better FCR in the smart system. This can reduce the overall cost of poultry production, since feed accounts for around 70% of the overall cost in poultry farming. Thus, the smart system becomes more environmentally friendly and sustainable because of the outstanding feed efficiency capabilities of today's commercial broiler chicken strains.

Fig. 10 Feed conversion ratio bar diagram for broilers


Fig. 11 Broiler performance efficiency factor bar diagram in smart and conventional systems

Broiler Performance Efficiency Factor
A practical and comprehensive indicator for evaluating the performance of broilers is the broiler performance efficiency factor (BPEF). It has a direct relationship with the FCR: the lower the FCR value, the higher the BPEF. The BPEF value should be at least 100, and a higher value indicates better performance.
BPEF = (Live weight (kg) × 100)/FCR   (3)

From Fig. 11, it is evident that broilers maintained in the smart system (SS) have a performance factor of over 130 in all cells, which is above the minimum BPEF value. We observe that cells T4R2 and T5R2 have BPEF values of less than 100; these cells were enclosed in the conventional system (NS). Therefore, broilers maintained in the conventional system failed to achieve the desired broiler performance efficiency factor.

Broiler Farm Economy Index (BFEI)
BFEI = (Average live weight (kg) × Livability %)/(FCR × Growing period (days))   (4)

A BFEI score of 2.0 or higher suggests superior farm management and optimal broiler performance, whereas a reading of less than 1.3 denotes inadequate performance. From Table 2, the mean broiler farm economy index for the smart system is 3.89 and the mean BFEI value for the conventional system is 2.52, which is a clear indication of better broiler farm management in the smart automated system.
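A small Python helper, shown only as a sketch of Eqs. (1)–(4), can make the four indicators easy to compute; the example values in the final comment are illustrative and are not measurements from this study.

```python
def livability(broilers_sold, broilers_at_start):
    """Eq. (1): livability in percent."""
    return 100.0 * broilers_sold / broilers_at_start

def fcr(feed_consumed_kg, mean_weight_gain_kg):
    """Eq. (2): feed conversion ratio."""
    return feed_consumed_kg / mean_weight_gain_kg

def bpef(live_weight_kg, fcr_value):
    """Eq. (3): broiler performance efficiency factor."""
    return 100.0 * live_weight_kg / fcr_value

def bfei(avg_live_weight_kg, livability_pct, fcr_value, growing_days):
    """Eq. (4): broiler farm economy index."""
    return (avg_live_weight_kg * livability_pct) / (fcr_value * growing_days)

# Purely illustrative numbers (not taken from the paper's measurements):
# bfei(avg_live_weight_kg=2.0, livability_pct=93.3, fcr_value=1.35, growing_days=32)  # about 4.3
```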


Table 2 Broiler farm economy index calculation

Smart system                         Conventional system
Shed no   BFEI    Mean BFEI          Shed no   BFEI    Mean BFEI
T1R1      4.08    3.89               T1R2      3.52    2.52
T2R1      3.52                       T2R2      2.54
T3R1      3.72                       T3R2      2.50
T4R1      4.64                       T4R2      2.33
T5R1      3.50                       T5R2      1.72

5 Conclusions

Temperature, humidity, light intensity, and ammonia (NH3) levels in poultry sheds have been found to be the most essential elements in poultry production. The goal of this project was to create a smart system for controlling the environmental elements that affect chicken or broiler farming. The system maintained minimum and maximum threshold conditions based on the age of the broiler chickens in order to offer a favorable environment during their growing period. The temperature threshold is between 25 and 33 °C, while the relative humidity threshold is between 50 and 85%. Ventilators were activated when the temperature rose above the maximum value, and heat bulbs or heaters were activated when the temperature fell below the minimum value. Similarly, exhaust fans are turned on and off according to the RH fluctuation between the maximum and minimum levels. The health of the poultry is very important for maintaining good farm production, and minimizing the total power required by the smart control system helps reduce the overall production cost of poultry farms. Poultry farming with modern technologies and embedded systems, where climate parameters are continuously monitored and controlled by the system, can provide a lower mortality rate, a lower feed conversion ratio, a higher performance efficiency factor, and a higher farm economy index than traditional poultry farming management.

References

1. Scanes CG (2007) Contribution of poultry to quality of life and economic development in the developing world. Poult Sci 86(11):2289–2290
2. Emery J (2004) Heat stress in poultry: solving the problem. Defra Publications, Department for Environment, Food and Rural Affairs, London, UK
3. Corkery G, Ward S, Kenny C, Hemmingway P (2013) Monitoring environmental parameters in poultry production facilities. In: Computer aided process engineering forum. Institute for Process and Particle Engineering, Graz University of Technology, Austria
4. Rimoldi S, Lasagna E, Sarti FM, Marelli SP, Cozzi MC, Bernardini G, Terova G (2015) Expression profile of six stress-related genes and productive performances of fast and slow growing broiler strains reared under heat stress conditions. Meta Gene 6:17–25
5. Oguntunji AO, Alabi OM (2010) Influence of high environmental temperature on egg production and shell quality: a review. Worlds Poult Sci J 66(4):739–750


6. Mutai EBK, Otieno PO, Gitau AN, Mbuge DO, Mutuli DA (2011) Simulation of the microclimate in poultry structures in Kenya. Res J Appl Sci Eng Technol 3(7):579–588
7. British Standards Institution (1990) BS 5502: Part 43: Code of practice for design and construction of poultry buildings, London, UK
8. Meluzzi A, Sirri F (2009) Welfare of broiler chickens. Ital J Anim Sci 8(1):161–173
9. University of Kentucky: factors affecting broiler performance (2010)
10. Fairchild BD (2009) Environmental factors to control when brooding chicks. Technical bulletin 1287, cooperative extension service. University of Georgia, GA
11. Mahale RB, Sonavane SS (2016) Smart poultry farm monitoring using IOT and wireless sensor networks. Int J Adv Res Comput Sci 7(3)
12. Ahmadi MR, Hussien NA, Smaisim GF, Falai NM (2018) A survey of smart control system for poultry farm techniques. Int J Distrib Comput High Perform Comput
13. Lahlouh I, Rerhrhaye F, Elakkary A, Sefiani N (2020) Experimental implementation of a new multi input multi output fuzzy-PID controller in a poultry house system. Heliyon 6(8):e04645
14. Alimuddin KBS, Subrata BS, Sumiati NN (2011) A supervisory control system for temperature and humidity in a closed house model for broilers. Int J Electr Comput Sci 11:75–82
15. Paputungan V, Faruq A, Puspasari F, Hakim F, Fahrurrozi I, Oktiawati UY, Mutakhiroh I (2020) Temperature and humidity monitoring system in broiler poultry farm. In: IOP conference series: materials science and engineering, vol 803, p 012010
16. Wariston FP, Fonsecac LS, Fernando BCG, Luciana (2020) Environmental monitoring in a poultry farm using an instrument developed with the internet of things concept. Comput Electron Agric 170:105257
17. Kaimujjaman M, Islam MM, Mahmud R, Sultana M, Mahabub M (2020) Digital FM transceiver design and construction using microcontroller and I2C interfacing techniques. Int J Recent Technol Eng 8(5)
18. Rahman MF, Md Hossain MM (2021) Implementation of robotics to design a sniffer dog for the application of metal detection. In: Proceedings of international conference on trends in computational and cognitive engineering. Springer, Singapore
19. Al-Chalabi D (2015) Cooling poultry houses: basic principles of humidity and temperature. University of Baghdad
20. Vaičionis G, Kiškienė A, Ribikauskas V, Skurdenienė I (2005) Environmental problems in laying hens poultry houses. In: The second international scientific conference "Rural Development"

A New Model Evaluation Framework for Tamil Handwritten Character Recognition B. R. Kavitha, Noushath Shaffi, Mufti Mahmud, Faizal Hajamohideen, and Priyalakshmi Narayanan

Abstract The robustness of any pattern recognition model relies heavily on the availability of comprehensive samples. Until last year, the Tamil Handwritten Character Recognition (HWCR) works relied on the solitary HPL Tamil dataset [1]. Recently, a new benchmarking for Tamil HWCR was published [2] comprising 94000 samples in total. The efficiency of corroboration using multiple standardized databases is crucial in advancing any research area. Towards this aim, in this paper, we showed different ways of experimentation with these datasets. For this purpose, we utilized transfer learning, and a custom deep neural network, a recently published work for Tamil HWCR [3]. Different experimental setups were suggested that involved independent, cross-testing, and mixed modes of model building and evaluation using two standardized datasets. These setups form a rigorous testing framework for analyzing Tamil HWCR tasks. The work presented in this paper is the first to report the results of Tamil HWCR using two standardized datasets and sets a new model evaluation benchmark. For rapid reproducibility and dissemination, the code and materials used in this study are available at https://github.com/Kavitha-BR-VIT/Tamil-HWCR. B. R. Kavitha (B) School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India e-mail: [email protected] N. Shaffi · F. Hajamohideen Department of Information Technology, University of Technology and Applied Sciences-Sohar, Sultanate of Oman, OM 311 Sohar, Oman e-mail: [email protected] F. Hajamohideen e-mail: [email protected] M. Mahmud Department of Computer Science, Nottingham Trent University, Nottingham NG11 8NS, UK e-mail: [email protected] CIRC and MTIF, Nottingham Trent University, Nottingham NG11 8NS, UK P. Narayanan Department of Computer Science, University College London, London, UK e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_7


Keywords Convolution neural network · Handwritten tamil datasets · OCR · Pretrained models · Tamil handwritten character recognition · Explainable OCR

1 Introduction

Today, computing devices such as smartphones, tablets, and computers significantly influence day-to-day human activities and play an essential role in modern-era communication. Even with the advent of groundbreaking communication technologies, handwriting remains a widely preferred method of communication and information preservation. Hence, effective ways of deciphering the information exchanged between a user and various computing devices remain an essential research area known as handwritten character recognition (HWCR) [4]. There are two forms of HWCR: offline and online [4]. Offline HWCR focuses on recognizing characters from digitized records of any physically written document. Online HWCR pertains to the recognition of characters directly from digitally written text, such as when using a touch-based smartphone or tablet device, a digitizer, or even a PC with a touch interface [5]. This paper focuses on offline HWCR and, more specifically, on the Tamil script [6].

Although research in Tamil OCR has been conducted for several decades [7], the efficacy of the proposed algorithms has yet to reach, or even approach, human-level performance. This can be attributed mainly to varying character appearance due to overlapping character patterns, complexity in character formation, and multifarious writing styles (varied dot patterns, excessive stroke elongation, etc.), even within the writing of a single user. These difficulties continue to pose new challenges to the ever-evolving field of technology. Since Tamil is an official language spoken by a vast population of the Indian subcontinent, several applications also motivate research in this field. In addition to being one of the official languages of India, Tamil is the only Indian language used for bureaucratic communications outside of India, in countries such as Sri Lanka and Singapore. The availability of a robust Tamil OCR system can be a game-changer in the automation and processing of official documents. Hence, it is safe to say that both the challenges and the prominent applications keep Tamil OCR one of the most active research areas.

There are currently two standardized databases available for corroborating the effectiveness of Tamil HWCR: (i) the hpl-tamil-iso-char dataset [1] and (ii) the uTHCD dataset [2]. HP Labs, India, developed the hpl-tamil-iso-char dataset (for brevity, we refer to this dataset as the HPL dataset henceforth). The HPL database comprises isolated characters belonging to 156 character classes with a varying number of samples in each class. The database has train and test repositories containing 50691 and 26926 samples respectively, thus 77617 samples in total from all classes. The uTHCD repository [2] contains 90950 samples belonging to 156 classes. This collection has samples collected from both offline and online modes. The


collected samples are made available in different train–test split proportions such as 70–30 (uTHCD_a), 80–20 (uTHCD_b), 85–15 (uTHCD_c), and 90–10 (uTHCD_d).

The main motivations for carrying out this work are outlined below:
1. Most Tamil HWCR works have relied on the sole availability of the HPL dataset. This work would pave the way for researchers to adopt a new way of testing using multiple standardized datasets, leading to a more objective analysis of their proposed algorithms.
2. The recently published uTHCD database is a collection of samples generated from offline and online modes. In contrast, the HPL dataset is a pure collection of offline samples interpolated from their online counterparts. Each of these datasets exhibits several real-time scenarios; hence, testing algorithms under a wide range of real-time variations can help develop deployable algorithms.

Several Tamil OCR works in the literature use ad hoc databases, which were prepared under constrained environments and are publicly unavailable. This hinders benchmarking of the results. The objective of this paper is to serve as a future reference for efficacy comparisons of Tamil OCR works using standard datasets. Efficacy corroboration using multiple standardized datasets will uncover unforeseen corner cases, which emerge when multifaceted datasets are considered in the experimentation. Towards this aim, in this paper, we consider one of the effective deep neural networks (DNNs) proposed very recently by Kavitha et al. [3] for this purpose and carry out a detailed comparative study using the HPL and uTHCD datasets.

The remainder of this paper is organized as follows: Sect. 2 reviews the literature on Tamil HWCR reported in the last 10 years. Section 3 outlines the DNNs used for comparing the datasets. Section 4 presents the experimental setup, and the subsequent results are presented along with the necessary analysis in Sect. 5. Finally, conclusions are drawn in Sect. 6.

2 Review of Recent Literature

This section presents a brief review of the literature, focusing on works published in the last decade that predominantly used deep learning algorithms for Tamil HWCR. Kowsalya et al. [8] proposed a method in which elephant herding optimization was used to tune the weights of an artificial neural network for recognizing handwritten documents. In the preprocessing step, the input image is passed through a Gaussian filter, binarization, and skew detection. The proposed method has four main functions, applied to a dataset developed in-house: preprocessing, segmentation, feature extraction, and recognition. The performance of the proposed method is assessed with the help of the


metrics sensitivity, specificity, and accuracy, and obtained a recognition rate of 93%. In another work [9], a feed-forward backpropagation neural network was employed as the classifier, using two-dimensional discrete wavelet transforms as features. The objective was to study the multiresolution analysis of characters through the use of wavelet transforms at different frequency bands. This method was tested on a dataset built in-house. Vinotheni et al. [10] utilized a CNN for recognizing Tamil handwritten characters. To achieve a faster convergence rate and the highest recognition accuracy, this research used a modified convolutional neural network (M-CNN) applied to isolated Tamil handwritten character sets. The authors developed yet another dataset, consisting of 54600 samples, that is not publicly available and reported an accuracy of 97.07%; the train–test split proportion used in this study is also unclear. Vijayaraghavan et al. [11] presented the first proper application of ConvNets to recognizing Tamil characters. The CNN was applied to the Tamil handwritten character dataset to classify 35 classes. Further, this research augmented the learning features with the ConvNetJS library, stochastic pooling, probabilistic weighted pooling, and local contrast normalization. The experimental results showed a promising 94.4% accuracy on the HPL dataset; however, this study considered only a subset of 35 character classes out of the 156-class Tamil alphabet set. In [12], a simple ConvNet model consisting of two convolutional and fully connected layers was proposed to recognize 247 Tamil characters using 124 unique symbols. The dropout regularization method was used to avoid over-fitting the network to the training data. This work used the HPL dataset with a train–test–validation split of 69%–11%–20%, respectively. The network achieved train and test accuracies of 88.2% and 71.1%, respectively. This paper emphasized the existence of high inter-class similarity among Tamil characters that leads to misclassifications. Kaliappan et al. [13] used a hybrid classification approach involving weighted feature-point-based and CNN methods to improve the classification accuracy of Tamil vowels. This hybrid approach considered the HPL dataset for experimentation; however, the investigation focused only on recognizing the 12 vowels of the Tamil character set. Another recent study [14] presented work on identifying characters from Tamil palm-leaf manuscripts using a CNN. A character database was generated by scanning palm-leaf manuscripts; it consists of 67 character classes with approximately 100 samples in each class. The model resulted in an accuracy of 87.33%. In [15], the authors extracted features by transforming the character image pixels into eigenspace by applying Principal Component Analysis (PCA). The extracted eigen feature map was fed to the CNN instead of raw character pixels. The HPL dataset was considered in the experimentation, and the model achieved an incremental accuracy improvement over the model without the PCA component.

From this review of recent literature, it is evident that only a handful of methods have used a standardized dataset for their experimentation, while others have used ad hoc datasets, which impedes reproducibility for benchmarking purposes. We address this gap in the literature through the essential contributions below:


1. The state-of-the-art CNN architecture for Tamil OCR [3] has been considered in this work to demonstrate benchmarking on the standardized datasets.
2. In addition, a popular transfer learning model has been considered to report efficacy results, as most recent deep learning works commonly use pre-trained models for establishing a baseline accuracy.
3. In addition to the HPL dataset, the recently published Tamil dataset, uTHCD [2], has been considered in this work.
4. We have also suggested new ways of experimentation, making full use of the two standardized datasets.

To the best of our knowledge, this is the first paper in the domain of Tamil HWCR research that reports results based on two standardized datasets.

3 Proposed Methodology

3.1 A Nine-Layered Convolutional Neural Network for Tamil HWCR

In this work, we leverage the DNN model developed by Kavitha et al. [3] to compare the performance of recognizing handwritten Tamil characters on the two different datasets. The model was constructed with five convolutional layers, with a max-pooling layer integrated after every two consecutive convolutional layers, followed by two fully connected layers. The building blocks of this model are depicted in Fig. 1. The first two convolutional layers are composed of 16 feature maps, the next two of 32 feature maps, and the last one of 64 feature maps, each with a 3 × 3 filter size.

Fig. 1 The Nine-Layered CNN for the recognition of tamil HWCR [3]

Table 1 Hyperparameter values used in our study

Sl. No   Hyper-parameter   Value
1        Optimizer         Adam
2        Learning rate     0.001
3        Initialization    Xavier
4        Loss function     Cross entropy
5        Regularization    Dropout
6        Batch size        64

The pooling layers used a 2 × 2 filter with a stride of 2. The two fully connected layers are composed of 500 and 200 neurons, respectively. The model used ReLU as the activation unit for all convolutional layers and Adam as the optimizer with a learning rate of 0.001. To avoid overfitting, the dropout regularization technique was used. The model hyper-parameter values are specified in Table 1. This customized CNN model was trained and tested on the HPL dataset, consisting of 78000 samples of 156 classes with 500 samples per class [3] (for brevity, we refer to this CNN model as the custom CNN model in the rest of the paper). Before training, the images were preprocessed, including normalization and scaling. The model was trained for 100 epochs and reported a training accuracy of 95.16% and a testing accuracy of 97.7%, which is by far the highest accuracy achieved by any DNN-based algorithm on the HPL dataset.
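A Keras sketch of this architecture is given below for orientation; the input resolution (assumed 64 × 64 grayscale), the pooling placement after the fifth convolutional layer, and the dropout rate are our assumptions where the description above leaves the details open, so this is not the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_custom_cnn(input_shape=(64, 64, 1), num_classes=156, dropout=0.5):
    # Five conv layers (16-16-32-32-64, 3x3 filters), pooling after each pair,
    # then two dense layers of 500 and 200 neurons and a 156-class softmax.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(500, activation="relu"),   # Keras default init is Glorot (Xavier)
        layers.Dropout(dropout),
        layers.Dense(200, activation="relu"),
        layers.Dropout(dropout),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```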

3.2 Transfer Learning: The VGG16 Model

Establishing a benchmark using a CNN would be inadequate without considering the transfer learning approach. Transfer learning is the process of reusing a model for a purpose different from the one it was originally trained for [16]. The VGG16 model [17] is one of the popular models used in transfer learning owing to its success in solving many computer vision problems [18]. Motivated by this, we have used this architecture in the proposed work to classify Tamil characters. The architecture of the VGG16 model is shown in Fig. 2. The top layer, constituting three dense layers, is replaced by two fine-tuning layers of 512 neurons each, followed by the final classification layer with 156 neurons. The input to this model is an image of size 64 × 64 × 3.
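A Keras sketch of this transfer-learning setup is shown below; whether the pre-trained convolutional base is kept frozen during fine-tuning is our assumption, as the description does not specify it.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_vgg16_transfer(num_classes=156, input_shape=(64, 64, 3)):
    # VGG16 convolutional base with ImageNet weights; the original top is dropped
    # and replaced by two 512-neuron fine-tuning layers and a 156-class softmax.
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False    # assumption: keep the pre-trained base frozen
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(512, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```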



Fig. 2 The VGG16 Architecture

4 Experimentation

4.1 The Tamil Benchmark Datasets

HP Labs India developed handwritten character datasets for Indic scripts such as Tamil, Telugu, and Devanagari. The isolated Handwritten Tamil Character Dataset is our subject of study; a subset of it was first used in IWFHR 2006 for the Tamil Handwritten Character Recognition Competition [1]. The HPL dataset was developed with the aid of HP Tablet PCs. Native Tamil writers from various parts of Tamil Nadu, India, wrote the 156 unique Tamil characters, and the dataset is stored in the standard UNIPEN format. An offline version of the dataset was derived from the online dataset by applying piecewise linear interpolation with a uniform thickening factor [1]. The images are in the form of bi-level TIFF images. While most classes have nearly 500 samples each, some have as few as 271. The dataset is available for download as training and test images for all 156 unique characters, and it was the only dataset available for Tamil HWCR from 2006 onward. The drawback of this dataset is that some character classes have fewer samples, as few as 271, which causes an imbalance in the dataset [2]. In addition, as samples are generated through piecewise interpolation of online stroke coordinate points, the samples of this database do not necessarily represent actual offline samples found in physical documents such as forms, handwritten bills, legal documents, etc. This drawback was addressed recently by Noushath et al. [2], who published another repository known as the uTHCD dataset.


Table 2 Comparison of HPLabs dataset and uTHCD dataset

Feature                                         HPLabs dataset                          uTHCD dataset
Total number of samples                         78000                                   90950
Data capture method                             Online                                  Online and offline
No of classes                                   156                                     156
Per class samples                               ~500 (for some classes as low as 271)   ~600
Available format                                TIFF                                    Raw (bmp) and HDF5
Publicly available in HDF format                ✘                                       ✔
Uniform distribution of samples in each class   ✘                                       ✔
Processed database available for download       ✘                                       ✔

4.1.1 The uTHCD Dataset

The uTHCD dataset is a recent dataset of handwritten Tamil characters comprising around 91000 samples of 156 classes collected from 850 native Tamil writers [2]. This dataset is publicly available in a convenient HDF5 format on the Kaggle website [2]. The samples were collected in two modes, offline and online. In offline mode, samples were obtained by scanning the physical collection forms and applying the necessary preprocessing steps such as skew correction and noise removal [2]. In online mode, the same procedure was followed, except that the collection medium was a digital form filled in with digital pens. Samples collected in both modes are distributed so that each character class contains samples from both of them. Unlike the HPL database, this balanced dataset contains approximately 600 samples in each class. The train and test folders contain both forms of data collection. Table 2 summarizes the salient features of both repositories. In all our experiments that use uTHCD samples, we used a 70–30% train–test split proportion, which is comparable to the number of samples in the HPL dataset.
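A hedged sketch of such a 70–30 split on the HDF5 samples is shown below; the file name and the dataset keys "images" and "labels" are hypothetical placeholders, not the published dataset's actual layout.

```python
# Sketch of a stratified 70-30 train-test split of uTHCD samples. The HDF5 file
# name and the keys "images"/"labels" are assumptions for illustration only.
import h5py
import numpy as np
from sklearn.model_selection import train_test_split

with h5py.File("uTHCD.h5", "r") as f:       # hypothetical file name
    images = np.array(f["images"])          # hypothetical key
    labels = np.array(f["labels"])          # hypothetical key

x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)

print(x_train.shape[0], "training samples,", x_test.shape[0], "test samples")
```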

4.2 Performance Metrics

The performance metrics used in this work are accuracy, precision, and recall. Accuracy refers to the proportion of character samples that are correctly recognized as their class out of the total number of character samples. Precision is the proportion of samples predicted as a given class that actually belong to that class, while recall is the proportion of samples of a given class that are correctly identified by the model.
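For reference, with true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) counted per character class, these metrics take the standard form:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)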


4.3 Implementation

The CNNs were implemented with the TensorFlow deep learning framework in Python using the Keras API. Models were trained and evaluated on a standalone machine with an Intel i7-8700 CPU running at 3.2 GHz.

5 Results and Analysis

This section presents the series of experiments conducted to demonstrate the effective use of the two available standardized datasets for Tamil HWCR. Figure 3 shows the rate of convergence of the custom CNN model over 50 epochs. The figure shows that the rate of convergence on uTHCD is slightly better than on the HPL dataset. However, this experiment does not point to any particularly noticeable property of either dataset. In DNN training, researchers strive for faster convergence of the model, but the rate of convergence of the optimizer alone is not decisive, since the quality of each gradient step taken towards the global minimum cannot be quantified through any single feature (non-convexity, stopping criteria that are not based on a distance measure, etc.) [19]. We then conducted experiments to test the efficacy of these datasets independently as well as by cross-testing. In cross-testing, we trained the model using HPL and tested it with uTHCD, and vice versa. The HPL dataset consists of 82,928 training images, and a subset of these training images was given as the test dataset. Hence we

Fig. 3 Rate of Convergence of model trained using HPLabs (orange) and uTHCD (blue)


Table 3 Performance evaluation of various methods

Methods          Training Data   Training Acc   Validation Acc   Testing Data   Testing Acc
Custom CNN [3]   HPL             98.3           93.5             HPL            87.3
                                                                 uTHCD          67.4
Custom CNN [3]   uTHCD           99.63          96.15            uTHCD          94.9
                                                                 HPL            66.2
VGG16            HPL             97.42          88.46            HPL            94.8
VGG16            uTHCD           92.32          92.05            uTHCD          93.73

decided to make a train–test split of 70–30 proportion, resulting in a set of 58,122 training images and 24,806 test images. The images were chosen randomly from the actual dataset for the train and test sets. We used the same 70–30 train–test split for the uTHCD dataset as well, giving a training set of 62870 images and a test set of 28080 images. For both datasets, the training images were resized to 64 × 64 using the inter-area interpolation technique, and scaling was performed to normalize the values to the [0, 1] range. The training images were further split into training and validation images in a 90–10% proportion. The custom CNN model was constructed with the hyperparameters shown in Table 1; the model was first trained with the HPL dataset, and the training accuracy, validation accuracy, training loss, and validation loss were recorded. The model was then tested with the HPL test set and the uTHCD test set, which gave accuracies of 87.3% and 67.4%, respectively. Secondly, the model was trained with the uTHCD dataset and tested with the HPL and uTHCD datasets, which gave accuracies of 66.2% and 94.9%, respectively. The training, validation, and testing accuracies of all the experiments are tabulated in Table 3. The samples in the HPL dataset are all single-pixel-width characters, and characters of that kind are not adequately present in the uTHCD training data; hence, accuracy was low when testing with uTHCD. On the other hand, the uTHCD test samples, especially the online ones, closely resemble the training samples of the HPL data, which could be a reason for the slight spike in accuracy when testing with the HPL dataset. The custom CNN, when trained and tested with the HPL dataset, achieved good testing accuracy without overfitting; to the best of our knowledge, this is the highest accuracy achieved by any CNN in the literature, and the model was custom designed for the HPL dataset [3]. However, the same model, when trained and tested with the uTHCD dataset, does suffer from overfitting. The gap between training and testing accuracies could be curtailed through various techniques such as regularization and augmentation, which would also result in enhanced test accuracy. The transfer learning approaches have an accuracy-generalization trade-off. The accuracy of the VGG16 model on the uTHCD test data is 93.73%, which is similar to that of the custom CNN architecture [3]. However, the custom model suffered from


an overfitting problem, unlike the VGG16 model. The merit of using a transfer learning model lies in its generalization capability. Transfer learning approaches usually overfit the data when there are fewer training samples; since we have a sufficiently large number of training samples, the transfer learning approach generalized better, helping us achieve on-par performance on the test data without overfitting. However, its performance is lower than that of the custom CNN model when tested using both datasets. This is mainly because of the pretrained weights in the frozen layers; if the frozen layers were made trainable with a finely tuned learning rate, it would probably yield the best results [20]. The lower test accuracy obtained by the CNN models on the uTHCD dataset reflects the underlying complexity represented by uTHCD. This is a useful property, because algorithms that improve accuracy on this database will tend to be robust in nature. The final set of experiments was conducted by training CNN models on samples from both datasets simultaneously. This experiment was conducted to assess the generalization capability of the trained models. As mentioned earlier, it also has the benefit of learning from the corner cases of multifaceted datasets, which is useful in real-time applications: the intricate features learned from two different datasets can help in building a much more accurate and powerful model. The experiment was carried out by randomly picking 250 samples per class from each of the HPL and uTHCD datasets for all 156 character classes, which creates a mixed dataset of 78,000 samples in total (500 per class). These training images were again split into training and validation images and fed to the custom CNN model. After running 50 epochs with a batch size of 64, the model gave a training accuracy of 98.53% and a validation accuracy of 94.28%. The model was evaluated with the HPL and uTHCD test datasets, which resulted in accuracies of 99.2% and 95.4%, respectively. The results are reported in Table 4. The test results were better than those of the standalone and cross-testing experiments because the model absorbed multifaceted features from both datasets by making optimal use of the mixed samples. In future, this paves the way for the development of robust algorithms that exploit the variations captured in both databases. In order to understand what features the model has learned, we used Grad-CAM activation mappings on each convolutional layer. The initial layers identified boundaries and curves, and eventually more specific features corresponding to each character were learned. It is evident from Fig. 4 that for the letter 'Neee' the

Table 4 Performance evaluation of the CNN model using the mixed dataset

Method       Training Data         Train Acc   Validation Acc   Test Data   Test Acc   # of Test Samples   Misclassified Samples
Custom CNN   Mixed (HPL + uTHCD)   98.53       94.28            uTHCD       95.4       28080               1302
                                                                HPL         99.2       24806               189


Fig. 4 GradCAM heatmaps for the character “Neee”. The first image is the test data character. The consecutive four images are activations in convolution layers and the last image is heatmap overlayed on actual character

Fig. 5 Precision and recall values for uTHCD test data and HPL test data on mixed dataset trained model

robust features were clearly visible at Conv5, and the heatmap overlaid on the input image confirms this. Finally, we analyzed the misclassified characters based on the precision and recall values on the HPLabs test data and the uTHCD test data. From Fig. 5, it is evident that 10% of the classes (16 out of 156) performed poorly on the uTHCD dataset, with precision and recall values below 0.90, whereas with the HPL dataset none of the classes had precision or recall below 0.90. These results indicate the relative simplicity of the HPL samples and the complexity of the uTHCD samples. The uTHCD samples represent real-time scenarios, as they contain samples extracted from both physical and digital forms. The HPL dataset, in contrast, contains only samples obtained through interpolation of online stroke coordinates, which cannot adequately represent the variations present in offline samples, such as uneven variation of stroke thickness, discontinuities in strokes due to the use of different writing tools (pen, pencil, gel pen), and the noise inherent in digitizing and scanning physical forms.
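A minimal sketch of how such Grad-CAM heatmaps can be computed with TensorFlow is given below; the layer name "conv5" is a placeholder for whichever convolutional layer of the trained model is being visualized, and the trained `model` and a single preprocessed `image` are assumed to exist already.

```python
# Sketch of Grad-CAM heatmap computation for one test image; "conv5" is a
# placeholder layer name; `model` and `image` are assumed to be available.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name="conv5"):
    # Model mapping the input to (chosen layer's feature maps, predictions)
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = tf.reduce_max(preds, axis=1)        # score of the predicted class
    grads = tape.gradient(class_score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))       # channel weights (GAP of gradients)
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)   # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                 # keep positive contributions only
    cam = cam / (tf.reduce_max(cam) + 1e-8)               # normalize to [0, 1] for overlaying
    return cam.numpy()
```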

6 Conclusion

In our previous work [2], we argued that having multiple standardized datasets would act as a catalyst in the advancement of a research field. To substantiate that claim, in this paper we have demonstrated different ways of utilizing two standardized


datasets—HPL and uTHCD—using transfer learning and a recently published state-of-the-art model [3] for Tamil HWCR. We tried the following three testing mechanisms in our experiments:

1. Evaluation of the model by independent training and testing using separate datasets.
2. Model evaluation through cross-testing, where we train the model using one dataset and test it using the other.
3. Model evaluation by considering samples from both datasets simultaneously.

When evaluating the CNN models using separate datasets, reasonable test accuracy was obtained on both test sets. However, one may encounter overfitting issues (as with the uTHCD dataset), which can be mitigated in several ways, such as regularization and data augmentation. The models performed poorly when cross-tested, for the obvious reason that they were tested with samples not seen during the training phase. Finally, we also experimented with combining the two datasets for training, which produced better test results. The experiments suggest that the uTHCD test samples are complex in nature, as they consistently obtained lower test accuracy, precision, and recall. Hence, as we advance Tamil HWCR, we recommend training with uTHCD samples or with mixed samples from both the HPL and uTHCD datasets. A model trained on such a complex or mixed dataset can learn the subtle and intricate patterns needed for robust recognition in the largely complicated task of Tamil HWCR.

References 1. Agrawal, M., Bhaskarabhatla, A.S., Madhvanath, S.: Data collection for handwriting corpus creation in Indic scripts. In: International Conference on Speech and Language Technology and Oriental COCOSDA (ICSLT-COCOSDA 2004), New Delhi, India (November 2004). Citeseer (2004) 2. Shaffi N, Hajamohideen F (2021) uTHCD: a new benchmarking for tamil handwritten OCR. IEEE Access 9:101469–101493 3. Kavitha B, Srimathi C (2022) Benchmarking on offline handwritten Tamil character recognition using convolutional neural networks. Journal of King Saud University-Computer and Information Sciences 34:1183–1190 4. Pal, U., Chaudhuri, B.: Indian script character recognition: a survey. pattern Recognition 37(9), 1887–1899 (2004) 5. Singh A, Bacchuwar K, Bhasin A (2012) A survey of OCR applications. International Journal of Machine Learning and Computing 2(3):314 6. Shaffi N, Hajamohideen F (2021) Few-Shot Learning for tamil handwritten character recognition using deep siamese convolutional neural network. In: Mahmud M, Kaiser MS, Kasabov N, Iftekharuddin K, Zhong N (eds) Applied intelligence and informatics. AII 2021. Communications in computer and information science, vol 1435. Springer, Cham. https://doi.org/10. 1007/978-3-030-82269-9_16 7. Siromoney G, Chandrasekaran R, Chandrasekaran M (1978) Computer recognition of printed Tamil characters. Pattern recognition 10(4):243–247


8. Kowsalya S, Periasamy P (2019) Recognition of tamil handwritten character using modified neural network with aid of elephant herding optimization. Multimedia Tools and Applications 78(17):25043–25061 9. Jose, T.M., Wahi, A.: Recognition of tamil handwritten characters using daubechies wavelet transforms and feed-forward backpropagation network. International Journal of Computer Applications 64(8) (2013) 10. Vinotheni, C., Lakshmana Pandian, S., Lakshmi, G.: Modified convolutional neural network of tamil character recognition. In: Advances in Distributed Computing and Machine Learning, pp. 469–480. Springer (2021) 11. Vijayaraghavan, P., Sra, M.: Handwritten tamil recognition using a convolutional neural network. In: 2018 International Conference on Information, Communication, Engineering and Technology (ICICET). pp. 1–4 (2014) 12. Prakash, A.A., Preethi, S.: Isolated offline tamil handwritten character recognition using deep convolutional neural network. In: 2018 International Conference on Intelligent Computing and Communication for Smart World (I2C2SW). pp. 278–281. IEEE (2018) 13. Kaliappan, A.V., Chapman, D.: Hybrid classification for handwritten character recognition of a subset of the tamil alphabet. In: 2020 6th IEEE Congress on Information Science and Technology (CiSt). pp. 167–172. IEEE (2021) 14. Haritha, J., Balamurugan, V., Vairavel, K., Ikram, N., Janani, M., Indrajith, K., et al.: Cnn based character recognition and classification in tamil palm leaf manuscripts. In: 2022 International Conference on Communication, Computing and Internet of Things (IC3IoT). pp. 1–6. IEEE (2022) 15. Sornam, M., Vishnu Priya, C.: Deep convolutional neural network for handwritten tamil character recognition using principal component analysis. In: International Conference on Next Generation Computing Technologies. pp. 778–787. Springer (2017) 16. KO, M.A., Poruran, S.: OCR-nets: Variants of pre-trained CNN for Urdu handwritten character recognition via transfer learning. Procedia Computer Science 171, 2294–2301 (2020) 17. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014) 18. Singh S, Sharma A, Chauhan VK (2021) Online handwritten gurmukhi word recognition using fine-tuned deep convolutional neural network on offline features. Machine Learning with Applications 5:100037 19. Foret, P., Kleiner, A., Mobahi, H., Neyshabur, B.: Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412 (2020) 20. Kornblith, S., Shlens, J., Le, Q.V.: Do better imagenet models transfer better? In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2661–2671 (2019)

Integrated Linear Regression and Random Forest Framework for E-Commerce Price Prediction of Pre-owned Vehicle Amit Kumar Mishra, Saurav Mallik, Viney Sharma, Shweta Paliwal, and Kanad Ray

Abstract The E-Commerce industry has taken the world in its stride and has been growing at an exponential rate. This paper presents a machine learning-based model to predict the price of a used vehicle for selling and buying purposes. The model employs two algorithms, Linear Regression and Random Forest Regression, and draws a comparison between the two on the basis of standard performance measures. The uniqueness of this work is that the model is not only capable of predicting the price of a used vehicle but can also be extended with minimal effort to any kind of product across various spheres of the E-commerce industry. Keywords Regression · Machine learning · Random forest · Price prediction

A. K. Mishra Department of Computer Science and Engineering, Jain University, Bangalore, Karnataka, India S. Mallik (B) Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA, USA e-mail: [email protected]; [email protected] V. Sharma Department of Computer Science and Engineering, Anand Engineering College, Agra, India S. Paliwal · K. Ray (B) School of Computing, DIT University, Dehradun, UK, India e-mail: [email protected] K. Ray Department of Physics, Electronics and Communication, Amity University, Jaipur, RJ, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_8


1 Introduction

The advent of the pandemic has accelerated the need for and demand of e-commerce platforms, making them available for satisfying most human needs. The global market is increasingly driven by second-hand goods, and hence price prediction plays a critical role. Pricing is calculated using statistical and experimental calculations, and price prediction helps in evaluating the future value of a particular product. With the advent of high-performance computing systems, the field of machine learning has reached new heights. The idea behind this field is to make machines capable of learning from experience. Several algorithms, such as linear regression, logistic regression, Bayesian learning, random forest regression, and k-nearest neighbours, have been successfully applied to price prediction. Several e-commerce companies act as mediators between parties willing to sell and purchase pre-owned cars, and thus they hold many data instances from the sales that have happened through them. If the price of a commodity is equitable enough to satisfy both the seller and the buyer, the job is done. Machine learning helps in developing a model for predicting the price of a car based on the various attributes associated with it. Such a model can help companies, buyers, and sellers: the attributes of the car that the seller is selling are substituted into the model, which then produces a predicted price. If the asking price is much above the prediction, the company can tell the seller that the price they are asking for is too high and that they are unlikely to sell the car at that price. Similarly, when a buyer sets a bidding value for a car with specific attributes at a much lower price than what has been listed, the company can show the buyer the predicted results. Thus, the proposed model optimizes the transaction, leaves both parties satisfied, and leads to better business and goodwill. The flow of the paper is as follows: a glimpse of the background study in Sect. 2, the research gap in Sect. 3, the problem formulation and methodology in Sect. 4, the results and analysis in Sect. 5, and the conclusion and discussion in Sect. 6.

2 Related Work

Shehadeh et al. [1] proposed a framework integrating three methods, namely Modified Decision Trees (MDT), LightGBM, and XGBoost, for predicting the residual value of heavy construction equipment. On the basis of the evaluation coefficients, the MDT algorithm was found to have higher prediction accuracy than the other two methods. The development of enhanced heating and ventilation systems depends on the thermal comfort offered by the vehicle cabin; predicting thermal comfort is a difficult task, and hence Warey et al. [2] proposed a model based on machine learning and computational fluid dynamics, which exhibited an error of less than 5%. Another important consideration is battery usage in electric vehicles. The durability and reliability


of the battery is a foremost requirement, and hence Chandran et al. [3] came up with an efficient machine learning approach for cost-effective lithium-ion batteries. Their model comprises six different machine learning algorithms, of which Artificial Neural Networks exhibited the highest performance with the lowest error score. Samruddhi et al. [4] designed a model for used car price prediction using K-Nearest Neighbor, and the model exhibited an accuracy of 85%. Yadav et al. [5] performed an analysis for determining the selling price of used cars; the model integrates a clustering technique with Linear Regression and Random Forest. Das et al. [6] performed a study on different algorithms to identify the best one for predicting used car prices; the results show that the Support Vector Machine provided the highest accuracy of 86.7%. Narayana et al. [7] state that predictive analytical models will serve as a boon for the car selling retail business, and to prove this the researchers came up with a framework integrating machine learning and regression techniques. Varshitha et al. [8] designed a working model with a low error value, integrating Artificial Neural Networks with Keras regression and other machine learning algorithms, of which Random Forest exhibited the lowest mean squared error. Amik et al. [9] designed a model for predicting the price of pre-owned vehicles by applying different machine learning algorithms, including Linear Regression, Lasso Regression, Extreme Gradient Boosting, and Decision Trees, and reported around 91% accuracy in predicting pre-owned vehicle prices. Selvaratnam et al. [10] designed an approach for determining the important features required for automobile price prediction; the framework integrates Lasso and stepwise regression selection algorithms. Kriswantara et al. [11] designed a model based on Random Forest and compared the results with Linear Regression and Decision Tree; the lowest error value for the model came out to be 1.006. Furian et al. [12] designed a machine learning-based branch and price algorithm for sampled vehicle routing problems, where the machine learning algorithms are used to predict binary decision variables and branching scores. Reddy et al. [13] laid out a comparative study between Logistic Regression, Linear Regression, and K-Nearest Neighbor for the prediction of old/used car prices and concluded that K-Nearest Neighbor outperformed the other two with high precision. Jin et al. [14] carried out a study on price prediction for second-hand vehicles using different regression techniques, including Linear Regression, Polynomial Regression, Support Vector Regression, Random Forest Regression, and Decision Tree-based Regression; Random Forest Regression yielded the highest R-squared value of 0.90416. Fathalla et al. [15] proposed a deep end-to-end model for predicting a vehicle's price using Long Short-Term Memory and Convolutional Neural Networks, and concluded that the proposed method achieved a better mean absolute error than Support Vector Machines. Siva et al. [16] designed an approach based on Linear Regression for vehicle price prediction. The results were compared with a Support Vector Machine, and the conclusion states that Linear Regression gave 91% more accurate results than the Support Vector Machine. Monburinon et al. [17] laid out a comparative study of regression algorithms based


on supervised learning. The study concluded that the Gradient Boosted Regression Tree yielded the best performance with a mean absolute error of 0.28. Ahtesham et al. [18] implemented machine learning algorithms on Apache Spark and concluded that Gradient Boosting yielded 89% accuracy on a Pak-Wheels dataset. Han et al. [19] designed a framework using feature engineering and feature screening, where the feature engineering consists of correlation analysis and feature extraction; the algorithms applied are Random Forest and XGBoost.

3 Research Gap

A large number of studies have been undertaken by researchers in the field of data science, pertaining to a diversified set of applications, but there is still room for an integrated model that, from intensive data analysis to a generalized model, caters to a wide range of applications. Comprehensive data pre-processing, which is the backbone of any data analytics task, especially when developing a machine learning model, should be accorded prime importance. This study aims at filling this gap in order to provide a coherent model for regression tasks.

4 Problem Formulation and Methodology

The data set taken for experimental purposes belongs to a publicly available domain. A total of 45000 × 19 data instances were considered, and the 19 attributes were taken into consideration while predicting the price. We propose a model that can be used for predicting the price of a pre-owned vehicle. The model has been trained and tested on two sets of data: one in which data rows with missing values have been omitted, and a second in which missing values have been replaced with appropriate values.
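A hedged pandas sketch of the two missing-value treatments described above (dropping incomplete rows versus imputing them) might look like this; the file name and the choice of mode/median imputation are assumptions for illustration.

```python
# Sketch of the two data treatments: (1) drop rows with missing values,
# (2) impute them. File name and imputation choices are illustrative only.
import pandas as pd

df = pd.read_csv("used_cars.csv")          # hypothetical file name, 45000 x 19

# Treatment 1: omit rows containing missing values
df_omitted = df.dropna()

# Treatment 2: replace missing values with appropriate substitutes
df_imputed = df.copy()
for col in df_imputed.columns:
    if df_imputed[col].dtype == "object":
        df_imputed[col] = df_imputed[col].fillna(df_imputed[col].mode()[0])  # most frequent value
    else:
        df_imputed[col] = df_imputed[col].fillna(df_imputed[col].median())   # median value
```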

4.1 Machine Learning

The field of machine learning deals with specialized algorithms that evolve with time and learn from experience. A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E [20]. Linear Regression is a model that predicts the value of a dependent variable given the values of one or more independent variables, whereas the Random Forest algorithm has its roots in decision trees and is used for classification as well as regression tasks. The features or attributes considered in the study are described in Table 1.


Table 1 Features description table

Feature             Data type   Description                                   Categories of variables
VehicleType         String      Types of cars                                 Various types available
Name                String      String consisting of car name, brand name     Combination of strings
DateOfAdvt          Date        When advt first appeared
Seller              String      Name of seller                                Private, Commercial
OfferType           String      Has the buyer requested for an offer          Offer, Request
Price               Integer     Price of car                                  ($)
Abtest              String      Two versions of advt                          Test, Control
YOR                 Integer     Year of registration                          Year
Gearbox             String      Type of gearbox                               Manual, Auto
Power               Integer     Power of car
Model               String      Model of car                                  Various models available
Kilometer           Integer     Odometer reading                              Some number
MOR                 Integer     Month of registration                         1, 2, ..., 12
FuelType            String      Fuel type                                     Petrol and six others
Brand               String      Make of car                                   Various car making companies
NotRepairedDamage   String      If repaired then Yes, otherwise No            Yes, No
DateCreated         Date        Date at which advertisement was created       Date
PostalCode          Integer     Postal code of seller
LastSeen            Date        When a person saw advertisement last online
4.2 Process Flow

Moving ahead, we formulate a precise problem statement and then conceptualize the problem. The problem statement is broken down into multiple smaller problem statements, and each of these is identified as either a classification or a function approximation problem. Once the smaller problems are solved, their solutions are put together in a logical arrangement. Here we have used Python and Microsoft Excel as tools for data frame storage, designing classifiers, and model visualization. The data set has been divided in a 70:30 ratio for training and testing, respectively. Once the model is ready, it is tested on the test data to check its accuracy. In this study, the problem statement is: for a large real-time data set with a sufficient number of features (features of the vehicle), design a regression model for predicting the price of the vehicle. The proposed framework is given in Fig. 1, whereas Fig. 2 describes the conceptualization of the problem. Data has been read from a (.csv) file and is


embedded into a data frame in Python. Since the data obtained may contain damaged values, it is important to perform data pre-processing; the missing and garbage values need to be identified and replaced with appropriate values as required by the algorithm. The detailed framework below gives a deeper insight into the flow of the methodology and the implemented concept.
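As a sketch of the flow just described (CSV read into a data frame, pre-processing, 70:30 split), the snippet below assumes the target column is named "Price" as in Table 1; the file name and the one-hot encoding of categorical attributes are illustrative assumptions.

```python
# Sketch of the 70:30 train-test split with simple categorical encoding;
# the file name and the encoding choice are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("used_cars.csv")                  # hypothetical file name
df = df.dropna()                                   # or impute, as discussed above

X = pd.get_dummies(df.drop(columns=["Price"]))     # one-hot encode categorical features
y = df["Price"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42)         # 70:30 split for training/testing
```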

Fig. 1 Proposed framework outline for the regression task under study (problem statement → problem conceptualization, one precise, unambiguous, lucid statement of the problem → decomposition into sub-problems → solution conceptualization (flow chart) → visualization using graphs (box plot, scatter plot, histogram, etc.) and correlation among attributes → classification/regression method identification with tools (Python, Excel), algorithms, and mathematical representation (function approximation, loss function, sigmoid function) → optimization as needed and evaluation via confusion matrix on test data → realization of the solution → model ready for future data)

Fig. 2 Problem conceptualization (real-time dataset (45000 × 19) → regression technique (supervised learning) → regression model to predict the price of a pre-owned vehicle)

• Problem conceptualization: given various features of a pre-owned vehicle, develop a regression model to predict its price.
• Problem characterization (regression): a priori knowledge, a continuous dependent variable, and numerical and categorical independent variables, as illustrated in Fig. 2.

5 Model Formulation and Result Matrix

We have formulated two models, one with Linear Regression and the other with Random Forest Regression, after performing data pre-processing. The simplest form of the linear regression model is

Y = b + w_1 x_1 + w_2 x_2 + ... + w_m x_m,

where b, w_1, w_2, ..., w_m are the parameters, Y is the label, and x_1, x_2, ..., x_m are the input features. In this study, we employ linear regression to formulate a model that can predict the price of pre-owned cars given the various input features. We have taken two sets of data, one obtained by omitting missing values and another by substituting appropriate values in place of the missing values. Python has been used as the tool for model formation and verification. The Root Mean Square Error (RMSE), which measures the deviation of the predicted values from the actual values, and the R-squared value, which is the proportion of the variation in the dependent variable that is predictable from the independent variables, are calculated to assess the performance of the models. Table 2 gives the results when missing values are omitted, and Table 3 gives the results when missing values are substituted with appropriate values.
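A minimal scikit-learn sketch of fitting the two models and computing the R-squared and RMSE values reported in Tables 2 and 3 is given below; it reuses the X_train/X_test/y_train/y_test variables from the split sketch above, and the Random Forest settings are assumptions.

```python
# Sketch: fit Linear Regression and Random Forest Regression and report
# R-squared and RMSE on the train and test subsets.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

models = {"Linear regression": LinearRegression(),
          "Random forest regression": RandomForestRegressor(n_estimators=100, random_state=42)}

for name, reg in models.items():
    reg.fit(X_train, y_train)
    for split, X_, y_ in [("train", X_train, y_train), ("test", X_test, y_test)]:
        pred = reg.predict(X_)
        rmse = np.sqrt(mean_squared_error(y_, pred))
        print(f"{name} ({split}): R2 = {r2_score(y_, pred):.2f}, RMSE = {rmse:.3f}")
```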


Table 2 Results when missing values are omitted

Metric      Base model   Linear regression (train / test)   Random forest regression (train / test)
R Squared   –            0.75 / 0.73                        0.89 / 0.79
RMSE        1.11         0.46                               0.35

Table 3 Results when missing values are substituted with appropriate values

Metric      Base model   Linear regression (train / test)   Random forest regression (train / test)
R Squared   –            0.75 / 0.73                        0.87 / 0.81
RMSE        1.15         0.617                              0.516

6 Conclusion and Discussion

For the omitted data set, the R-squared values of the Linear Regression model on the training and testing subsets were found to be 0.75 and 0.73, respectively, which are close to each other. The R-squared values of the Random Forest algorithm on the training and test subsets come out to be 0.89 and 0.79, respectively. The RMSE value for the base model is 1.11, and the RMSE values of Linear Regression and Random Forest Regression on this data set are 0.46 and 0.35, respectively. Hence, it is clearly evident that Random Forest Regression has an edge over Linear Regression. For the imputed data set, the R-squared values of the Linear Regression model on the training and testing data were 0.75 and 0.73, respectively, whereas for the Random Forest Regression model the R-squared values on the training and test subsets come out to be 0.87 and 0.81, respectively. The RMSE value for the base model is 1.15, and the calculated RMSE values for Linear Regression and Random Forest Regression are 0.617 and 0.516, respectively, again putting Random Forest Regression above Linear Regression. Overall, in both cases, where missing data records were omitted and where they were substituted with appropriate values, Random Forest Regression outperformed Linear Regression.

References 1. Shehadeh A, Alshboul O, Al Mamlook RE, Hamedat O (2021) Machine learning models for predicting the residual value of heavy construction equipment: an evaluation of modified decision tree, LightGBM, and XGBoost regression. Autom Construct 129(1)5:103827. https:/ /doi.org/10.1016/j.autcon.2021.103827 2. Warey A, Kaushik S, Khalighi B, Cruse M, Venkatesan G (Feb.2020) Data-driven prediction of vehicle cabin thermal comfort: using machine learning and high-fidelity simulation results. Int J Heat Mass Transf 148(22):119083. https://doi.org/10.1016/j.ijheatmasstransfer.2019.119083


3. Chandran V, Patil CK, Karthick A, Ganeshaperumal D, Rahim R, Ghosh A (2021) State of charge estimation of lithium-ion battery for electric vehicles using machine learning algorithms. World Electric Veh J 12(1), 38. https://doi.org/10.3390/wevj12010038 4. Samruddhi K, Kumar RA (2020) Used Car price prediction using k-nearest neighbor based model. Int J Innov Res Appl Sci Eng (IJIRASE) 4(2), 629–632. https://doi.org/10.29027/IJI RASE.v4.i2.2020.629-632 5. Yadav A, Kumar E, Yadav PK (March 2021) Object detection and used car price predicting analysis system (UCPAS) using machine learning technique. Linguistics Culture Rev 5(2):1131– 1147. https://doi.org/10.21744/lingcure.v5nS2.1660 6. Das Mou A, Saha PK, Nisher SA, Saha A (2021) A comprehensive study of machine learning algorithms for predicting car purchase based on customers demands. In: 2021 International conference on information and communication technology for sustainable development (ICICT4SD, pp 180–184. https://doi.org/10.1109/ICICT4SD50815.2021.9396868 7. Narayana CV, Likhitha CL, Bademiya S, Kusumanjali K (2021) Machine learning techniques to predict the price of used cars: predictive analytics in retail business. In: 2021 second international conference on electronics and sustainable communication systems (ICESC), pp 1680–1687. https://doi.org/10.1109/ICESC51422.2021.9532845 8. Varshitha J, Jahnavi K, Lakshmi C (2022) Prediction of used car prices using artificial neural networks and machine learning. In: 2022 international conference on computer communication and informatics (ICCCI), pp 1–4. https://doi.org/10.1109/ICCCI54379.2022.9740817. 9. Amik FR, Lanard A, Ismat A, Momen S (Dec.2021) Application of machine learning techniques to predict the price of pre-owned cars in Bangladesh. Information 12(12):514. https://doi.org/ 10.3390/info12120514 10. Selvaratnam S, Yogarajah B, Jeyamugan T, Ratnarajah N (2021) Feature selection in automobile price prediction: an integrated approach. International research conference on smart computing and systems engineering (SCSE) 2021:106–112. https://doi.org/10.1109/SCSE53 661.2021.9568288 11. Kriswantara B, Sadikin R (2022) Machine learning used car price prediction with random forest regressor model. J Inf Syst Inf Comput 6(1):40–49. https://doi.org/10.52362/jisicom.v6i 1.752 12. Furian N, O’Sullivan M, Walker C, Çela E (2021) A machine learning-based branch and price algorithm for a sampled vehicle routing problem. OR Spectrum, vol 43, no 3, pp 693–732, Jan 2021. https://doi.org/10.1007/s00291-020-00615-8 13. Reddy A, Kamalraj R (2021) Old/Used cars price prediction using machine learning algorithms. IITM J Manag IT 12(1):32–35 14. Jin C (2021) Price prediction of used cars using machine learning. IEEE international conference on emergency science and information technology (ICESIT) 2021:223–230. https://doi. org/10.1109/ICESIT53460.2021.9696839 15. Fathalla A, Salah A, Li K, Francesco V (2020) Deep end-to-end learning for price prediction of second-hand items. Knowl Inf Syst 62(12) , 4541–4568. https://doi.org/10.1007/s1011502001495-8 16. Siva R, Adimoolam M (2022) Linear regression algorithm based price prediction of car and accuracy comparison with support vector machine algorithm. ECS Trans 107(1), 12953. https:/ /doi.org/10.1149/10701.12953ecst 17. Monburinon N, Chertchom P, Kaewkiriya T, Rungpheung S, Buya S, Boonpou P (2018) Prediction of prices for used car by using regression models. In: 2018 5th international conference on business and industrial research (ICBIR), pp 115–119. 
https://doi.org/10.1109/ICBIR.2018. 8391177 18. Ahtesham M, Zulfiqar J (2022) Used car price prediction with pyspark. In: International conference on digital technologies and applications, vol 454, pp 169–179, May. 2022. https://doi.org/ 10.1007/978-3-031-01942-5_17 19. Han S, Qu J, Song J, Liu Z (2022) Second-hand car price prediction based on a mixed-weighted regression model. In: 2022 7th international conference on big data analytics (ICBDA), pp 90–95. https://doi.org/10.1109/ICBDA55095.2022.9760371


20. Mitchell TM (1997) Machine learning. McGraw-Hill 21. Mallik S, Zhao Z (2020) Graph- and rule-based learning algorithms: a comprehensive review of their applications for cancer type classification and prognosis using genomic data. Brief Bioinform 21(2):368–394. https://doi.org/10.1093/bib/bby120 22. Bandyopadhyay S, Mallik S, Mukhopadhyay A (2013) A survey and comparative study of statistical tests for identifying differential expression from microarray data. IEEE/ACM Trans Comput Biol Bioinf 11(1):95–115. https://doi.org/10.1109/TCBB.2013.147 23. Roy A, Banerjee S, Bhatt C, Badr Y, Mallik S (2018) Hybrid group recommendation using modified termite colony algorithm: a context towards big data. J Inf Knowl Manag 17(2):1850019. https://doi.org/10.1142/S0219649218500193 24. Mallik S, Grodstein F, Bennett DA et al (2022) Novel epigenetic clock biomarkers of age-related macular degeneration. Front Med, 16 June 2022. https://doi.org/10.3389/fmed.2022.856853 25. Li A, Xiong S, Li J, Mallik S et al (2022) AngClust: angle feature-based clustering for short time series gene expression profiles. IEEE/ACM Trans Comput Biol Bioinf. https://doi.org/10.1109/TCBB.2022.3192306 26. Mallik S, Seth S, Bhadra T, Zhao Z (2020) A linear regression and deep learning approach for detecting reliable genetic alterations in cancer using DNA methylation and gene expression data. Genes, MDPI 11(8):931. https://doi.org/10.3390/genes11080931

Personalized Recommender System for House Selection Suneeta Mohanty, Shweta Singh, and Prasant Kumar Pattnaik

Abstract Housing is the key to improved health and welfare; hence, a house needs to be selected wisely. In this paper, the TOPSIS method is used to rank various alternative houses as per the user's personalized requirements and demands. Keywords Houses · TOPSIS · AHP · MCDM technique · PIS · NIS

1 Introduction Houses are not only the means of providing shelter but more than that where we spend most qualitative time of our lives. “A safe, settled home is the cornerstone on which individuals and families build a better quality of life, access services they need and gain greater independence” as per Jake Eliot [14]. Now a day’s house selection is a big challenge for the people due to varieties of features and design available that create lots of confusion for the buyers. Buyers get confused because there are so many criteria to get compared with each other while making choices and these comparisons are complex. A recommender system is a method used for information search that helps the user to get items recommendation as per their requirements. A recommender system is implemented using one of the MCDM techniques to provide good and efficient suggestion for the items to the user [1, 2, 7]. Thus, a recommender system is a good support for the user to select the appropriate alternative according to their personalized requirements in less time and without putting much effort. S. Mohanty (B) · S. Singh · P. K. Pattnaik (B) School of Computer Engineering, KIIT Deemed to Be University, Bhubaneswar, OR, India e-mail: [email protected] P. K. Pattnaik e-mail: [email protected] S. Singh e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_9


1.1 Challenges with House Selection

In today's advanced technical world, a number of technologies and facilities provide opportunities for builders to build more advanced and widely varied houses. However, this advancement also creates much more confusion for buyers trying to select the most appropriate house according to their needs and requirements. Since a house is a basic and important need of every person, and one that most people can afford only once in a lifetime, its selection is a major concern. The task is cumbersome because of the number of features, designs, costs, and other facilities provided by housing societies. Therefore, house selection is a big challenge for buyers these days, and it is becoming more complex day by day [3].

1.2 The Need for a Recommender System for House Selection

The challenge that buyers generally face is deciding on the best house according to their requirements. Due to the large number of designs, facilities, and features available, this comparison becomes very tough and is genuinely complex for buyers to carry out on their own. A recommender system is therefore a great support for buyers: it performs these complex comparisons and lists the alternatives according to their needs.

2 Literature Survey

The Multiple-Criteria Decision Making (MCDM) method is used in scenarios where multiple criteria have to be taken into consideration when making a decision. TOPSIS (the Technique for Order of Preference by Similarity to Ideal Solution) and AHP (Analytic Hierarchy Process) are the two most widely used methods for solving problems with multiple criteria and taking a decision as per the user's requirements [13]. Srikrishna et al. [3] analyzed the purchase of automobiles, which is a very difficult task for customers because of the frequent changes in the various technical and operational specifications in the market; TOPSIS is used there as the item selection technique. The method gives a basis for making a decision where the number of options is limited but every option has a large number of attributes [3]. Singh and Pattnaik [2] presented a hierarchical approach that addresses the difficulties faced by users while selecting a mobile phone from the wide variety available in the market. The AHP method is used to calculate a usability score for each phone; this usability value helps the system to rank the items according


to the user's requirements, enabling the user to select the best alternative with less effort [2]. Balioti et al. [11] implemented the TOPSIS method in a fuzzy environment to overcome the problem of spillway selection, because of the uncertainty in its evaluation and because a random selection may result in complex issues; the proposed method helps to take the decision for the optimal spillway under complex situations and circumstances [11]. Sivaramakrishnan et al. [12] developed an online organ donation and blood bank system for hospitals to save patients' lives in emergencies. Information related to organ donation and the blood bank is maintained using MySQL, distance calculation is done using the haversine algorithm, the RVD (Regular Voluntary Donor) algorithm is used to select the appropriate donor, and the TOPSIS method is used to rank the blood banks [12].

3 Recommender System Using an MCDM Method

A recommender system is applicable in situations where multiple criteria are used to compare items for selection; therefore, we use one of the MCDM (multiple-criteria decision making) techniques for item selection [9]. TOPSIS, one of the MCDM techniques, is discussed in Sect. 3.1.

3.1 TOPSIS (The Technique for Order of Preference by Similarity to Ideal Solution)

TOPSIS is an MCDM method designed by Yoon and Hwang [3–5]. The alternatives are ranked based on their geometric distance from the Negative Ideal Solution (NIS) and the Positive Ideal Solution (PIS); the alternative nearest to the PIS and farthest from the NIS is chosen. Nearest to the PIS means maximum benefit with minimum cost, and vice versa for the NIS. The method assumes that the attributes are monotonically increasing or decreasing. It provides a very realistic decision, as it is a compensatory aggregation method that does not give priority to a single major attribute but considers the contribution of each attribute and gives an aggregated score based on them. This method helps the user select the item closest to their requirements and saves them from doing a complex comparison, which ultimately saves time and gives a good recommendation. The following steps are used to select and rank items in the TOPSIS method; a small computational sketch of these steps is given after the figure below.

STEP 1. Create the decision matrix for the selected criteria.
STEP 2. Create the normalization matrix from the decision matrix.
STEP 3. Create the weight matrix for each criterion; the weights can be derived from the AHP method.
STEP 4. Identify the PIS and NIS from the previously designed weight matrix.
STEP 5. Calculate the distance of every alternative from the identified NIS and PIS.
STEP 6. Evaluate the closeness of each alternative to the ideal solution.
STEP 7. Compare and rank all the alternatives.

The diagrammatic representation of the TOPSIS method is shown in Fig. 1.

Fig. 1 Design of criteria hierarchy for alternatives
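The following is a compact numerical sketch of the seven steps (vector normalization, weighting, ideal solutions, Euclidean distances, relative closeness, and ranking); the weights and benefit/cost flags are user-chosen inputs, and AHP-derived weights can be plugged in directly. The example call uses the decision matrix of Table 2 and the weights of Table 4 purely as illustrative inputs.

```python
# Sketch of TOPSIS steps 1-7 with NumPy. `benefit` marks criteria where larger
# values are better (False for cost-type criteria such as price).
import numpy as np

def topsis(decision_matrix, weights, benefit):
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    P = X / np.sqrt((X ** 2).sum(axis=0))             # step 2: vector normalization
    Q = P * w                                         # step 3: weighted matrix
    best = np.where(benefit, Q.max(axis=0), Q.min(axis=0))    # step 4: PIS
    worst = np.where(benefit, Q.min(axis=0), Q.max(axis=0))   #          NIS
    d_best = np.sqrt(((Q - best) ** 2).sum(axis=1))   # step 5: distance to PIS
    d_worst = np.sqrt(((Q - worst) ** 2).sum(axis=1)) #          distance to NIS
    closeness = d_worst / (d_worst + d_best)          # step 6: relative closeness
    ranking = np.argsort(-closeness)                  # step 7: best alternative first
    return closeness, ranking

scores, order = topsis(
    [[8, 9, 9, 7, 10], [9, 7, 8, 9, 8], [6, 8, 6, 6, 9], [8, 10, 7, 8, 7]],
    weights=[0.8, 0.7, 0.9, 0.7, 0.6],
    benefit=np.array([True, True, False, True, True]))  # cost is the only "min is better" criterion
print(scores, order)
```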

4 Methodology

Purchasing a new house is a decision-making process that also reflects user preferences. The user's choice is made among various houses having different features; therefore, it is important to compare their characteristics in a systematic manner to select a good alternative. Some main criteria of a house are design, location, cost, life span, and society amenities. In this paper, we have used the TOPSIS method to rank various alternative houses, which helps the buyer select the most appropriate alternative according to their needs and requirements. Five major attributes of the houses and their values are recorded in the characteristics table [2, 3, 5, 6, 8, 10]. TOPSIS is a simple and effective seven-step process to rank the alternatives, which is discussed below.

Step 1. Creation of the evaluation/decision matrix (M): from the characteristic values in Table 1, which provides m alternatives and n attributes, a decision matrix of size m × n is created (assuming monotonous behavior of all attributes), as shown in Table 2:

M = (x_{ij})_{m×n}    (1)

Step 2. Normalization of the decision matrix (P). Since the different parameters are measured on different scales and dimensions (it is a multi-criteria problem), normalization makes the different criteria uniform and mathematically comparable. The decision matrix in Table 2 is normalized and represented in Table 3.


Table 1 Characteristics value

Characteristics        House1         House2             House3           House4
Location               Near Airport   Near city market   City Outskirts   Near Phoenix mall
Design                 Modern         Average            Good             Excellent
Cost (in Lakhs)        95–101         88–95              72–77            78–83
Life Span (in Years)   70             90                 60               80
Society Amenities      Excellent      Good               Better           Average

Table 2 Decision matrix

Alternative   Location   Design   Cost (in Lakhs)   Life span (in Years)   Society amenities
House1        8          9        9                 7                      10
House2        9          7        8                 9                      8
House3        6          8        6                 6                      9
House4        8          10       7                 8                      7

Table 3 Normalized decision matrix

Alternative   Location   Design   Cost (in Lakhs)   Life span (in Years)   Society amenities
House1        0.511      0.525    0.593             0.462                  0.583
House2        0.575      0.408    0.528             0.593                  0.467
House3        0.383      0.467    0.396             0.396                  0.525
House4        0.511      0.583    0.462             0.528                  0.408

The formula for normalization is

p_{ij} = x_{ij} / sqrt( \sum_{n=1}^{m} x_{nj}^2 ),  where 1 ≤ i ≤ m, 1 ≤ j ≤ n    (2)

P = (p_{ij})_{m×n}    (3)

Step 3: Weighted decision matrix calculation (Q). Since every attribute of an object may not be of equal importance, a weight is given to every attribute based on its relevance or impact. AHP, another method for solving MCDM-related problems, may be used for the identification of these weights [2].


The weights are first normalized, i.e.

w_i = W_i / \sum_{l=1}^{n} W_l    (where W_i is the actual weight identified for an attribute)    (4)

and then the weighted values are calculated by multiplying the normalized weights with the matrix values:

Q_{ij} = p_{ij} · w_j,  where 1 ≤ i ≤ m, 1 ≤ j ≤ n    (5)

Step 4: Obtain the positive/best ideal solution and the negative/worst ideal solution. The best solution (S_b) and the worst solution (S_w) are identified using the following equations:

S_b = {q_{b1}, q_{b2}, q_{b3}, q_{b4}, ..., q_{bn}},  where  q_{bj} = max_i(q_{ij}) if j ∈ J+,  and  q_{bj} = min_i(q_{ij}) if j ∈ J−    (6)

S_w = {q_{w1}, q_{w2}, q_{w3}, q_{w4}, ..., q_{wn}},  where  q_{wj} = min_i(q_{ij}) if j ∈ J+,  and  q_{wj} = max_i(q_{ij}) if j ∈ J−    (7)

where J+ is linked to attributes having an advantageous (benefit) impact and J− to attributes having a non-advantageous (cost) impact. The extreme conditions, i.e. PIS and NIS, are calculated for each criterion using the weight matrix shown in Table 4. Step 5: Calculation of the Euclidean distance from the extreme conditions (Table 5). The Euclidean distance of each alternative from the worst solution is calculated as follows:

Criteria Location

Design

Cost (in Lakhs)

Life span (in Years)

Society amenities

Original weight

0.8

0.7

0.9

0.7

0.6

Normalized weight

0.216

0.189

0.243

0.189

0.162

House1

0.11

0.099

0.144

0.087

0.094

House2

0.124

0.077

0.128

0.112

0.076

House3

0.083

0.088

0.096

0.075

0.085

House4

0.11

0.11

0.112

0.1

0.066

Personalized Recommender System for House Selection Table 5 PIS and NIS values for each criterion

diw =

Criteria

123

Extreme condition’s weighted attributes Positive ideal solution(Sb )

Negative ideal condition(Sw )

Location (max is Better)

0.124

0.083

Design (Max is Better)

0.11

0.077

Cost (in Lakhs) (Min is 0.096 better)

0.144

Life span (in Years) (Max is better)

0.112

0.075

Society amenities (Max 0.094 is better)

0.066

 n

 2 qi j − qwj

; where (1 ≤ i ≤ m)

(8)

The distance from best solution is calculated as follows:   2 n dib = qi j − qbj ; where (1 ≤ i ≤ m)

(9)

j=1

j=1

Step 6. Evaluation of each alternative closeness from ideal solution. This relative position gives us an idea of where our alternative stands compared to PIS. ri =

diw diw + dib

(10)

which signifies how much our alternative is far from NIS or closer to PIS. Relative closeness of each alternative from ideal solution is evaluated using the Table 6 and it is represented in Table 7. Step 7: Ranking of alternatives. Alternatives are ranked on the basis of their relative distance from NIS, which is calculated in the above Table 7. From the above calculation, it has been identified Table 6 Euclidian distance from ideal solution

Alternatives Euclidian distance from Euclidian distance from PIS NIS (Db ) (Dw ) House1

0.056974

0.046271

House2

0.049366

0.058361

House3

0.060125

0.052783

House4

0.037148

0.058881

124 Table 7 Relative closeness to NIS

Table 8 Ranking table

S. Mohanty et al.

Alternatives

Relative closeness from NIS(Dw /(Dw + Db ))

House1

0.45

House2

0.54

House3

0.47

House4

0.61

Alternatives

Ranking

House4

1

House2

2

House3

3

House1

4

that House4 is the most and House1 is the least appropriate alternative considering the user's requirements for the various criteria (Table 8). Ranking order: House4 > House2 > House3 > House1. The ranking order of the alternatives is given in the above table, where House4 is the most preferable and House1 the least preferable according to the user's requirements.

5 Conclusion

In this paper, the TOPSIS method, one of the MCDM techniques, is implemented for the selection of an appropriate house as per the user's requirements and needs. Multiple features of the house that play an important part in the process are selected, and the decision is taken using these criteria for the evaluation of the final value. The ranking of the various alternative houses is done using their characteristics, with the relative closeness of each alternative to the ideal solution calculated for the ranking. The ranking of alternatives in the above calculation indicates that House4 gives the best match and House1 the worst match to the user's requirements. Therefore, House4 is the most recommended house for the user, as derived by the above calculation.

References 1. Viljanac V (2014) Recommender system for mobile applications, June 2014 2. Singh S, Pattnaik PK (2018) Recommender system for mobile phone selection. Int J Comput Sci Mob Appl (IJCSMA) 3. Srikrishna S, Vani S, Reddy S (2014) A new car selection in the market using TOPSIS technique. Int J Eng Res Gen Sci (IJERGS)


4. Yonghong H (2002) Mathematics in practice and theory, the improvement of the application of TOPSIS method to comprehensive evaluation, July 2002 5. Zoran M (2010) Modification of TOPSIS method for solving multicriteria tasks. Yugoslav J Oper Res (YUJOR) 6. Ayu MA, Al-Azab FGM (2010) Web-based multi-criteria decision making using AHP method. Int J Eng Adv Technol (IJEAT) 7. Roy S, Mohanty S, Mohanty S (2018) An efficient hybrid MCDM based approach for car selection in automobile industry. In: 2018 international conference on research in intelligent and computing in engineering (RICE). IEEE, pp 1–5 8. Wang C, Nguyen VT, Duong DH, Do HT (2018) A hybrid fuzzy analytic network process (FANP) and data envelopment analysis (DEA) approach for supplier evaluation and selection in the rice supply chain. Symmetry 10:22 9. Mohanty S, Roy S, Ganguly M, Pattnaik PK (2019) Risk assessment for project construction based on user perspective: an experimental analysis using AHP. Int J Innov Technol Explor Eng (IJITEE) 8(9):2944–2952. ISSN: 2278–3075 10. Biswas P, Giri BC, Pramanik S (2015) TOPSIS method for multi-attribute group decisionmaking under single-valued neutrosophic environment. Nat Comput Appl Forum (NCAF) 11. Balioti V, Tzimopoulos C, Evangelides C (2018) Multi-criteria decision making using topsis method under fuzzy environment. In: Application in spillway selection, EWaS international conference, July 2018 12. Sivaramakrishnan N, Ragavedhni K, Priyasindhu G, Subramaniyaswamy V, Vaishali S (2018) Recommendation system for blood and organ donation for the hospital management system. Int J Pure Appl Math (IJPAM) 13. Singh S, Mohanty S, Pattnaik PK (2022) Agriculture fertilizer recommendation system. In: International conference on smart computing and cyber security: strategic foresight, security challenges and innovation. Springer, Singapore, pp 156–172 14. https://www.theguardian.com/society-professionals/2014/aug/08/housing-problems-affecthealth

Healthcare Informatics

Epileptic Seizure Detection from EEG Signal Using ANN-LSTM Model

Redwanul Islam, Sourav Debnath, Reana Raen, Nayeemul Islam, Torikul Islam Palash, and S. K. Rahat Ali

Abstract Epilepsy is a well-known neurological disease caused by malfunctioning nerve activity in the brain. These malfunctions cause episodes called seizures. Seizures in epileptic patients involve uncontrollable movements, loss of sensation, convulsions, and loss of consciousness, which can result in catastrophic injury and even death. Therefore, a computerized seizure recognition system is important to protect epilepsy patients from the risks of seizures. The main cause of this disorder is still unknown. Although the symptoms associated with seizures can be assessed manually, the accuracy of such a diagnosis depends on the experience of the technician. In this paper, we present an artificial intelligence-based approach in which time–frequency characteristics of EEG signals are used to detect an epileptic seizure. Electroencephalography (EEG) is widely recognized for the diagnosis and evaluation of brain activity and disorders. We preprocessed the EEG signal, converted it into time–frequency information, and then passed it through an ANN-LSTM architecture for training. The proposed model achieves 99.46% classification accuracy with higher overall sensitivity (99.78%) and specificity (99.13%) than other similar methods. The findings of the study indicate that our technique has the potential for clinical application. The effectiveness of this strategy may be further assessed by combining it with other epileptic datasets.

Keywords Epilepsy · Seizure · EEG signal · Artificial neural network (ANN) · Long short-term memory (LSTM)

1 Introduction

Epilepsy can be defined as a neurological brain illness characterized by the occurrence of epileptic seizures triggered by aberrant brain activity. It affects at least 50 million people globally, with 2.4 million people diagnosed with epilepsy each year [1]. Epilepsy affects more than one percent of the world's population; however, it can usually be treated with anticonvulsants or surgery [2]. Older people are the most afflicted, although young people increasingly also suffer from epilepsy [3]. Seizures are the most common symptom of epilepsy and are characterized as a transient change in movement, behavior, sensation, or consciousness that lasts from a few seconds to a few minutes. A seizure occurs when the brain begins to produce signals up to four times stronger than its typical activity, causing a group of brain cells to discharge suddenly and unexpectedly. Because it records the electrical activity of the brain, the EEG is a crucial test for diagnosing epilepsy. It is a painless method that captures electrical impulses from the brain in the form of wavy lines, allowing aberrant spikes or patterns to be examined swiftly so that seizures or other brain problems can be identified [4]. Physicians classify epileptic patients' brain activity into four states based on EEG recordings: the preictal state (the period just before a seizure), the ictal state (the period during a seizure), the postictal state (the period just after a seizure), and the interictal state (the seizure-free period between seizures) [5]. Epilepsy has a significant psychological and social impact, and the unpredictable nature of seizure periods can make it a life-threatening condition. As a result, seizure prediction will significantly improve the quality of life of epilepsy patients in a variety of ways, including providing an early warning before a seizure starts so that appropriate action can be taken, developing new treatments, and establishing new strategies to better understand the nature of the disease [6]. Based on the aforementioned categorization of the epileptic patient's brain processes, the seizure prediction problem can be viewed as a classification challenge between preictal and interictal brain states [7]. The focus of this study is to propose a deep learning-based system that integrates ANN and LSTM networks to identify epileptic seizures automatically. The ANN is employed in the proposed system to extract features, and the LSTM is used to classify epilepsy based on those features. Combining ANN and LSTM layers in this way considerably enhances classification.


2 Literature Review

Research on epilepsy detection is ongoing; some studies that informed our work are highlighted below. Lasefr et al. [8] proposed a new approach to detect epileptic signals, using the discrete wavelet transform for feature extraction and threshold processing to eliminate noisy segments; an SVM classifier was then applied for epileptic seizure detection, achieving 98.1% accuracy. Fani et al. [9] presented an automated epilepsy detection technique that uses the frequencies and energies of different sub-band EEG signals to extract features from the data; these features were fed to an artificial neural network, yielding 94% accuracy. Hadj-Youcef et al. [10] proposed a method to detect epileptic patients during seizure-free periods, using the maximum, minimum, standard deviation, entropy, range, and energy as features with an SVM, and reported 99% accuracy. Liu et al. [11] reported a wavelet-based automated epilepsy detection method in which effective characteristics such as relative amplitude, relative energy, fluctuation index, and coefficient of variation are extracted at selected scales and supplied to an SVM for training and prediction; they achieved 95.33% accuracy. Gómez-Gil et al. [12] suggested recording EEG signals in both the time and the frequency domain and applying a Chebyshev filter to preprocess the signal. Wavelet analysis was then used to decompose the filtered signal into five sub-bands in both domains, and they received 98.1% accuracy with an SVM classifier.

3 Methodology

This section explains the proposed work's methodology, which comprises data gathering, data description, dataset preparation, machine learning classifier implementation, prediction, and performance analysis. Figure 1 depicts the overall workflow.

Fig. 1 Schematic working process


3.1 Dataset Collection and Description

This work used the well-known and publicly available epilepsy dataset from Bonn University, Germany [13]. There are five folders in this collection, each with 100 files, and each file represents one subject. Each file in the collection is a 23.6 s recording of brain activity. The EEG data were sampled at 173.61 Hz, and all of the data were run through a 40 Hz low-pass filter. Table 1 presents a description of the EEG dataset. Typical EEG signals for the epileptic and non-epileptic conditions are shown in Fig. 2.

Table 1 Description of EEG dataset

Dataset   Description
Class A   Recording of the EEG data during eyes open
Class B   Recording of the EEG data during eyes closed
Class C   EEG data recorded from the healthy brain region, although a tumor was present
Class D   EEG data recorded from the tumor region
Class E   Recording of the EEG data during seizure activity

Fig. 2 EEG data for a, b epilepsy and c, d not epilepsy


3.2 Data Preprocessing

Each EEG time series consists of 4097 data points, where each data point represents the EEG value at a distinct time. There are 500 subjects in total, each with 4097 data points collected over 23.6 s. Every recording was divided into 23 chunks of 178 data points, so each chunk covers roughly one second of signal. We therefore have 23 × 500 = 11,500 pieces of data, each with 178 data points, and the last column contains the data label. Figure 2 visualizes some sample signals. Subjects in classes A, B, C, and D do not have seizure activity; seizures affect only class E. Chunks from classes A, B, C, and D are labelled 0, indicating that the individual does not have epilepsy, and chunks from class E are labelled 1, indicating epilepsy. This creates an imbalance in the dataset, and an oversampling method was used to eliminate it.
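The chunking and labelling step described above can be sketched as follows. The code assumes the 500 recordings are already loaded into an array `signals` of shape (500, 4097) with their set labels in `classes`, and it uses simple random duplication as the oversampling method; both of these are assumptions rather than details stated in the paper.

```python
# Sketch of chunking the Bonn recordings into 178-sample segments and
# balancing the two classes by random duplication of minority-class chunks.
import numpy as np

def make_chunks(signals, classes, chunk_len=178, n_chunks=23):
    X, y = [], []
    for sig, cls in zip(signals, classes):
        for k in range(n_chunks):                      # 23 x 178 = 4094 of the 4097 samples
            X.append(sig[k * chunk_len:(k + 1) * chunk_len])
            y.append(1 if cls == 'E' else 0)           # only class E contains seizures
    return np.asarray(X), np.asarray(y)

def oversample(X, y, rng=np.random.default_rng(0)):
    # Duplicate randomly drawn minority-class chunks until both classes match
    idx_min = np.flatnonzero(y == 1)
    idx_maj = np.flatnonzero(y == 0)
    extra = rng.choice(idx_min, size=len(idx_maj) - len(idx_min), replace=True)
    keep = np.concatenate([idx_maj, idx_min, extra])
    return X[keep], y[keep]
```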

3.3 Artificial Neural Network (ANN)

Artificial neural networks (ANNs) are soft computing tools that can recognize patterns and anticipate outcomes. An ANN defines a mathematical model for simulating a biological network: it is a collection of artificial neurons that can take an input, change their internal state in reaction to it, and calculate an output depending on the input and the internal state. The weights of these artificial neurons can be modified through a learning process [14]. Figure 3 depicts a general neural network model that uses the output of the previous layer as input to the current layer, calculating the values of the output layer from the input layer in a sequential manner.

Fig. 3 Illustration of ANN architecture


The ANN follows the formulation

$$ y = f\left(\sum_{j} f\left(\sum_{i} x_i w_{ij} + b_j\right) w_{jk} + b_k\right) \qquad (1) $$

The input variable x_i is multiplied by the weight w_ij and summed with the bias b_j; f(·) is the activation function; the output of this layer is the input of the next layer; and the result y is the forecast value, as shown in Eq. (1).
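As an illustration, Eq. (1) can be written directly in NumPy for a single hidden layer; the sigmoid activation, the layer sizes and the random weights below are arbitrary choices for this sketch, not values from the proposed model.

```python
# Eq. (1) as a two-layer forward pass in NumPy (illustrative sizes and weights).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ann_forward(x, W_ij, b_j, W_jk, b_k, f=sigmoid):
    hidden = f(x @ W_ij + b_j)        # inner sum: f(sum_i x_i w_ij + b_j)
    return f(hidden @ W_jk + b_k)     # outer sum: y = f(sum_j hidden_j w_jk + b_k)

x = np.random.rand(178)                       # one 178-sample EEG chunk
W_ij, b_j = np.random.randn(178, 16), np.zeros(16)
W_jk, b_k = np.random.randn(16, 1), np.zeros(1)
y = ann_forward(x, W_ij, b_j, W_jk, b_k)      # forecast value in (0, 1)
```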

3.4 Long Short-Term Memory (LSTM)

The LSTM architecture was designed primarily to solve the long-term dependency issue that recurrent neural networks (RNNs) confront. LSTMs differ from standard feedforward neural networks in that they feature feedback connections. This property allows LSTMs to process whole data sequences without considering each point in the series individually, instead storing important information about earlier data points in the sequence to contribute to the processing of incoming data points. As a result, the LSTM is excellent at processing data sequences such as text, audio, and general time series [15]. Figure 4 depicts the structure of the LSTM model. Equations (2)–(6) describe an LSTM unit:

$$ f_t = \sigma(W_{fx} x_t + W_{fh} h_{t-1} + b_f) \qquad (2) $$

$$ i_t = \sigma(W_{ix} x_t + W_{ih} h_{t-1} + b_i) \qquad (3) $$

$$ O_t = \sigma(W_{ox} x_t + W_{oh} h_{t-1} + b_o) \qquad (4) $$

$$ C_t = f_t \odot C_{t-1} + i_t \odot \tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c) \qquad (5) $$

$$ h_t = O_t \odot \tanh(C_t) \qquad (6) $$

As Fig. 4 illustrates, the LSTM includes a memory cell (C_t) and three gates: an input gate (i_t), a forget gate (f_t), and an output gate (O_t). The initial values in Eqs. (2)–(6) are C_0 = 0 and h_0 = 0, and the operator ⊙ signifies the element-wise product (i.e., the Hadamard product). At time t, x_t represents the input vector and h_t the hidden state vector, also known as the LSTM unit's output vector. The weight matrices W and bias parameters b must be learned during training. σ(·) is the sigmoid function, while tanh(·) is the hyperbolic tangent function.
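A single LSTM time step following Eqs. (2)–(6) can be sketched as below; the dimensions and the random weights are purely illustrative and are not learned here.

```python
# One LSTM time step implementing Eqs. (2)-(6) with NumPy (illustrative shapes).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    f_t = sigmoid(W['fx'] @ x_t + W['fh'] @ h_prev + b['f'])        # Eq. (2) forget gate
    i_t = sigmoid(W['ix'] @ x_t + W['ih'] @ h_prev + b['i'])        # Eq. (3) input gate
    o_t = sigmoid(W['ox'] @ x_t + W['oh'] @ h_prev + b['o'])        # Eq. (4) output gate
    c_t = f_t * c_prev + i_t * np.tanh(W['cx'] @ x_t + W['ch'] @ h_prev + b['c'])  # Eq. (5)
    h_t = o_t * np.tanh(c_t)                                        # Eq. (6)
    return h_t, c_t

n_in, n_hid = 8, 4
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((n_hid, n_in if k.endswith('x') else n_hid))
     for k in ('fx', 'fh', 'ix', 'ih', 'ox', 'oh', 'cx', 'ch')}
b = {k: np.zeros(n_hid) for k in ('f', 'i', 'o', 'c')}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.standard_normal((10, n_in)):      # run a short random sequence
    h, c = lstm_step(x_t, h, c, W, b)
```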


Fig. 4 Illustration of LSTM architecture

3.5 Proposed ANN-LSTM Model

In this work, we built a hybrid model that automatically detects epilepsy based on EEG data. The architecture is created by integrating the ANN and the LSTM network, where the ANN layers extract salient features from the input and the LSTM acts as a classifier. Table 2 shows the proposed integrated epilepsy detection network. The network consists of two convolutional layers, two pooling layers, three dropout layers, two LSTM layers, a flatten layer, a dense layer, and an output layer with a sigmoid activation function. Hyperparameter optimization, or tuning, is the process of determining a set of ideal parameters to improve the performance of a learning system [16]. The Adam optimizer was employed in this investigation with learning rate = 0.001, beta_1 = 0.9, and beta_2 = 0.999. The loss function was binary cross-entropy, and the model metric was accuracy. Using a trial-and-error approach, an appropriate batch size and number of epochs were determined; the proposed model was trained using batch size = 128 and epochs = 50.

Table 2 The full summary of ANN-LSTM architecture

Layer (type)                      Output shape
conv1d_2 (Conv1D)                 (None, 178, 256)
max_pooling1d_2 (MaxPooling1D)    (None, 89, 256)
dropout_2 (Dropout)               (None, 89, 256)
conv1d_3 (Conv1D)                 (None, 89, 128)
max_pooling1d_3 (MaxPooling1D)    (None, 44, 128)
dropout_3 (Dropout)               (None, 44, 128)
lstm_2 (LSTM)                     (None, 44, 64)
lstm_3 (LSTM)                     (None, 32)
flatten_2 (Flatten)               (None, 32)
dense_4 (Dense)                   (None, 250)
dropout_4 (Dropout)               (None, 250)
dense_5 (Dense)                   (None, 1)
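A Keras sketch that reproduces the layer stack of Table 2 and the training settings above is given below; the kernel sizes and dropout rates are not reported in the paper, so the values used here are assumptions.

```python
# Keras sketch of the ANN-LSTM stack in Table 2 (kernel sizes and dropout
# rates are assumed, since they are not stated in the text).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(178, 1)),                       # one 178-sample EEG chunk
    layers.Conv1D(256, 3, padding='same', activation='relu'),
    layers.MaxPooling1D(2),
    layers.Dropout(0.3),
    layers.Conv1D(128, 3, padding='same', activation='relu'),
    layers.MaxPooling1D(2),
    layers.Dropout(0.3),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Flatten(),
    layers.Dense(250, activation='relu'),
    layers.Dropout(0.3),
    layers.Dense(1, activation='sigmoid'),              # epileptic vs non-epileptic
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
    loss='binary_crossentropy',
    metrics=['accuracy'],
)
# model.fit(X_train, y_train, batch_size=128, epochs=50, validation_split=0.2)
```

The resulting output shapes match Table 2 column for column, which is why `padding='same'` is used in both Conv1D layers.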

3.6 Assessment Metrics

The suggested method's competency was assessed using accuracy, precision, sensitivity, specificity, and f-1 score [16]. Accuracy is the number of correctly classified instances divided by the total number of test cases. Precision is the proportion of instances predicted as positive that are genuinely positive. Specificity measures the proportion of correctly identified negative data samples, whereas sensitivity measures the proportion of correctly discriminated positive data samples. The f-1 score is the harmonic mean of precision and recall, taking both into account [17–19]. Equations (7)–(11) are used to compute accuracy, precision, sensitivity, specificity, and f-1 score, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively:

$$ \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (7) $$

$$ \text{Precision} = \frac{TP}{TP + FP} \qquad (8) $$

$$ \text{Sensitivity} = \frac{TP}{TP + FN} \qquad (9) $$

$$ \text{Specificity} = \frac{TN}{TN + FP} \qquad (10) $$

$$ \text{f-1 score} = 2 \times \frac{\text{Precision} \times \text{Sensitivity}}{\text{Precision} + \text{Sensitivity}} \qquad (11) $$
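For reference, Eqs. (7)–(11) can be computed directly from confusion-matrix counts; the counts in the example call below are placeholders, not the results reported later in the paper.

```python
# Eqs. (7)-(11) computed from confusion-matrix counts (placeholder numbers).
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)                     # Eq. (7)
    precision = tp / (tp + fp)                                     # Eq. (8)
    sensitivity = tp / (tp + fn)                                   # Eq. (9), i.e. recall
    specificity = tn / (tn + fp)                                   # Eq. (10)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)   # Eq. (11)
    return accuracy, precision, sensitivity, specificity, f1

print(classification_metrics(tp=1830, tn=1830, fp=10, fn=10))
```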

4 Results and Discussions

4.1 Experimental Setup

The dataset was divided into eighty percent for training and twenty percent for testing in the experiment. A fivefold cross-validation procedure was used to obtain the findings. The research was carried out in a Python environment using the Keras package and the TensorFlow 2.5 module. The analysis was performed on an Intel® Core™ i7-9750H CPU @ 2.60 GHz with a 4 GB NVIDIA® GeForce® GTX 1650 GPU and 8 GB of RAM.

4.2 Result Analysis

The dataset was applied to the ANN, LSTM, and combined ANN-LSTM models. The accuracy of the three distinct models is displayed in Fig. 5. This result shows that the combined ANN-LSTM model outperforms the individual ANN and LSTM models in terms of accuracy. The confusion matrices of the test phase of the competing ANN, LSTM, and suggested ANN-LSTM architectures for epileptic illness classification are shown in Fig. 6. The ANN and LSTM architectures misclassified 141 and 194 of the 3680 instances, respectively. Meanwhile, the suggested ANN-LSTM architecture misclassified just 20 instances. The suggested ANN-LSTM network outperforms the ANN and LSTM networks in terms of true positive and true negative values, as well as false negative and false positive values. As a result, the proposed approach can accurately diagnose epileptic seizure episodes based on EEG data. The performance evaluation of this study includes accuracy, precision, sensitivity, specificity, and f-1 score as the standard metrics. In the ANN-LSTM model, the values are 99.14%, 99.46%, 99.78%, and 99.13% for precision, f-1 score, sensitivity, and specificity respectively, which are better than the performance of the ANN and LSTM models presented in Table 3. Furthermore, Fig. 7 visually depicts the progress of the ANN-LSTM classifier in terms of accuracy and loss throughout the training and validation phases. At epoch 50, the training and validation accuracies are 99.7% and 99.5%, respectively. Similarly, the ANN-LSTM model's training and validation losses are 0.0038 and 0.0072, respectively.

Fig. 5 Comparing the accuracy of the ANN (97.17%), LSTM (94.73%), and ANN-LSTM (99.46%) models


Fig. 6 The confusion matrix of the a ANN, b LSTM, and c ANN-LSTM model

Table 3 Performance of the ANN, LSTM, and ANN-LSTM model

Model      Precision (%)   f-1 Score (%)   Sensitivity (%)   Specificity (%)
ANN        95.79           96.18           96.57             95.76
LSTM       94.03           94.75           95.47             93.99
ANN-LSTM   99.14           99.46           99.78             99.13

We compared the performance and outcomes of our proposed strategy to a number of current methods that were thoroughly investigated in the literature study, as shown in Table 4. Our approach outperforms the current methods in the literature.


Fig. 7 Progress in training and validation of ANN-LSTM model in terms of a accuracy and b loss

Table 4 Compare accuracy of the proposed model with existing models

References                  Model             Accuracy (%)
Lasefr et al. [8]           DWT-ANN           98.1
Fani et al. [9]             ANN               94
Hadj-Youcef et al. [10]     SVM               99
Liu et al. [11]             SVM               95.33
Juarez-Guerra et al. [12]   Feed-Forward NN   93.23
Proposed method             ANN-LSTM          99.46


5 Conclusion

The ANN-LSTM model is used in this work to identify and categorize epileptic seizure states using EEG data. The trained network achieved a classification accuracy of 99.46%. Limited data, similarities across classes, information loss during preprocessing, artifacts, and other factors occasionally contributed to a drop in performance. Future work in this field will evaluate the suggested model's competency using other epileptic seizure datasets. This research may also be developed into a hardware device that alerts epileptic patients to imminent seizures and allows them to avoid potentially risky situations. Automated seizure detection will allow earlier diagnosis, better monitoring, and a reduction in the overall cost of epilepsy treatment. The proposed method can also be used to analyze additional biomedical datasets in order to improve disease diagnosis.

References 1. Acharya UR, Vinitha Sree S, Swapna G et al (2013) Automated EEG analysis of epilepsy: a review. Knowl Based Syst 45:147–165. https://doi.org/10.1016/j.knosys.2013.02.014 2. Veisi I, Pariz N, Karimpour A (2007) Fast and robust detection of epilepsy in noisy EEG signals using permutation entropy. In: 2007 IEEE 7th international symposium on bioinformatics and bioengineering. IEEE. https://doi.org/10.1109/BIBE.2007.4375565 3. World Health Organization (2006) Neurological disorders: public health challenges. World Health Organization, Genève, Switzerland. ISBN: 978-92-4-156336-9 4. Roy AD, Islam MM (2020) Detection of epileptic seizures from wavelet scalogram of EEG signal using transfer learning with AlexNet convolutional neural network. In: 2020 23rd international conference on computer and information technology (ICCIT). IEEE. https://doi.org/ 10.1109/ICCIT51783.2020.9392720 5. Chiang C-Y, Chang N-F, Chen T-C et al (2011) Seizure prediction based on classification of EEG synchronization patterns with on-line retraining and post processing scheme. In: Annual international conference of the IEEE engineering in medicine and biology society, pp 7564– 7569. https://doi.org/10.1109/IEMBS.2011.6091865 6. Fisher RS, Vickrey BG, Gibson P et al (2000) The impact of epilepsy from the patient’s perspective I Descriptions and subjective perceptions. Epilepsy Res 41:39–51. https://doi.org/ 10.1016/s0920-1211(00)00126-1 7. Bishop M, Allen CA (2003) The impact of epilepsy on quality of life: a qualitative analysis. Epilepsy Behav 4:226–233. https://doi.org/10.1016/s1525-5050(03)00111-2 8. Lasefr Z, Ayyalasomayajula SSVNR, Elleithy K (2017) Epilepsy seizure detection using EEG signals. In: 2017 IEEE 8th annual ubiquitous computing, electronics and mobile communication conference (UEMCON). IEEE. https://doi.org/10.1109/UEMCON.2017.8249018 9. Fani M, Azemi G (2011) Automatic epilepsy detection using the instantaneous frequency and sub-band energies of the EEG signals. In: 2011 19th Iranian conference on electrical engineering (ICEE), pp 1–5. https://doi.org/10.1109/WOSSPA.2011.5931447 10. Hadj-Youcef MA, Adnane M, Bousbia-Salah A (2013) Detection of epileptics during seizure free periods. In: 2013 8th international workshop on systems, signal processing and their applications (WoSSPA). IEEE. https://doi.org/10.1109/WoSSPA.2013.6602363 11. Liu Y, Zhou W, Yuan Q, Chen S (2012) Automatic seizure detection using wavelet transform and SVM in long-term intracranial EEG. IEEE Trans Neural Syst Rehabil Eng 20:749–755. https://doi.org/10.1109/TNSRE.2012.2206054


12. Juárez-Guerra E, Alarcon-Aquino V, Gómez-Gil P (2015) Epilepsy seizure detection in EEG signals using wavelet transforms and neural networks. In: Lecture notes in electrical engineering. Springer International Publishing, Cham, pp 261–269. https://doi.org/10.1007/978-3319-06764-3_33 13. Andrzejak RG, Lehnertz K, Mormann F et al (2001) Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state. Phys Rev E Stat Nonlin Soft Matter Phys 64:061907. https://doi.org/10. 1103/PhysRevE.64.061907 14. Agatonovic-Kustrin S, Beresford R (2000) Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J Pharm Biomed Anal 22:717–727. https://doi.org/10.1016/s0731-7085(99)00272-1 15. Yu Y, Si X, Hu C, Zhang J (2019) A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput 31:1235–1270. https://doi.org/10.1162/neco_a_01199 16. Palash TI, Islam R, Basak M, Dutta Roy A (2021) Automatic classification of COVID-19 from chest X-ray image using convolutional neural network. In: 2021 5th international conference on electrical information and communication technology (EICT). IEEE. https://doi.org/10.1109/ EICT54103.2021.9733477 17. Islam R, Debnath S, Palash TI (2021) Predictive analysis for risk of stroke using machine learning techniques. In: 2021 international conference on computer, communication, chemical, materials and electronic engineering (IC4ME2). IEEE. https://doi.org/10.1109/IC4ME253898. 2021.9768524 18. Raen R, Islam MM, Islam R (2022) Diagnosis of retinal diseases by classifying lesions in retinal layers using a modified ResNet architecture. In: 2022 international conference on advancement in electrical and electronic engineering (ICAEEE). IEEE. https://doi.org/10.1109/ICA EEE54957.2022.9836427 19. Ali MH, Afrin S, Islam R, Islam MR (2022) Detection of COVID-19 infected lungs from chest x-ray using wavelet transform, HOG features extraction and SVM. Global J Eng Technol Adv 13:001–011. https://doi.org/10.30574/gjeta.2022.13.2.0086

Cognitive Assessment and Trading Performance Correlations

J. Eduardo Lugo and Jocelyn Faubert

Abstract Neuroeconomics and behavioral finance have provided insight into how cognitive processes and emotions combine to influence financial decisions. In trading decision-making, cognitive assessment, and its possible enhancement through training, should be better understood. In this preliminary validation investigation, we employed NeuroTracker (3D multiple object tracking, or 3D-MOT), a technique extensively used to test and train cognitive processes in performance populations, to investigate whether the metrics on this task relate to trading performance. The findings demonstrate that there are strong relationships between trading metrics and NeuroTracker scores.

Keywords 3D-MOT · NeuroTracker · Trading · Neuroeconomics · Finance

1 Introduction

There is a case to be made that great sportsmen and gamers share certain mental performance-related challenges with traders. For instance, they must process a lot of information at once, which can lead to cognitive overload, and they may experience performance anxiety, which is known to impair cognitive function [1]. According to the attentional control theory, anxiety decreases the effectiveness of the goal-directed attentional system and increases the extent to which processing is influenced by the stimulus-driven attentional system [2, 3]. When people complete many tasks at once, this impairment may get worse. Numerous studies have shown that performing many activities at once limits a person's ability to understand information [4]. According to Kahneman's [5] theory, this form of "limited attention" enables people to divide their cognitive resources across many activities. As a result, the amount of attention given to one activity must be less than the amount of attention given to other tasks.


The knowledge that investors may obtain is the focus of earlier research on attention in finance literature. Merton [6], for instance, examined market equilibrium in a situation where investors only had a vague understanding of the securities market. The effects of attention assignation on financial markets and investor attitudes have since been the subject of several theoretical and observational investigations. Peng [7] demonstrates how investors might focus their limited attention on sources of uncertainty to reduce total portfolio uncertainty. According to Peng and Xiong [8], investors with limited attention will utilize fundamental decision-making concepts such as categorization, and these behaviors will clarify well-documented changes in covariating asset returns. Huberman [9], Huberman and Regev [10], Barber and Odean [11] are compatible with these hypotheses, implying that investors prefer known stocks or that information may not be incorporated in pricing until it piques investors’ attention. While these studies suggest that there is a lack of attention, accurate assessments are scarce since attention and its relationship to financial markets are difficult to define. Corwin, et al. [4] investigated whether the ability of a specialist to sell shares on the New York Stock Exchange (NYSE) is connected to limited attention. The Hypothesis of Limited Attention is founded on the notion that individual experts suffer time and processing constraints that limit their capacity to follow and manage several requests at the same time, particularly during peak business hours [4]. The most intriguing conclusion was the negative association between a specialist’s level of attention to all stocks and the trading operation and the absolute returns of all other stocks in the specialist’s portfolio. The data also show that during times of heightened operation, experts focus their efforts on their most productive stocks, resulting in fewer regular market price movements and higher transaction costs for the remaining assigned equities. As a result of limited attention capacity, the attempt distribution has a direct influence on the liquidity provision of financial securities markets. There is growing evidence that 3D multiple object tracking (3D-MOT) is a powerful and sensitive metric for assessing cognitive-attentional capacity in a variety of populations [12–17]. Tullo et al. [12], for example, studied the resource constraints for dynamic visual attention via growth utilizing a 3D-MOT training interface. According to the findings, perceptual-cognitive functions vary with age, and 3DMOT may be utilized to characterize resource allocation in attentional processes. Flight, independent of where to practice (simulator or jet), is a cognitively hard task, according to the workload evaluation utilizing perceptual-cognitive approaches. When the 3D-MOT scores at rest were compared to the 3D-MOT scores when the evaluation pilots were flying, the 3D-MOT speed thresholds were dramatically reduced (by an average of 97%). When compared to the simulator arrangement, the results showed that both 3D-MOT and physiological performance were more influenced during a real flight in the aircraft. As a result, it is possible that the cognitive burden in live flight is much higher. These findings confirm the assumption that physiological and environmental noise are greater in real-world settings than in laboratory settings [18], and that brain dynamics fluctuate more in real-world settings than in laboratory settings [19].


The foregoing justifications make it obvious that 3D-MOT is a possible instrument for testing and honing cognitive abilities that are important to traders. Correlating 3D-MOT performance with a secondary job, such as trading performance, would be a first step in this approach. This allows traders to be alerted to any cognitive changes brought on by stress, mental tiredness, or other causes before making poor choices that might cost them money. In the current study, 29 traders’ trading metrics and 3D-MOT scores were compared to suggest a novel paradigm for training and testing cognitive function. The findings indicate significant positive or negative associations with trading metrics like total profits or maximum drawdown. This work is organized as follows: the materials and methods utilized in this work are presented in Sect. 2. We describe our sample selection, the equipment, the technique, the variables of interest, and the statistical approach. Section 3 presents the results and Sect. 4 explores our empirical findings and further analysis, while Sect. 5 provides closing thoughts.

2 Materials and Methods

2.1 Participants

NeuroStreet Trading Academy selected 29 traders (all men) aged 35–65 years (mean ± SD = 50 ± 15) for this study. Because this was a retail-trader study, only unlicensed independent traders who trade their own accounts participated. The data for this study span from 26 September 2019 to 4 August 2020. All participants consented to take part in this study and signed a written informed consent form. The study followed the principles of the Helsinki Declaration (last modified, 2004).

2.2 Apparatus The NeuroTracker cloud version of the 3D-MOT training program was used in this study (CogniSens Athletics, Inc., Montreal, Quebec, Canada), where the task is to track a specific number of target spheres out of a total of 8 that are observed on a screen, through an expanded 3D virtual visual field [20, 21]. Almost every trader was using the Ninjatrader Trading Platform (Denver, Colorado, USA). Other platforms, such as TradeStation (Plantation, Florida, USA) or Tradingview, were occasionally utilized by traders, however, the platform they traded from had no influence on this study because traders trade from a variety of platforms. The majority of the trading was done in a simulator mode; however, some was done on actual accounts.

Table 1 Relationship between screen size and sit-down distance from the screen

Screen size (inch)   Distance from screen (cm)
65                   173
60                   160
55                   147
50                   133
45                   120
40                   107
35                   93
30                   80
25                   67
20                   54
15                   40

2.3 Procedure

The study was conducted fully remotely, with participants training on NeuroTracker without supervision. NeuroStreet Trading Academy recommended that all users train either before trading (typically around 8 a.m. EST) or after the market closes (4 p.m. EST), with only one session done per day. We strongly advised participants to become acquainted with the training program setting and stimuli prior to assessment. They were then invited to put on the anaglyph goggles, which allowed them to see the stimuli in 3D. In all training sessions, we instructed traders to sit facing the center of the screen, positioned 44–50 inches above the ground. To accommodate varying heights, we strongly advised the traders to use a chair with a seat-height adjustment mechanism. The distance from the screen should be set according to Table 1, depending on the screen size (diagonal). Traders were instructed to focus their attention on the fixation point, presented as a green dot straight ahead in the middle of the screen. Traders were free to begin trading after completing their 3D-MOT measurement. At the end of each trading day, the traders electronically reported their trading data, indicating whether the data came from a simulator or an actual account, and provided their NeuroTracker ratings.

2.4 Measures

Session 3D-MOT speed thresholds were employed to measure changes in cognitive behavior between traders. Total Net Profit and Max Drawdown were used to analyze trading performance (see Table 2). Trading is an inherently complex endeavor, and differences in trading competence cannot be fully comprehended solely from a single combination of metrics.

Table 2 The analyzed measurements and the units in which they were recorded. m represents meters, s represents seconds, and USD represents US dollars

Measure                        Unit   Description
NeuroTracker speed threshold   m/s    At the end of each NeuroTracker session, which consists of 20 trials, a speed threshold is determined
Total net profit               USD    The profit made by a corporation or person from their investments before taxes are deducted
Max drawdown                   USD    The largest observed loss from a portfolio's peak to trough before a new peak is reached; maximum drawdown is a measure of downside risk over a certain time period

2.5 Statistical Analysis

The association between the 3D-MOT threshold scores and the two trading variables (total net profit and max drawdown) was initially studied using bivariate Pearson correlations. A preliminary analysis of the data showed that all of the variables violated the assumption of normality. Non-parametric Spearman correlations were therefore computed for all of these variables instead.
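A sketch of this analysis with SciPy is shown below; the three arrays are random stand-ins for the 624 per-session measurements, so the printed values are not the study's results.

```python
# Normality check (Shapiro-Wilk) followed by Spearman rank correlations;
# the arrays below are random placeholders for the real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
neurotracker = rng.gamma(2.0, 1.0, size=624)
total_net_profit = rng.normal(3500, 12000, size=624)
max_drawdown = -rng.gamma(1.5, 700, size=624)

for name, values in [('3D-MOT', neurotracker),
                     ('TotalNetProfit', total_net_profit),
                     ('MaxDrawdown', max_drawdown)]:
    w, p = stats.shapiro(values)                    # small p -> reject normality
    print(f'{name}: Shapiro-Wilk p = {p:.4f}')

rho, p = stats.spearmanr(neurotracker, total_net_profit)
print(f'Spearman rho (3D-MOT vs TotalNetProfit) = {rho:.3f}, p = {p:.4f}')
```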

3 Results

3.1 3D-MOT and Trading Scores Correlations

In all, 29 traders and 624 repeated trading observations are used in this research. These 624 observations will be referred to as Base 1 in the following. Table 3 below presents the summary statistics for this dataset (Base 1). Kolmogorov–Smirnov and Shapiro–Wilk tests for normality were performed; the null hypothesis that the data are normally distributed is rejected when the significance value falls below 0.05, which was the case for all variables, so non-parametric statistics were used.

Table 3 The consolidated data for the 29 traders

Score            N     Minimum     1st quartile   Median   3rd quartile   Maximum       Mean       Std. deviation
3D-MOT           624   0.01        0.76           1.19     2.69           5.13          1.70       1.44
TotalNetProfit   624   -630.00     -0.52          156.25   842.37         131,367.64    3500.10    12,683.41
MaxDrawdown      624   -2,329.72   -775           -150     0.00           0.00          -1010.28   2241.41


Table 4 Spearman's correlations results

Score            (vs 3D-MOT)
3D-MOT           Correlation coefficient   1.000
                 Sig. (2-tailed)           -
                 N                         624
TotalNetProfit   Correlation coefficient   0.487**
                 Sig. (2-tailed)           0.000
                 N                         624
MaxDrawdown      Correlation coefficient   -0.323**
                 Sig. (2-tailed)           0.000
                 N                         624

The asterisks denote statistical significance: **p < 0.01 (2-tailed).

0.5, the label of x is predicted as 1. Otherwise, it is predicted as 0. Though we used a number of features of a completely different nature, there is no need to weight individual features based on domain-specific knowledge. This is because such weights are already included in the model as its parameters, and they are learned in a task-driven manner during the training process. In practice, a regularised version of the logistic regression model is adopted for classification. The strength of regularisation is controlled by a regularisation parameter C; smaller C values specify stronger regularisation. The value of C has a direct impact on w, and the magnitude of w in turn determines whether the classification focuses on the boundary region between the classes or not. Therefore, it is necessary to tune this hyper-parameter.
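A minimal scikit-learn sketch of this classifier is given below; the feature matrix is a random stand-in for the engineered CPRD features, and the helper name `fit_and_predict` is ours rather than part of the study.

```python
# L2-regularised logistic regression with a 0.5 decision threshold; toy data
# stands in for the engineered CPRD feature matrix and control/case labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_and_predict(X_train, y_train, X_test, C=0.05):
    # Smaller C means stronger regularisation of the weight vector w
    clf = LogisticRegression(C=C, penalty='l2', max_iter=1000)
    clf.fit(X_train, y_train)
    prob = clf.predict_proba(X_test)[:, 1]     # P(lung cancer | features)
    return (prob > 0.5).astype(int), clf.coef_.ravel()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
labels, weights = fit_and_predict(X[:150], y[:150], X[150:])
```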


As we adopted an under-sampling approach in an ensemble setting for balanced classification, we need to split the training set into a training set for hyper-parameter tuning and a validation set accordingly. As in the testing sets, the validation set also includes 500 controls and 500 cancer cases. This results in ensemble training datasets of 605146 controls and 3477 cancer cases. Similarly, we randomly sample 3477 controls from 605146 controls without replacement and combine these with the 3477 cancer cases, which constitutes a balanced ensemble hyper-parameter tuning set. Note that once the hyper-parameter is determined, we use the original ensemble training sets to re-train individual ensemble classifiers.
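The construction of the balanced ensemble training sets can be sketched as follows, assuming `y` holds the binary control/case labels; the generator name `balanced_subsets` is ours, not from the study.

```python
# Sketch of the ensemble under-sampling scheme: each ensemble member is
# trained on all minority-class (cancer) records plus an equally sized
# random draw of controls.
import numpy as np

def balanced_subsets(y, n_members, rng=None):
    rng = rng or np.random.default_rng(0)
    cases = np.flatnonzero(y == 1)
    controls = np.flatnonzero(y == 0)
    for _ in range(n_members):
        picked = rng.choice(controls, size=len(cases), replace=False)
        yield np.concatenate([cases, picked])       # balanced index set

# Example: indices for 100 ensemble training sets
# for idx in balanced_subsets(y, n_members=100):
#     clf.fit(X[idx], y[idx])
```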

2.5 Feature Selection

Feature selection [14, 15] is primarily a dimensionality reduction technique in which, according to some criterion, the selected features are more important than the others. More importantly, unlike projection-based dimension reduction, feature selection results remain interpretable, which can be helpful in generating insights into the underlying processes of health and disease. Further, supervised feature selection is driven by a task, which can be designed for prognostic or diagnostic purposes. For diagnostic purposes, the selected features are expected to be the most discriminative between control and disease. Logistic regression provides a natural way to perform supervised feature selection: first, because LR is a supervised machine learning model; second, because each component of the model parameter vector w is multiplied with one of the feature variables. Suppose a component becomes vanishingly small as a result of the training process. In that case, the corresponding feature variable is no longer important, as its value has a vanishingly small impact on the response variable. It is also worth noting that the feature variables are usually normalised or standardised, so the weight parameters are comparable across all feature variables. Due to class imbalance, we implemented the classification in an ensemble setting, which results in a large number of ensemble classifiers. Each of these LR classifiers gives us a set of trained weight parameters, and we can plot a histogram of the weight parameters corresponding to a particular feature. By comparing these histograms, the most important feature variables can be identified. However, some of the ensemble classifiers could be poorly trained, and their trained weight parameters could then be meaningless; including them when computing the histograms could hamper feature selection. To avoid this, we pool the classification accuracies of all individual ensemble classifiers, determine a threshold at a particular percentile (for example, the top 20%), and use only the weight parameters generated by the ensemble classifiers whose accuracy lies above the threshold.
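A sketch of this weight-pooling step is given below; it reuses the `balanced_subsets` helper sketched above, and while the top-20% accuracy filter and the C value mirror the description in the text, the function names and the plotting snippet are illustrative assumptions.

```python
# Train one logistic-regression model per balanced subset, keep only members
# whose test accuracy falls in the top 20% of the ensemble, and return their
# weight vectors for per-feature histogram inspection.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ensemble_weights(X, y, X_test, y_test, n_members=100, C=0.05):
    accs, coefs = [], []
    for idx in balanced_subsets(y, n_members):       # helper from the previous sketch
        clf = LogisticRegression(C=C, max_iter=1000).fit(X[idx], y[idx])
        accs.append(clf.score(X_test, y_test))
        coefs.append(clf.coef_.ravel())
    accs, coefs = np.asarray(accs), np.asarray(coefs)
    threshold = np.percentile(accs, 80)              # keep the top 20% of members
    return coefs[accs >= threshold]

# import matplotlib.pyplot as plt
# weights = ensemble_weights(X_train, y_train, X_test, y_test)
# for j, name in enumerate(feature_names):
#     plt.hist(weights[:, j], bins=30, alpha=0.6, label=name)
# plt.legend(); plt.xlabel('weight'); plt.show()
```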


3 Experiment and Results

3.1 Exploratory Data Analysis

We first report the 'missing value' problem in our analysis [27]. As reported earlier, seven feature variables were extracted and engineered for the classification task, and this cohort included 606126 controls and 4477 lung cancer cases. When we further engineered smk_quit_time, an interesting lifestyle factor, the cohort shrank to only 13035 controls and 202 lung cancer cases. This is because the inclusion of a new CPRD variable may turn many existing 'missing value'-free data items into data items with missing values. Therefore, we need to strike a balance between including a sufficiently large number of CPRD variables and retaining a sufficiently large cohort for both the majority and minority classes. Recall that cohort 3 in Table 2 is the one used for the classification task. Compared to cohort 1, cohort 2 includes one more feature variable and cohort 3 two more; conversely, cohort 1 is the largest, while cohort 3 is the smallest. For each of the three cohorts and each feature variable, the false rates of Type 1 error are given, with the threshold for statistical significance set to 0.05. We observed that statistically significant discrimination between control and lung cancer cases is evident in one comorbidity feature variable (copd) and all three smoking-related feature variables. In contrast, the false rate is significantly larger than the threshold for the demographic feature variables as well as for bmi, a lifestyle variable. More importantly, this observation is valid across all three cohorts. Further, the observation is also confirmed by the contingency table analysis and visualisation. In each panel of Fig. 2, there are two bars (either horizontal or vertical): the left or top bar is associated with controls, and the right or bottom one with cancer cases. Each bar shows, in coded colours, how the values of its corresponding feature variable vary under the corresponding condition. Here, we can observe that the colour patterns are clearly different between the two conditions for all four panels.

Table 2 A list of feature variables with statistically significant discrimination power between control and lung cancer cases (false rate of Type 1 error; – indicates that the variable is not included in that cohort)

Feature variable     cohort 1   cohort 2   cohort 3
personalhcancer      0.39       0.74       1.00
familyhcacer         0.94       0.97       0.96
bmi                  0.66       0.85       0.89
copd                 0.00       0.00       0.00
smk_duration         0.00       0.00       0.00
status_currentsmk    –          0.00       0.00
smk_intensity        –          –          0.01


Fig. 2 Visualisation of a contingency table for contrasting control (0) and lung cancer case (1) using the following feature variable: Top-left: COPD—left vertical bar for controls and right for cases; Red for yes COPD and Green for no COPD; Top-right: smk_current_status—left vertical bar for controls and right for cases; Yellow for yes smk_current_status and Green for no smk_current_status; Bottom-left: smk_duration—top horizontal bar for controls and bottom for cases, Bottom-right: smk_intensity—top horizontal bar for controls and bottom for cases

3.2 Classification and Feature Selection Analysis

We first report the hyper-parameter tuning results. Figure 3 shows that the optimal C value is obtained at C = 0.05, where the mean accuracy is about 74%. We repeated the random split between training and testing datasets independently 100 times, so we obtained 100 accuracies at each C and computed their means and statistical errors. Next, we report the classification results. The left panel of Fig. 4 shows the median estimate of testing accuracy, which is about 74.5%; the accuracy varies from 70 to 78%, with this variability estimated over the 100 random splits of training and testing sets. The right panel shows the confusion table, which tells us that the sensitivity and specificity are about 84% and 64%, respectively. The statistical deviation of sensitivity and specificity is about 0.1%. Our results are comparable with those reported in [54], which reported 82% sensitivity and 62% specificity. Both studies used logistic regression as the classifier, and similar feature variables were chosen by both studies for the predictive task. The major difference between them is that the data used by [54] were generated by (1) the National Lung Screening


Fig. 3 Mean classification accuracy as a function of regularisation parameter C. Error bar indicates the statistical errors at each C

Trial and (2) PLCO Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial. In contrast, we used CPRD data. Figure 5 showed that only copd and status_currentsmk are associated with significantly higher values of weight parameters, which are in the range between 1.2 and 1.4. Further, it is noted that the weight parameters of smk_duration are narrowly distributed over its peak at 0.3. When compared to personalhcancer, familyhcacer and bmi, smk_duration is far away from zero weight with statistical significance. In summary, comorbidity variable copd and the two smoking-related variables (smk_duration and status_currentsmk) are the three most important features with respect to the classification task. In traditional supervised feature selection, there exists a so-called exhaustive approach [19]. That is, all possible combinations of feature variables are evaluated using a classification model. The best feature set is the one with the highest testing accuracy. Clearly, this is a computationally very expensive approach. Now, we consider only the following four features: copd, smk_duration, status_currentsmk and smk_intensity. Further, we keep the comorbidity variable while choosing two of the three smoking-related features for the classification task. Figure 6 showed that the combination of copd, smk_duration and

Fig. 4 Left: the boxplot of testing accuracy; Right: the confusion table


Fig. 5 Histograms of the weight parameters corresponding to the feature variables used in the classification model. Top: personalhcancer, familyhcacer, bmi (from left to right); Bottom: copd, status_currentsmk, smk_duration (from left to right). That of smk_intensity is skipped over as its corresponding weights are vanishingly small

status_currentsmk yields the best performance, as obtained from the feature selection analysis above. The results of the statistical tests in Table 3 further highlight that copd and status_currentsmk are the two features that are both more important than smk_duration with statistical significance; moreover, the difference between their own importance levels is statistically not significant.
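The controlled exhaustive check just described can be sketched as follows, assuming the training and test sets are pandas DataFrames whose columns carry the feature names; the helper `best_subset` is illustrative and not part of the study's code.

```python
# Score every 'copd + two smoking features' subset with the same classifier
# and keep the subset with the highest test accuracy (the three combinations
# compared in Fig. 6).
from itertools import combinations
from sklearn.linear_model import LogisticRegression

smoking = ['smk_duration', 'status_currentsmk', 'smk_intensity']

def best_subset(X_train, y_train, X_test, y_test):
    results = {}
    for pair in combinations(smoking, 2):
        cols = ['copd', *pair]
        clf = LogisticRegression(C=0.05, max_iter=1000)
        clf.fit(X_train[cols], y_train)
        results[tuple(cols)] = clf.score(X_test[cols], y_test)
    return max(results, key=results.get), results

# best, scores = best_subset(X_train, y_train, X_test, y_test)
```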

4 Discussion and Conclusion In this paper, we first engineered a small set of feature variables using the data extracted from CPRD. A comparative study in the literature guides the choice of CPRD variables for feature engineering. We then validated this dataset using exploratory data analysis, which showed that some feature variables of our choice


Fig. 6 Comparison of testing accuracies between three combinations of copd, smk_duration, status_currentsmk and smk_intensity

Table 3 Results from two-sample one-sided t-tests of the hypothesis that the importance measure of one feature from the first column (e.g. copd) is higher than that of another feature from the first row (e.g. status_smk, which is equivalent to status_currentsmk). That p-value is 0.02, which is smaller than the threshold 0.05; however, since 42 statistical tests were performed in total, the threshold has to be adjusted by the Bonferroni rule and is reduced to 0.001. As a result, it is statistically not significant that feature copd is more important than feature status_currentsmk

                     phcancer   fhcancer   bmi    copd   status_smk   smk_durat   smk_intens
personalhcancer      –          1.00       1.00   1.00   1.00         1.00        0.00
familyhcacer         0.00       –          1.00   1.00   1.00         1.00        0.00
bmi                  0.00       0.00       –      1.00   1.00         1.00        0.00
copd                 0.00       0.00       0.00   –      0.02         0.00        0.00
status_currentsmk    0.00       0.00       0.00   0.98   –            0.00        0.00
smk_duration         0.00       0.00       0.00   1.00   1.00         –           0.00
smk_intensity        1.00       1.00       1.00   1.00   1.00         1.00        –


had shown discrimination power between control and lung cancer cases. After that, we performed a classification task using the logistic regression model. We demonstrated slightly better performance than the comparative study in terms of testing accuracy, sensitivity and specificity; it is worth noting that the data used in that study were collected through two screening trials, while we used CPRD data. Finally, we performed supervised feature selection and identified copd, smk_duration and status_currentsmk as the three most important feature variables. A controlled and exhaustive feature selection approach also confirms this observation. It is worth noting the limitations of this study. Feature status_currentsmk has only two categorical values, current smoker or never smoker, because we dropped all data items with status_currentsmk = former smoker. Feature bmi is also a binary variable, taking either underweight or others as its values; in fact, the 'others' category includes normal weight, overweight and obese, so bmi should really be considered a feature variable with four categorical values. Therefore, improving feature engineering with CPRD variables is crucial to achieving better predictive performance.

Acknowledgements Financial support from Nottingham Trent University through the Medical Technologies and Advanced Materials Strategic Research Theme is acknowledged.

References 1. Adiba FI, Islam T, Kaiser MS, Mahmud M, Rahman MA (2020) Effect of Corpora on Classification of Fake News using Naive Bayes Classifier. Int J Autom Artif Intell Mach Learn 1(1):80–92. https://researchlakejournals.com/index.php/AAIML/article/view/45, number: 1 2. Ahmed S, Hossain M, Nur SB, Shamim Kaiser M, Mahmud M et al (2022) Toward machine learning-based psychological assessment of autism spectrum disorders in school and community. In: Proceedings of TEHI, pp 139–149 3. Ahmed S et al (2021) Artificial intelligence and machine learning for ensuring security in smart cities. In: Data-driven mining, learning and analytics for secured smart cities, pp 23–47 4. Akhund NU et al (2018) Adeptness: Alzheimer’s disease patient management system using pervasive sensors-early prototype and preliminary results. In: Proceedings of brain informatics, pp 413–422 5. Al Banna M, Ghosh T, Taher KA, Kaiser MS, Mahmud M et al (2020) A monitoring system for patients of autism spectrum disorder using artificial intelligence. In: Proceedings of brain informatics, pp 251–262 6. AlArjani A et al (2022) Application of mathematical modeling in prediction of covid-19 transmission dynamics. Arab J Sci Eng 1–24 7. Bhapkar HR et al (2021) Rough sets in covid-19 to predict symptomatic cases. In: COVID-19: prediction, decision-making, and its impacts, pp 57–68 8. Biswas M, Kaiser MS, Mahmud M, Al Mamun S, Hossain M, Rahman MA et al (2021) An xai based autism detection: the context behind the detection. In: Proceedings of brain informatics, pp 448–459 9. Biswas M et al (2021) Accu3rate: a mobile health application rating scale based on user reviews. PloS one 16(12):e0258050 10. Biswas M et al (2021) Indoor navigation support system for patients with neurodegenerative diseases. In: Proceedings of brain informatics, pp 411–422 11. van Meerbeeck JP, Franck C (2021) Deep learning delivers early detection. Transl Lung Cancer Res 3(11):e442

204

Y. Shen et al.


HI Applications for ADHD Children: A Case for Enhanced Visual Representations Using Novel and Adapted Guidelines

Sandesh Sanjeev Phalke and Abhishek Shrivastava
Department of Design, Indian Institute of Technology Guwahati, Guwahati, Assam, India

Abstract An effective representational style in a health informatics (HI) application enhances the attention span of children living with ADHD (ChADHD) in a typical learning environment. Designers of HI applications for ChADHD require relevant visual design guidelines to create effective representations. However, such visual design guidelines are often unavailable or are presented in a format unsuitable for designers. Consequently, representations suffer from imperfections born of either the absence of information or the misinterpretation of existing knowledge. This study sheds light on this scenario by working with two specific groups: designers and remedial experts. We establish gaps in knowledge through a set of interviews and focus group sessions, detail the concerns raised by designers, and identify relevant visual guidelines and/or contradictions in the existing ones, if any. Finally, we conclude by summarizing a mix of existing (but adapted) and novel guidelines to help design appropriate representations in ChADHD applications.

Keywords Health informatics · ADHD · e-content · Design guidelines

1 Introduction

Health Informatics (HI), an interdisciplinary field of scientific inquiry, can positively support children living with attention deficit hyperactivity disorder (ChADHD) [1]. These children constitute a particular case, with a shorter attention span that impacts their ability to learn concepts. Compared with other children of the same age group, ChADHD lag behind their peers and experience academic performance issues under the same roof [2–4]. ADHD negatively affects the academic and social life of ChADHD as a whole. Software applications focusing on learning and play exist for children at large; however, only a few exist for ChADHD with a focus on increasing their attention span. While speculating on the reasons behind this observation, we wonder whether some essential aspects of the design process for such HI applications, as explained by Garrett [5], are being overlooked. In his five-plane model of the elements of user experience, Garrett [5] distinguishes visual design (at plane 5) from three other design planes (at plane 4): interface design, information design, and navigation design. Interface design refers to the design of interface elements responsible for supporting users' interactions with the product's functionality. Information design indicates rational preferences in organizing and presenting information to users for easier comprehension. Navigation design indicates decisions taken to improve users' movement across the information architecture of an application. Visual design, in particular, refers to the representation of text and graphic elements, including images and navigational components. As part of our review, we note that even for designers and developers willing to design relevant HI applications for ChADHD, there is no specific and validated set of visual design guidelines. The present study addresses this gap. We propose a diverse set of existing (but adapted) and novel visual design guidelines for HI applications for ChADHD.

We have organized the rest of the paper as follows. The next section describes the methodology used in the study. It is followed by a review of the existing visual design guidelines. Subsequently, we detail a distinct set of design recommendations derived from expert interviews and surveys. The following section describes the validation exercises conducted with designers, and the section after that reports their outcomes. Finally, we conclude the study with a summary of its contributions and limitations.

2 Methodology

The methodology used in this study is iterative in nature. It includes (a) reviewing the literature, (b) interviewing and surveying, (c) administering the design tasks, and (d) analyzing and reporting (Fig. 1).

2.1 Review of the Literature

The review of the literature was carried out using the PRISMA method. It helped us systematically screen, identify, and analyze the existing visual design guidelines for HI applications for ChADHD. It began with a keyword-based search using Boolean operators across two databases, Scopus and Web of Science. Flexibility was maintained in the inclusion of articles to ensure that all relevant resources were captured.


Fig. 1 Methodology

2.2 Interviewing and Surveying Method

This study used a semi-structured interviewing method. The interview questions were derived from the Government of India's existing e-content design guidelines for children living with autism and learning disabilities. The recorded interviews helped gather the guidelines deemed relevant by the experts. The interview recordings were converted into transcripts. In addition, three research scholars, besides the interviewer, reviewed these transcripts to verify their contents against the recorded data.

Interview Protocol
• Receive formal consent for participation and data recording from the expert over the phone.
• Onboard the expert on the Microsoft Teams platform while following COVID-appropriate behavior.
• Brief the expert on the objectives of the interview. Experts were specifically informed of their privilege to raise queries or seek clarification on any question at will before and during the interview.
• Address questions and queries posed by the expert, if any.


• Debrief the expert and express gratitude for the expert's time.

Further verification of the 34 guidelines (visual design only, identified at the end of the interviews) was conducted using a follow-up survey. A standard intercoder reliability procedure was used to test the reliability of the survey questionnaire with respect to the outcomes of the interview sessions. Three research scholars rated the questionnaire individually, and their ratings were processed to generate an intercoder reliability score. After receiving a satisfactory reliability score, the survey was administered to a larger group of experts using Google Forms. During analysis, the weights assigned to the various responses were normalized, and responses with higher weights were retained as new visual design guidelines.
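To make this selection step concrete, the following minimal Python sketch shows one way normalized weights and a cut-off could be applied to the survey responses. The rating scale, the 0.8 acceptance threshold, and the function and variable names are illustrative assumptions of ours, not the authors' exact procedure.

# Illustrative sketch: normalize expert ratings per candidate guideline and
# keep the ones whose normalized weight clears an assumed acceptance threshold.
def select_guidelines(ratings, threshold=0.8):
    """ratings: {guideline text: [expert scores]} -> {guideline: weight}."""
    max_score = max(s for scores in ratings.values() for s in scores)
    selected = {}
    for guideline, scores in ratings.items():
        weight = sum(scores) / (len(scores) * max_score)  # normalized to [0, 1]
        if weight >= threshold:
            selected[guideline] = round(weight, 2)
    return selected

# Hypothetical 5-point ratings from eight experts; only the first guideline survives.
ratings = {"Use white backgrounds": [5, 5, 4, 5, 4, 5, 5, 4],
           "Use decorative borders": [2, 1, 3, 2, 2, 1, 2, 2]}
print(select_guidelines(ratings))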

2.3 Administering Design Task Method

A new set of guidelines, combining visual design guidelines adapted from the literature and a new set of expert recommendations, was tested with designers. The designers had to redesign certain educational artifacts used by ChADHD following the new guidelines. A total of nine designers voluntarily participated in this task. During the redesign process, one of the authors administered the session and noted the designer's queries and problems on a feedback form. After the redesign session, feedback was collected from the designers about the visual design guidelines given to them for the task. All feedback was categorized into clusters, and a reliability check was carried out with the help of two independent researchers.

Protocols
• Receive formal consent for participation and data recording from the designers in person.
• Brief the designers on the objectives of the design task. Designers were specifically informed of their privilege to raise queries or seek clarification on any guideline at will during the design task, and to use any design software of their comfort and choice.
• Address questions and queries posed by the designers, if any.
• Debrief the designers, collect the digitally redesigned artifacts, and express gratitude for the designers' time.

2.4 Analyzing and Reporting

A systematic analysis of the designers' feedback was conducted by summarizing all feedback in a single Excel sheet. The concerns of the designers were objectively analyzed with the help of three other research scholars. By compiling the designers' concerns and suggestions along with the expert recommendations, a new set of modified visual design guidelines was formed. The compiling process involved five individuals who were either faculty members at the university or research scholars. We then validated these visual design guidelines with another set of five remedial experts at their respective remedial centers or schools.

3 Reviewing the Literature

The systematic review of the literature resulted in the identification of eight relevant articles that state visual design guidelines or frameworks for designing HI applications for ChADHD. Designers and developers are typically the ones who use these visual design guidelines for effective representation in HI applications, while teachers, educators, and guardians use them to design the digital teaching content used in virtual learning sessions and study material. If these visual design guidelines are not well established, the HI applications developed with them can have a counterproductive effect on ChADHD [6–8].

Each article suggests a distinct set of representation guidelines for HI applications, with relative similarities among them. McKnight [6] states 15 guidelines and recommendations based on a survey of literature drawn from different databases. These representation guidelines are proposed on the basis of personal observations from the literature and an adaptation of software-development usability guidelines. The proposed set of recommendations is therefore untested, opinion- and literature-based, and in need of validation with a suitable target user group in a real-time environment. We cannot consider the recommendations by McKnight [6] valid and foolproof; however, we treat them as a primary direction for researchers and designers seeking to identify and propose relevant guidelines for the representation of HI applications used for ChADHD. Sonne [9] states a design framework for developing assistive technology for ChADHD. The outcomes of these studies can be considered reliable and effective, as they are derived from relevant empirical studies on managing ChADHD. However, these recommendations are based on the literature and personal observations and lack formal validation by remedial experts or testing with the target users. They may be effective under controlled circumstances, but to know their actual impact it is necessary to conduct a formal validation of these guidelines in a real-time environment with the target user group.

Further, a few other researchers have stated distinct frameworks and design guidelines for designing HI applications for ChADHD, for example, universal design principles [9], the planning and designing process for tangible tabletops [10, 11], the design of engaging user flows for the self-management of ChADHD [12], and interface design guidelines for effective interaction [5]. All these recommendations fall short of addressing the specific needs of designers or lead to misinterpretation of the guidelines. Moreover, these guidelines are based on the literature or on personal recommendations. A few of the design guidelines are based on empirical studies, but these are applicable for a specific purpose only. An example of this type of research is the work by Sonne [13]; although the study is well validated, it is only applicable to designing assistive wearables for ChADHD. However distinct these studies might be, a systematic study of these guidelines is necessary before identifying and stating new representation guidelines for the design of HI applications. Thus, we identify these relevant existing guidelines and adopt them in our research in order to validate them. A summary of the recommendations from the eight selected articles can be seen in Table 1. All guidelines and suggestions from these articles are application-specific, suggestion-based, vague, and derived from the authors' literature-based personal observations. These guidelines fail to state directions for the representation of HI applications used for ChADHD. However, we adopt the suggestions from these articles and validate them with the designers and experts.

4 Interviewing and Surveying

The interviewing phase involved two remedial experts, and the average duration of the interviews was 75 min. Each interview session was transcribed into a Word document. The intercoder reliability of the first interview transcript was 86.3 and that of the second was 87.6. Figure 2 depicts the two virtual interview sessions. Figure 2 (left) is a screenshot of the interview with the first expert, a child psychologist; this session lasted 71 min. Figure 2 (right) is a screenshot of the interview with the second expert, who has expertise in remedial education and behavioral therapy; this session lasted 79 min. The first interview resulted in 47 recommendations, and the second in 53 recommendations.

The interview outcomes were then converted into a Google form for validation with another set of eight experts. The reliability of the survey questionnaire is as high as 96.6%. All eight experts have different relevant backgrounds and field expertise: two were heads of remedial centers with more than ten years of experience, four were remedial teachers with more than five years of field experience, and two were regular school teachers with more than seven years of knowledge and experience in remedial teaching. Therefore, all the experts had relevant knowledge and experience to answer the survey questions. The similarity of the guideline recommendations given by these experts is about 95%. The recommendations obtained from the experts update the guidelines from the literature and add a few further expert recommendations, forming a new valid set of visual design guidelines. This new set of guidelines is classified across eight distinct categories, which will assist designers in building practical educational HI applications. The reliability of this classification is 95%. Table 2 presents the eight categories and the corresponding recommendations obtained from the experts. This new set of guidelines is expert-validated for use in the design of HI applications for ChADHD. However, it is important to test the usage of these guidelines by the designers.

Table 1 Design recommendations from the literature

1. Designing an interactive learning application for ADHD children [14]
Areas of recommendations and its basis: Recommendations to design an effective user flow and use of child-centered design. Child-centered study, through feedback from the children.
Recommendations:
• Use challenge-based interactions to prevent boredom of the children after using the learning application.
• Use a gamified flow of the application.
• Reward passing each level to maintain the engagement of the children.
• Use an interactive auditory feedback system to report the success and failure of the task.

2. Designing for ADHD: in search of guidelines [6]
Areas of recommendations and its basis: Effective representation guidelines for designing HI applications for ChADHD and guidelines to design an effective user flow for such applications. Recommendations based only upon a survey of the literature and personal observations, closely linked to usability principles.
Recommendations:
• Design materials so the layout is neat and uncluttered.
• Provide a "calm" environment with soothing colors; no decorations or distractions.
• Provide a high-reinforcement environment: reward good behavior and completion of all tasks asked of the children, using positive language.
• Organize items in an orderly way.
• Distinguish important information by putting it in bold or color.
• Use large print.
• Help pupils follow text by writing/highlighting alternate lines in different colors.
• If the pupil needs to work through a series of questions, help them keep their place by using a marker.
• Use brief and clear instructions.
• Allow ample rest periods and exercise breaks.
• Minimize surprises.

3. Guideline development for technological interventions for children and young people to self-manage attention deficit hyperactivity disorder: realist evaluation [12]
Areas of recommendations and its basis: Realistic recommendations for designing the user flow of HI applications for ChADHD. These guidelines are valid only for children of age group 8–11 and for self-management. Interview-based validation with parents, young children and adults with ADHD, and clinicians. The study lacks actual experimentation and validation with field remedial experts and the designers who will design these interactions.
Recommendations:
• The intervention should have a positive, rewarding feedback system.
• Personalizable and adaptable components.
• Easily accessible.
• Integration of self-management.
• Personal and environmental context-based intervention.

4. Designing real-time assistive technologies: a study for children with ADHD [13]
Areas of recommendations and its basis: Explicitly describes and states guidelines to design and develop only wearable HI devices for ChADHD. Empirical study conducted with real-time users.
Recommendations (the assistive tool must):
• Be interesting and intuitive.
• Include rewards.
• Give effective and discreet notifications.
• Provide guidance.
• Provide ubiquitous feedback.

5. An assistive technology design framework for ADHD [9]
Areas of recommendations and its basis: Framework concept to design assistive technologies for the management of ChADHD. Personal recommendations based upon suggestions and observations from a literature survey of empirical studies of different assistive technologies for ChADHD.
Recommendations (assistive technology must):
• Provide structure to facilitate activities.
• Minimize distractions.
• Encourage praise and rewards.
• Integrate and report standardized ADHD measures.

6. TangiPlan: designing an assistive technology to enhance executive functioning among children with ADHD [10]
Areas of recommendations and its basis: States recommendations about the care and methods to be followed while using the HI assistive technology and how they should be enforced through the assistive technology during use. An empirical study conducted in two major phases: phase one, expert interviews and surveys; phase two, paper-prototype testing with real-time users.
Recommendations:
• Facilitate organization, time management, and planning.
• Involve caregivers in the process, but strive to reduce conflict.
• Implement intervention techniques suggested by experts.
• Avoid distraction by mobile phones.
• Avoid intrusion.

7. Guidelines to design tangible tabletop activities for children with attention deficit hyperactivity disorder [11]
Areas of recommendations and its basis: Recommendations for designing tangible tabletop activities for ChADHD. Most of the guidelines are general and applicable to the design of any interactive application oriented to children with ADHD; they can also apply to children with other neurodevelopmental disorders. A mixed-approach study based upon recommendations from the literature and experimentation with children: the guidelines identified from the literature were used to design the tabletops and were tested with the children.
Recommendations:
• The level of difficulty of the game should be adaptable.
• The objective of the game should be clear.
• Keeps the child aware of the time.
• The manipulative possibilities of the tabletop should be potentiated.
• Controlled by caregivers.
• Should promote searching for information and identifying alternatives.
• Positive and encouraging feedback.
• Should enhance selective attention.
• Interesting and motivating.
• Collaborative problem-solving platform.

8. Gamification in the e-learning process of children with attention deficit hyperactivity disorder [15]
Areas of recommendations and its basis: A specific set of recommendations, in the form of methods and a framework, to design e-learning applications for ChADHD. Recommendations are purely based upon the literature.
Recommendations:
• Objectives of the game should be clear and specific.
• Child-centric content and player customization.
• Ease of use, and use of several buttons to make the interaction fun.
• Easy to deploy.


Fig. 2 Interview session with two field experts

This will help us understand the modifications these guidelines need in order to achieve clarity of communication. We do this by administering design tasks, as detailed in the next section.

5 Administering Design Tasks

We noted the feedback of the nine designers involved in the design task on a systematically designed feedback form, shown in Fig. 3. The reliability of the form is 82% when validated with a group of 7 researchers. The author who monitored a specific session noted down all the post-design feedback of the designers, along with the queries the designers raised during the task; almost all queries were resolved immediately. These queries are evidence that the existing guidelines and the guidelines obtained from the remedial experts needed to be more precise, as they created confusion for the designers. This supports our statement that the existing guidelines are often unavailable or present in a format unsuitable for designers.

Further, the designers used different design software during the design process; however, the needs and concerns of all the designers were very similar. It is evident from these concerns that the incomplete guidelines did mislead the designers. For instance, the font type needed to be more precise: the designers had no clear idea of which font type to use, and identifying a font type from the listed attributes wasted much time, creating a stressful situation for the designers. In another instance, the illustrations and color combinations needed to be better defined, and the designers found this challenging. Because of this, two of the designers left the task incomplete; Fig. 4 shows one such instance. From Fig. 4, we can see that a stressful situation caused by incomplete guidelines can lead to incomplete design of the HI application. If caregivers present such applications to children living with ADHD (ChADHD), it will have a counterproductive impact on these children. The designer of the artifact shown in Fig. 5 also had to identify the font and the border while designing; this excess stress, even in the presence of design guidelines, led the designer to give up. If such problems remain unaddressed, they may lead to many noticed and unnoticed problems and challenges in the domain of HI applications.


Table 2 Initial set of novel guidelines

1. Screen layout
• Use a relevant grid system for the representation of the e-content.
• Make the design as clean as possible.
• Highlight major points and quotes.
• Organize in an orderly way.
• Use calm backgrounds.

2. Syllabus/e-content of the redesigned artifact
• The amount of e-content should not be reduced or increased; it should be the same as that in the artifact.
• The e-content can be divided across sections and subsections.

3. Font type
• Should resemble a school child's handwriting.
• Should have the attributes of being clean and readable.
• Use different weights for different body-style representations.

4. Font size
• Use different font sizes to represent different body styles.
• The preferred font size for running text on an A4 page frame is 24 pt.
• In pixels: divide the width (smaller side) of the frame by 24.8 and select the closest integer as the font size.
• In cm: divide the width of the frame by 1.14 and select the closest integer.

5. Frame colors
• A single frame should consist of a maximum of three colors.
• Colors used in design: analogous (adjacent) colors, opposite (complementary) colors, triadic colors, and combinations of colors found in nature.
• Individual choice of tints and hues based on the scenario of the context.
• Use warmer colors for foreground elements.
• Use white or soothing light colors for the background.

6. Illustrations
• Should replicate the object, subject, or context seen in everyday life.
• Use of cartoons is a must.
• Prefer non-human-figure cartoons.

7. Videos, animations, and sounds
• Prevent the use of moving objects.
• Use the minimum number of videos possible.
• The playback speed of moving objects and animations should be 1X.
• Prevent the use of brighter backgrounds in the videos.
• Use sounds with a lower pitch; sounds heard in nature are pleasing for ChADHD.

8. User flow of the HI applications
• The user flow of any HI application should be interactive, in a gamified form.
• A reward-mechanism-based user flow, having mini-games in the form of challenges.
• Have minimum surprises.
• Set levels and stages in accordance with the syllabus and content.
• The games can take numerous forms, such as MCQs, true/false, puzzles, drawing and painting, or any form preferred by the designer for the context.
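The pixel-based font-size rule in guideline 4 can be sanity-checked with a short calculation. The sketch below assumes an A4 frame rendered at 72 dpi (roughly 595 px wide); that rendering choice is our own illustrative assumption, not part of the guideline.

# Illustrative check of the pixel-based font-size rule (guideline 4, Table 2).
def suggested_font_size(frame_width_px):
    """Divide the smaller frame dimension (in pixels) by 24.8 and round."""
    return round(frame_width_px / 24.8)

# An A4 page rendered at 72 dpi is about 595 px wide, which reproduces the
# 24 pt size recommended for running text on an A4 frame.
print(suggested_font_size(595))  # prints 24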


Fig. 3 Feedback form. Left: front side. Right: back side
Fig. 4 Incomplete artifact design

The suggestions from the designers helped us form a new set of guidelines, which was further validated by the experts. The new set combines the guidelines adopted from the literature and a new set of identified guidelines validated by the experts. All nine designers reported their queries and feedback against this form. The profiles of the nine designers can be seen in Table 3.


Fig. 5 Administration of design tasks

Table 3 Profiles of the designers involved in the redesign task (profession; experience; gender)

1. Bachelor of Design student; third-year design student with two years of freelancing experience in visual design; male
2. UX designer; master's degree in design, 5 years of design experience as a UX designer in an e-learning company; female
3. Graphic designer; bachelor's degree in design and 1 year of industry experience; female
4. Bachelor of Design student; third-year design student with one year of freelancing experience; male
5. Bachelor of Design student; third-year design student; male
6. Research scholar in design; master's in architectural design, four years of experience working in visual design; male
7. Research scholar in design; six years of industry design experience; female
8. Research scholar in design; master's in design and co-founder of a startup, 2 years of design experience; male
9. Research scholar in design; 3 years of design teaching experience; female

This study had a reasonably balanced number of male and female participants: five male and four female. Figure 5 shows the design task sessions. The summary of the queries and feedback reported by these designers is divided into the following categories.

Screen Layout
The feedback obtained for the screen layout is as follows:
– Need to define the border parameters for the screen layout, for instance, the thickness and the spacing of the border.
– Justifications for the type of layout chosen for a specific e-content.


– Define the sequence of texts, pictures, tables, notations, and all miscellaneous e-content on the screen.
– The grid and gutter should be defined; use a 4 × 2 grid and a gutter of 20.
– Define the number of words, characters, and the length of sentences and paragraphs to be used in the representation of the HI applications.
– Specify the line spacing; double spacing best fits the attributes of clean, clear, and readable text in HI applications.

Font
Feedback on the font attributes, such as style, type, and size:
– Define a specific font type; the attributes alone are not enough. Suggested font types are Cavolini, Comic Sans, and Mali.
– The font color should be defined; preferably use black.
– The font style should be defined for each body style of the e-content used in HI applications.
– Define the size for different types of HI applications.

Colors
Feedback on the color schemes and combinations provided by the designers during the design task:
– Define specific color schemes for a specific type of e-content in the HI applications.
– Does the maximum of three colors in a single frame include the background color and the font color?
– Context-based color recommendations are necessary, as it is difficult to judge and define the color combinations.

Illustrations and Animations
The feedback for the illustrations is as follows:
– Due to the guidelines' restrictions, it is difficult to find illustrations; a pre-generated library is necessary.
– The type of illustration required (line drawing, 2D, or 3D) must be defined.
– Need details on how to relate illustrations to the text for ChADHD.
– Can different combinations of hues, shades, and tints be used while coloring the illustrations?
– The ratio of animations and pictures to the text and context should be defined.

Sounds and Music
– The sound library should be pre-defined.
– The type of feedback voice should be defined: should it be male or female?


– The tones and prompts should be pre-defined before designing.

User Flow
The designers did not find the guidelines on user flow difficult to use. They were happy with the freedom to design the user flow, on the condition of not breaking the flow of the original artifact.

The nine designers provided all the above recommendations; each recommendation was reported by at least three designers after their tasks. The outcomes show that the guidelines delivered by the experts and the literature needed to be revised: the loss of effectiveness of the HI applications could have been avoided if the guidelines had been more specific.

6 Analyzing and Reporting

Based on the suggestions and feedback of the designers, we prepared an updated version of the guidelines. These guidelines are relevant for use; however, it remains essential to test them, as noted by McKnight [6]. We therefore conducted another survey with five remedial experts to validate these guidelines. All five experts were females teaching at and/or heading remedial centers. The average similarity percentage for each expert recommendation is as high as 96%. The new, validated set of visual design guidelines is presented in Table 4. These guidelines will assist designers in designing and developing effective HI applications for ChADHD. They combine adopted and new visual design guidelines that are well validated by experts and designers. However, the study has certain limitations: it does not provide a specific illustration library, exact color combinations, or the precise types of audio and video to be used in the HI applications. Further, researchers can study, identify, test, and validate visual design guidelines with ChADHD and with children having other disorders and disabilities, such as autism and Fragile X syndrome.

7 Conclusion

This paper contributes relevant visual design guidelines for designing practical HI applications. The proposed guidelines could enhance the attention span of ChADHD. These guidelines have been rigorously iterated and validated with two groups: (a) remedial experts and (b) designers. The proposed guidelines consist of both adapted and novel guidelines. Until now, visual design guidelines were either opinion-based or based on literature surveys, which often resulted in failures to assist designers in creating effective representations in HI applications.


Table 4 The new set of validated guidelines

1. Screen layout
• Standard border layout with no border lines.
• Prefer an A4 page-style format to generate relevance with the children's school books.
• 4 × 4 grid system for the representation of the content.
• Make the design as clean as possible.
• Highlight major points and quotes.
• Organize in an orderly way.
• Use white backgrounds.
• Prevent the use of textures and patterns; keep it simple.
• Use double line spacing.

2. Syllabus/content of the redesigned artifact
• The amount of content should not be reduced or increased; it should be the same as that in the artifact.
• The content can be divided across sections and subsections.
• Simplify complex sentences and jargon wherever possible (specifically done by content writers and designers).
• Short and complete sentences.
• Use less jargon and fewer quotations.
• Use motivational text.

3. Font type
• Use font types like Comic Sans, Mali, and Cavolini.
• Should have the attributes of being clean and readable.
• Use different font styles, such as bold, italic, and underline, to depict different body-style representations.
• Use different weights for different body-style representations.
• Use black font color.

4. Font size
• Use different font sizes to represent different body styles.
• The preferred font size for running text on an A4 page frame is 24 pt.
• In pixels: divide the width (smaller side) of the frame by 24.8 and select the closest integer.
• In cm: divide the width of the frame by 1.14 and select the closest integer.

5. Frame colors
• A single frame should consist of a maximum of three colors, excluding the background color and the font color.
• Colors used in design: analogous (adjacent) colors, opposite (complementary) colors, triadic colors, and combinations of colors found in nature.
• Individual choice of tints and hues based on the scenario of the context.
• Use warmer colors for foreground elements.
• Use light colors in the background.
• Use white color for the background.
• Black color for all fonts.

6. Illustrations
• Should replicate the object, subject, or context seen in everyday life.
• Use of cartoons is a must.
• Prefer non-human-figure cartoons.
• Should be culturally unbiased.

7. Videos, animations, and sounds
• Prevent the use of moving objects.
• Use the minimum number of videos possible.
• The playback speed of moving objects and animations should be 1X.
• Prevent the use of brighter backgrounds in the videos.
• Use sounds with a lower pitch; sounds heard in nature are pleasing for ChADHD.

8. User flow of the HI applications
• The user flow of any HI application should be interactive, in a gamified form.
• A reward-mechanism-based user flow, having mini-games in the form of challenges.
• Have minimum surprises.
• Set levels and stages in accordance with the syllabus and content.
• The games can take numerous forms, such as MCQs, true/false, puzzles, drawing and painting, or any form preferred by the designer for the context.

However, as evident in our exercises involving the designers, these guidelines are now in a form that designers can easily comprehend for meaningful application in their representations. The present study has certain limitations as well. We did not have the opportunity to include ChADHD in our validation exercises. In the future, we will complement our understanding of visual design by carefully reviewing illustrations, audio, and cartoons with ChADHD.

Acknowledgements We sincerely thank all the designers and remedial experts for their contributions to this study. We are further grateful to the Institute's ethical committee for their guidance and ethical approval. Lastly, we thank fellow researchers for their support.

References

1. Nelson R, Staggers N (2016) Health informatics-E-book: an interprofessional approach. Elsevier Health Sciences
2. Goncalves S, Ferreira BEB (2022) Technological and digital convergence, emergency remote education and ADHD students who attend the final years of elementary school. Texto Livre 14
3. Supangan RA, Acosta LAS, Amarado JLS, Blancaflor EB, Samonte MJC (2019) A gamified learning app for children with ADHD. In: Proceedings of the 2nd international conference on image and graphics processing, pp 47–51


4. Jena AK, Devi J (2020) Lockdown area of COVID-19: how does cartoon based e-contents effect on learning performance of Indian elementary school students with ADHD. Online Submission 8(4):189–201
5. Garrett JJ (2006) Customer loyalty and the elements of user experience. Des Manag Rev 17(1):35–39
6. McKnight L (2010) Designing for ADHD in search of guidelines. In: IDC 2010 digital technologies and marginalized youth workshop, vol 30
7. Campbell LN (2020) Differential effects of digital versus print text on reading comprehension and behaviors in students with ADHD. https://doi.org/10.13023/etd.2020.300
8. Ben-Yehudah G, Brann A (2019) Pay attention to digital text: the impact of the media on text comprehension and self-monitoring in higher-education students with ADHD. Res Dev Disabil 89:120–129
9. Sonne T, Marshall P, Obel C, Thomsen PH, Gronbaek K (2016) An assistive technology design framework for ADHD. In: Proceedings of the 28th Australian conference on computer-human interaction, pp 60–70
10. Weisberg O, GalOz A, Berkowitz R, Weiss N, Peretz O, Azoulai S, ..., Zuckerman O (2014) TangiPlan: designing an assistive technology to enhance executive functioning among children with ADHD. In: Proceedings of the 2014 conference on interaction design and children, pp 293–296
11. Cerezo E, Coma T, Blasco-Serrano AC, Bonillo C, Garrido MA, Baldassarri S (2019) Guidelines to design tangible tabletop activities for children with attention deficit hyperactivity disorder. Int J Hum Comput Stud 126:26–43
12. Powell L, Parker J, Harpin V, Mawson S (2019) Guideline development for technological interventions for children and young people to self-manage attention deficit hyperactivity disorder: realist evaluation. J Med Internet Res 21(4):e12831
13. Sonne T, Obel C, Gronbaek K (2015) Designing real time assistive technologies: a study of children with ADHD. In: Proceedings of the annual meeting of the Australian special interest group for computer human interaction, pp 34–38
14. Kusumasari D, Junaedi D, Kaburuan ER (2018) Designing an interactive learning application for ADHD children. In: MATEC web of conferences, vol 197. EDP Sciences, p 16008
15. Putra AS, Warnars HLHS, Abbas BS, Trisetyarso A, Suparta W, Kang CH (2018) Gamification in the e-learning process for children with attention deficit hyperactivity disorder (ADHD). In: 2018 Indonesian association for pattern recognition international conference (INAPR). IEEE, pp 182–185

IoT and Data Analytics

Trimmed-TDL-Based Time-to-Digital Converter for Time-of-Flight Applications Implemented on Cyclone V FPGA

Moisés Arredondo-Velázquez, Lucio Rebolledo-Herrera, Javier Hernandez-Lopez, and Eduardo Moreno-Barbosa
Faculty of Physical and Mathematical Sciences, Meritorious Autonomous University of Puebla, Puebla, Mexico (L. Rebolledo-Herrera also with the Institute of Nuclear Sciences, National Autonomous University of Mexico, CDMX, Mexico)

Abstract This article presents a Time-to-Digital Converter (TDC) architecture based on a Tapped Delay Line (TDL). Unlike the works found in the state of the art, where the TDL has a propagation time equivalent to the period of the system clock, the TDL of the proposed architecture can be shortened to only 70% of the length needed to propagate one clock cycle. To achieve this reduction, the clock signal is propagated through the TDL. An encoder based on transition detectors and ones-counters is employed to determine the clock phase captured in the TDL. The architecture was implemented in a Cyclone V FPGA, and the experimental results showed a precision of 6.06 ps and a resolution of 26.95 ps. Additionally, the proposed design strategy saves hardware resources, allowing the exploration of multichannel measurement systems in low-end devices.

Keywords Field programmable gate array (FPGA) · Tapped delay line (TDL) · Time-to-digital converter (TDC)

This research was supported by Conacyt, PA PIIT-IG 100322 project for the Nuclear Science Institute at the National Autonomous University of Mexico, and by the project "Design, simulation and instrumentation of a beam monitor for a particle accelerator" of the VIEP BUAP.

1 Introduction

Time-to-Digital Converters (TDCs) are instruments used to measure time intervals from digital signals. They play an important role in applications such as Light Detection and Ranging (LiDAR) [8], Positron Emission Tomography (PET) [17], and Particle Physics Experiments (PPE).


Fig. 1 Example of input signal for width measurement

These applications are mainly based on measuring the time of flight of particles generated by a radiation source. In the PET and PPE scenarios, detectors usually require tens or hundreds of readout channels to acquire complete information about the generated radiation. Each measurement channel consists of a TDC (among other elements, such as photosensors, scintillators, and signal-conditioning circuits), and in TDC design, improvements in performance (in terms of resolution, precision, and linearity) are usually pursued by adding hardware resources. This makes commissioning prototype-level systems challenging, since only high-end devices contain enough hardware resources [10]. In this sense, low-resource TDC designs remain a little-explored research area.

An example of an input signal IS to be measured is shown in Fig. 1. The IS can represent incident radiation in a scintillator; thus, the charge can be estimated by measuring its width W. The typical approach (the so-called Nutt method) for width measurement consists of coarse and fine counting [13]. Coarse counting (CC) is based on the fact that the input signal width is longer than the TDC clock period (T) and can therefore be quantified by counting clock cycles (C). Fine counting (also called fine measurement, FM) corresponds to the time between either of the IS edges and the CLK rising edge. According to Fig. 1, two fine measurements must be made, Tn(1) and Tn(2). Therefore, the ISn width Wn is the time to be measured and can be expressed as Wn = ((C + 1) × T) − Tn(2) + Tn(1) when Tn(1) ≥ Tn(2), and Wn = ((C + 1) × T) − Tn(2) + Tn(1) + T when Tn(1) < Tn(2). The component that measures the fine intervals Tn(1) and Tn(2) is commonly referred to as the interpolator.
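As a quick illustration of the width expression above, the short Python sketch below evaluates Wn from the coarse count and the two fine measurements, following the two cases stated in the text; the function and variable names are our own.

def pulse_width(c, t_clk, t1, t2):
    """Nutt-method width from coarse count c, clock period t_clk, and the two
    fine (interpolator) measurements t1 and t2, per the two cases above."""
    w = (c + 1) * t_clk - t2 + t1
    if t1 < t2:  # second case: one extra clock period is added
        w += t_clk
    return w

# Hypothetical example: a 300 MHz clock (T of about 3.33 ns), c = 5 cycles,
# t1 = 1.2 ns, t2 = 2.0 ns, giving a width of roughly 22.5 ns.
print(pulse_width(5, 1e9 / 300e6, 1.2, 2.0))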


Some state-of-the-art TDC designs have been implemented using application-specific integrated circuits (ASICs). These can reach sub-picosecond resolution by employing both digital and analog methods. Nonetheless, implementing a dedicated TDC on an ASIC requires a relatively long development time and demands specialized knowledge. Instead, time measurement has been performed with fully digital strategies on Field Programmable Gate Arrays (FPGAs), which are often considered a promising alternative for systems requiring fast deployment at a reasonable cost because of their flexibility, faster development phase, and lower implementation cost compared with ASICs.

Within fully digital TDC design strategies, there are three main variants. The first is the Vernier Delay Line (VDL). It uses two clock signals of similar frequencies to measure the time interval, and the resolution of Vernier TDCs is determined by the period difference between the two oscillators. VDL-based TDCs are mainly explored in ASIC technology, although there are works that use FPGA devices [21, 22]. This method implies that a higher resolution normally leads to a longer measurement dead time, as more oscillation cycles are needed to measure the time interval [24]. Thus, it may be inadequate for high-speed measurements in systems requiring many measurement channels.

The Multiple-Phase Clock (MPC) method is another alternative; it generates multiple clock signals, each with a different phase. The idea is that N clock channels with different phases sample the signal to be measured. This technique makes it difficult to obtain a high resolution, because the number of phases that can be obtained is usually not greater than 16 when high-frequency clocks are generated. Hence, the best achievable resolution with this method is limited by the Voltage Controlled Oscillator frequency (fVCO) of the PLL, which can deliver a minimum clock phase of (1/fVCO)/8 for a maximum oscillation frequency of around 1.6 GHz [14, 23].

The third strategy is the Tapped Delay Line (TDL), which represents the dominant research area because of the attainable performance in terms of time precision, dead time, and multi-channel capability [19]. For the FM, a TDL-based TDC relies on carry elements (also known as Delay Elements, DEs) with quantifiable propagation times. Overall, they serve as fine gauges through their intrinsic propagation delays [11]; the FM can therefore be estimated from how far the input signal has propagated. Here, the intrinsic average propagation delay of each DE determines the resolution. Commonly, additional components such as a pulse shaper (PS) [19, 20] and post-processing components (composed of an encoder and a calibration stage) are used to enhance this resolution. The typical purpose of the PS is to create either a Pulse whose Width is Equivalent to the FM (PWEFM) [14, 23] or multiple transitions according to the Wave Union (WU) methods [6].

Starting from these TDL-based TDC components, additional configurations have been used to improve TDC performance in terms of resolution, precision, and linearity (a detailed description of these and additional metrics can be found in [2]). Examples of these strategies are the multi-chain [9, 12, 15, 16] and multisampling [5, 7] methods. In a multi-chain TDL, each channel has more than one TDL [14]. The signal generated by the pulse shaper is propagated through multiple TDLs, and each TDL has its corresponding capture module. The captures are encoded and averaged to obtain a final measurement. The work in [3] uses the multi-chain method in combination with a WUA launcher to explore different numbers of parallel TDL arrays (fOUT). Area occupancy and power consumption increase with fOUT, but the gains in resolution and precision saturate around fOUT = 4. The work in [12] presents a parallel multi-chain cross-segmentation method that merges the multiple chains into an equivalent chain, achieving a low temperature coefficient while maintaining high precision. In the multisampling method, several captures of the pulse or signal propagated along the TDL are made; to take each sample, replicas of the clock signal with different phase shifts are created. In the same way, the captures are encoded and averaged to obtain the final measurement. Despite the outstanding results obtained by the multi-chain and multisampling methods regarding temporal precision, the number of channels that can be implemented is reduced in proportion to the degree of parallelism used [1]. For this reason, these architectures may not be economical for large-scale measurement systems.


In state-of-the-art designs, the clock frequency determines the chain (TDL) length. Ideally, the total delay of the TDL should equal one clock period T; nonetheless, in most cases, architectures are designed so that this propagation is longer than the clock period. In this article, a trimmed-TDL-based TDC is presented. The design strategy uses only 70% of the TDL, and it is potentially possible to use only half of it. To achieve this reduction, the clock signal is propagated through the TDL, and the state of CLK is latched on the rising edge of the input signal. To determine the phase of the captured clock signal, a set of transition detectors, ones-counter modules, and a transition-position detection module are used. To address the bubble problem, the global bubble-error correction capacity of the ones-counter modules is leveraged. The maximum RAM frequency of the device used defines the maximum operating frequency of the proposed architecture, which is 300 MHz. The rest of the document is organized as follows: Sect. 2 describes the proposed architecture and each of its fundamental components. Section 3 provides the methodology used to validate the performance of the proposed architecture; the results regarding resolution, precision, linearity, and hardware resources are compared with other state-of-the-art studies. Finally, Sect. 4 concludes this work.

2 Hardware Architecture

Figure 2 shows a representation of the proposed architecture. Unlike the current trend in TDL-based architectures (where the equivalent propagation of the TDL must be equal to or slightly greater than the clock period), we propose shortening the chain of elements, ideally using half of the DEs. This potentially reduces the TDL length by 50% without sacrificing resolution. In principle, the resolution of a TDL-based TDC equals the average propagation time of the DEs in the TDL. Nonetheless, by using a PS it is possible to transform the input signal into a sequence of transitions, and the resolution then increases with the number of transitions. The clock signal (CLK) has two transitions per period (a rising edge and a falling edge), with which it is possible to double the resolution of a TDL.

Fig. 2 Representation of the proposed TDC architecture. Theoretically, a 50% duty-cycle clock would allow the TDL to be cut in half


Fig. 3 Proposed capturing method. According to the position of the IS rising edge, the previous CLK semi-cycle will be captured

work proposes propagating CLK through the TDL. Thus, it is possible to halve the TDL while doubling the resolution due to the two transitions in CLK. In Fig. 2, the elements illustrated with dotted lines are those that are eliminated, and only the output values of the remaining elements are taken into account for encoding. Figure 3 depicts the captured code for the proposed architecture. A free-running clock signal is propagated in the TDL. IS works as a clock signal for the FFDs; thus, when the IS rising edge occurs, the position of CLK is captured. This position is represented by capturing the half clock cycle immediately before the rising edge of IS. In Fig. 3, IS0, IS1, IS2, and ISn represent different IS arrival times. The measured time interval is considered to be 0 when the rising edges of both signals coincide (IS0). In this case, the captured code corresponds to the negative semi-cycle of CLK (shown in blue in the figure). As the IS signal arrives later (90°, 180°, and 225°, for IS1, IS2, and ISn, respectively), the captured code will change as shown in the figure.

2.1 Encoder Figure 4 shows the diagram of the proposed encoder. This configuration is based on m transition detectors (TD), m ones-counter modules (OCM), and a transition position detection module (TPD). The FFD outputs that contain the CLK status are grouped into m segments, each served by a TD and an OCM. Each OCM receives TSZ bits for ones counting (TSZ = 3 bits in the image). In turn, the TD takes the OCM output to determine whether that segment has a transition. For this purpose, the binary code of each segment is compared with a threshold (TH). The TD output will be 1 when the threshold is exceeded and 0 otherwise. In this way, it is possible to obtain a bubble-free code as long as TSZ is greater than the bubble zone (BZ). The TD outputs are fed into the TPD, which has the function of determining the position of the transition. If the TDL were long enough to capture the entire CLK period, it would be sufficient to find either the CLK rising or falling edge to determine its phase. Nevertheless, according to the proposed strategy, both edges are not always present in the captured code. Therefore, the TPD is a LUT that indicates both the transition's position and the transition type. In the case that two transitions are present, the position of the rising edge will be indicated.


Fig. 4 Block diagram for proposed encoder

Based on the position of the transition and its type, the encoding will be obtained based on two functions:

$$
y =
\begin{cases}
(TSZ \times P - 1) + OCM(m-P) + OCM(m-P+1), & \text{case 1}\\
(TSZ \times P - 1) + (2 \times TSZ) - OCM(m-P) - OCM(m-P+1), & \text{case 2}
\end{cases}
\tag{1}
$$

where y is the encoder output, and OCM(j) represents the OCM output at the j-th position. P indicates the CLK rising edge position if it exists (case 1) or the position of the falling edge otherwise (case 2). This position is found implicitly in the encoder since it is represented by the selectors of the multiplexers that modify the operation carried out on the OCMs output.
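As an illustration of how the ones counts, the thresholded transition detection, and the two cases of Eq. (1) fit together, a minimal Python sketch is given below. It is not the authors' RTL implementation: the segment size, the threshold value, the 1-based segment index P, and the helper name encode_sample are assumptions made only for illustration.

```python
def encode_sample(bits, tsz=3, th=1):
    """Illustrative software model of the proposed encoder for one latched TDL snapshot."""
    m = len(bits) // tsz
    # Ones-counter modules (OCM): number of ones per TSZ-bit segment (1-based index).
    ocm = {j + 1: sum(bits[j * tsz:(j + 1) * tsz]) for j in range(m)}
    # Transition detectors (TD): flag segments whose ones count exceeds the threshold TH.
    td = [1 if ocm[j + 1] > th else 0 for j in range(m)]
    # Transition position detection (TPD): first segment where the TD flags change.
    p, rising = None, None
    for j in range(1, m):
        if td[j] != td[j - 1]:
            p, rising = j, td[j] == 1  # rising flank mimics case 1, falling flank case 2
            break
    if p is None:
        return None  # no transition captured in the encoded part of the TDL
    base = tsz * p - 1
    a = ocm.get(m - p, 0) + ocm.get(m - p + 1, 0)
    return base + a if rising else base + 2 * tsz - a

# Example snapshot: a run of ones followed by a mixed segment and zeros.
print(encode_sample([1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]))
```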

3 Experimental Results To test the proposed architecture, the configuration shown in Fig. 5 was used. For test purposes, the captured values in the FFDs and the encoder output are read by the SignalTap Logic Analyzer tool (STP) to be analyzed on a PC. To determine the length of the trimmed TDL, several CLK samples were collected, adjusting the TDL in order to capture a complete clock cycle. The number of ones and zeros was counted to know the duty cycle. The range of ones and zeros was [201, 264] and [310, 332], respectively. This variation is because the rise and fall times in the basic elements of the FPGA are different. Thus, when a signal propagates, the positive pulse shrinks.


Fig. 5 Test setup for proposed TDC performance validation

The mean ranges (232.5 and 321) are taken to establish the TDL length (553) as well as the clock duty cycle (42%). To ensure that only the two aforementioned cases occur in the encoder, at least 58% of the TDL must be encoded. Additionally, the area where the bubbles appear is quantified, resulting in a range of [2, 34] elements when the CLK transition is 0-1 and [2, 32] when it is 1-0. These ranges are added to the percentage of the TDL that is encoded. Thus, 384 elements are selected for the TDC implementation. Tests were performed for different numbers of TD; the configuration with the fewest zero-width bins and the finest resolution was selected. Finally, the encoder is designed in such a way that its output is 10 bits, since the number of effective bins is greater than 512. The Cyclone V FPGA used contains six high-performance PLLs, which can receive a static configuration (from synthesis) or a dynamic one (user logic). A PLL (PLL0) output is used to generate the 300 MHz clock signal and propagate it along the trimmed TDL. An additional PLL (PLL1) is used to generate a periodic signal whose pulses serve as test signals. A simplified representation of the PLLs available on the Cyclone FPGA is shown in Fig. 6. In simple terms, to achieve the desired frequency, duty cycle, and phase in the output signals, mainly the counters N, M, and Ck have to be defined, where N = NH + NL, M = MH + ML, and Ck = CHk + CLk. The output frequency is Foutk = (Fin × M)/(N × Ck), with a duty cycle Dk = CHk/(CHk + CLk). The (MH, ML) and (NH, NL) pairs are selected equal so that the reference frequency FREF has a 50% duty cycle. FREF = Fin/N and must not exceed 325 MHz. Likewise, FVCO = (Fin × M)/N, and frequencies higher than 1.6 GHz are not allowed. Each PLL can generate nine clocks, and a programming module is required to configure each of them dynamically. Using two PLLs allows more control over the


Fig. 6 Phase-Locked Loop (PLL) simplified block diagram

Table 1 PLLs selected configuration summary

Parameter                        | PLL0          | PLL1
Input frequency                  | 50 MHz        | 50 MHz
Operation mode                   | Direct        | Direct
Pre-scale counter (N)            | 1 (bypassed)  | 1 (bypassed)
Feedback counter (M)             | 24            | 30
Post-scale divisor (C = CH + CL) | 4 = 2 + 2     | 250 = 75 + 175
Output frequency (Fout)          | 300 MHz       | 6 MHz
Duty cycle (Dk)                  | 0.5           | 0.3
PLL VCO frequency (FVCO)         | 1.2 GHz       | 1.5 GHz
Phase resolution                 | 104.16 ps     | 83.33 ps
Bandwidth                        | High          | High
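To make the PLL relations quoted above concrete, the following Python sketch (not part of the original design flow; the function name is an assumption) evaluates the output frequency, duty cycle, VCO frequency, and dynamic phase step from a given counter configuration. The printed values reproduce the two configurations summarized in Table 1.

```python
def pll_outputs(fin_hz, M, N, CH, CL):
    """Evaluate the PLL relations used in the text for one output counter Ck."""
    C = CH + CL
    fvco = fin_hz * M / N            # FVCO = (Fin x M) / N
    fout = fvco / C                  # Foutk = (Fin x M) / (N x Ck)
    duty = CH / C                    # Dk = CHk / (CHk + CLk)
    phase_step = 1.0 / (fvco * 8)    # Phk = 1 / (FVCO x 8)
    return fout, duty, fvco, phase_step

configs = {"PLL0": (50e6, 24, 1, 2, 2),        # -> 300 MHz, 0.5, 1.2 GHz, ~104.2 ps
           "PLL1": (50e6, 30, 1, 75, 175)}     # -> 6 MHz,   0.3, 1.5 GHz, ~83.3 ps
for name, cfg in configs.items():
    fout, duty, fvco, ph = pll_outputs(*cfg)
    print(f"{name}: {fout/1e6:.1f} MHz, D={duty:.2f}, "
          f"FVCO={fvco/1e9:.2f} GHz, phase step={ph*1e12:.2f} ps")
```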

attainable phase resolution. The phase Phk that can be added to each output k is Phk = 1/(FVCO × 8). Thus, the higher the output frequency, the lower the phase resolution of the signal. A summary of the parameters selected for each PLL is shown in Table 1. The physical implementation of the proposed architecture is illustrated in Fig. 7. The colors red, green, and yellow are used to highlight the TDL, encoder, and PLL configuration modules, respectively. Also, the clock region used is indicated by a blue box. The additional logic corresponds to STP. For the inference of the TDL, available adders in the Adaptive Logic Modules (ALMs) are used since they contain fast connections in their carry ports. Thus, the clock signal is propagated through these connections. The FFDs associated with the output of the adders are used to latch the TDL status; this allows minimizing the delay from the outputs of the adders to the FFDs since it is the shortest connection possible in the ALMs. This configuration achieves a homogeneous routing and contributes to improving the linearity. The TDL origin was arbitrarily selected and configured with TCL commands. The clock signal is input to the TDL using a regional clock routing, as the clock region provides the lowest skew in a chip quadrant. On the other hand, the test signal is connected to the


Fig. 7 Physical implementation of the Trimmed-TDC on a Cyclone V FPGA

global clock routing. Global routing can receive signals from the general purpose I/O pins or from the pins dedicated to external clocks, which can also be dynamically multiplexed, saving additional resources. To obtain the resolution, the code density test method is used, which consists of entering signals that are asynchronous with respect to CLK into the TDC. Because a single clock governs the PLLs of the device used, it is not possible to obtain completely asynchronous clock signals, so an AFG3101 function generator is used to generate arbitrary signals of 497.839491909 kHz, which are directly connected to the general purpose I/O pins. Figure 8a shows the histogram of 12000 samples encoded in hardware and sent to a PC, and Fig. 8b shows the histogram of the bin widths. The average resolution achieved is 6.06 ps for 553 equivalent bins. This statistically obtained resolution corresponds to the theoretically achievable resolution (1/(300 × 10^6))/553 = 6.02 × 10^−12 s. The differential nonlinearity (DNL) and the integral nonlinearity (INL) can be derived from the bin distribution of the TDC. The DNL is the difference between the current bin and the average resolution, divided by the average resolution (shown in Fig. 9a), while the INL (shown in Fig. 9b) is the accumulation of the DNL values. Both metrics represent the measurement error for each bin position. This error is typically addressed with the use of bin-by-bin calibration. Nevertheless, the implementation of this component is not considered in this work. To evaluate the TDC precision, the configuration shown in Fig. 5 was used. According to the parameters M, N, and C in PLL1, the phase resolution is 83.33 ps. With dynamic phase shifting, we can control the position of IS with respect to CLK. The FM measurement range is 3.3333 ns, which is divided into ten positions. The


Fig. 8 Code Density Test results. a Estimated bin widths. b Derived histogram of bin widths

Fig. 9 Derived a DNL and b INL from the code density test results

initial position is 83.33 ps × 2, and steps of 83.33 ps × 4 are made, so the time intervals used as a test are 0.167, 0.500, 0.833, 1.167, 1.500, 1.833, 2.167, 2.500, 2.833, and 3.167 ns. For each position, 1000 samples are taken. Figure 10a shows the arbitrarily selected interval of 2.167 ns as an example. The average of the measurements is 2.12 ns, representing an error of 0.047 ns. The standard deviation (σ) is used as an indicator of precision. At this point, σ = 23.17 ps. Figure 10b shows the precision for the ten tested time intervals. From these results, the average precision is 26.95 ps. It is worth noting that the nonlinearity is related to chip manufacturing process variations. Besides accuracy, the range of obtained precisions can be improved through a calibration stage compensating for the inhomogeneous propagation delays in the DEs. The mean of each time interval is shown in Fig. 11a. Despite not including a calibration component, the difference between the TDC prediction and the ideal response is insignificant, since the mean square error (MSE) equals 9.4303 × 10^−22. Figure 11b shows the error for every tested time interval.
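The code-density post-processing described above can be summarized in a few lines of Python. This is only an illustrative sketch of the standard procedure (bin widths from the code histogram, DNL as the normalized deviation from the mean bin width, INL as its running sum), not the authors' analysis scripts; variable names and the synthetic data are assumptions.

```python
import numpy as np

def code_density_metrics(codes, n_bins, window_s):
    """Estimate TDC bin widths, DNL, and INL from codes collected with
    uniformly distributed (asynchronous) input events."""
    hist = np.bincount(codes, minlength=n_bins).astype(float)
    bin_width = hist / hist.sum() * window_s      # seconds per bin
    resolution = bin_width[hist > 0].mean()       # average LSB
    dnl = (bin_width - resolution) / resolution   # in LSB
    inl = np.cumsum(dnl)                          # in LSB
    return resolution, dnl, inl

# Synthetic example: 12000 events spread over one 1/300 MHz clock period, 553 bins.
rng = np.random.default_rng(0)
codes = rng.integers(0, 553, size=12000)
res, dnl, inl = code_density_metrics(codes, n_bins=553, window_s=1 / 300e6)
print(f"average resolution ~ {res * 1e12:.2f} ps")
```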


Fig. 10 TDC precision. a Single Shot Precision (SSP) at the 2.167 ns time interval. b Overall Precision (OP)

Fig. 11 Accuracy results. a Mean of each time interval measurement. b Difference between the ideal response and the TDC prediction

The performance results of the proposed architecture and recently reported FPGA-based TDC works are shown in Table 2. For each work, the method used (describing the type of PS, the active bins in the TDL (ABs), the operating frequency, and the encoder type) is displayed, besides the results for resolution, precision, range, linearity, dead time, and Mega Samples per Second (MSPS). For the comparison, TDL-based TDC works were selected. The first metric corresponds to resolution. By intuition, it could be assumed that higher-end FPGA families offer better results. Nonetheless, it is the design strategy, not the device family, that brings the improvement. The lowest resolutions are observed in the works that employ the multi-chain method and those that use DSPs as DEs. Only work [19] uses a single TDL with a PS to generate multiple transitions. In this sense, our work is similar to [19], but two clock transitions are used instead, without using a PS. Thus, the proposed architecture achieves a resolution below the

Table 2 Performance comparison between the proposed architecture and recently reported FPGA-based TDL TDC works, listing for each design the method (PS type, active bins in the TDL, operating frequency, and encoder type) together with resolution, precision (SSP: Single Shot Precision, OP: Overall Precision, MC: Multi-Channel, HP: High Performance), range, linearity, dead time, and MSPS.

A CO2 concentration above 1000 ppm is monitored, which is considered a risk level for COVID-19 infection.


Fig. 6 Data obtained from tests with the system without automated ventilation

Fig. 7 Data obtained from tests with the system and automated ventilation

A significant decrease in CO2 levels is observed in Fig. 7 when the automated ventilation (the system containing the fan) is active: the CO2 concentration in ppm is reduced abruptly and in a shorter period than in the system without fan activation, shown in Fig. 6. The data visualization is displayed on the ThingSpeak panel; it can be seen in Fig. 8 that the LED lights up when CO2 levels greater than 1000 ppm are obtained, an indicator that the air quality is not suitable for the occupants inside the classroom.

8 Audio Notifications The embedded system contains a 3.5 mm female audio jack, to which an audio player module is connected to play audio files in MP3 format. Two


Fig. 8 Indicator lamp for CO2 levels > 1000 ppm

different audio messages are available to indicate the air quality levels and the possible risk of COVID-19 infection, one for 500 < CO2 < 1000 PPM and one for CO2 > 1000 PPM. The latter announces: "Warning! The environment is not ideal for coexistence, carbon dioxide exceeds the risk levels; please evacuate and ventilate the area, thanks."

9 Device to Reduce CO2 Levels When CO2 measurements above the hazard level of 1000 ppm are obtained, the necessary measures have to be taken to reduce CO2 levels: some occupants will have to leave the classroom, and windows and doors will have to be opened to reduce the spread of aerosols that would facilitate the contagion of COVID-19. The literature shows that internal airflow is an alternative for aerosol reduction [3], so a fan is used to help mitigate CO2 levels. This device is driven through an integrated power stage, and the system keeps monitoring the CO2 levels; once they fall back below the threshold, the fan is deactivated automatically. Figure 7 shows the effectiveness of this stage in reducing CO2 levels in the classroom.
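The threshold logic described above can be summarized in a short MicroPython-style sketch. It is only an illustrative outline, not the firmware used in the prototype: the pin numbers, the sampling period, and the read_co2_ppm() helper are hypothetical placeholders for the actual sensor driver.

```python
# Illustrative MicroPython-style sketch (assumed pins and sensor helper).
from machine import Pin
import time

CO2_RISK_PPM = 1000            # risk threshold discussed in the text

fan = Pin(26, Pin.OUT)         # drives the power stage of the fan (assumed pin)
warning_led = Pin(27, Pin.OUT)  # LED indicator for poor air quality (assumed pin)

def read_co2_ppm():
    """Placeholder for the real CO2 sensor driver (hypothetical)."""
    raise NotImplementedError

while True:
    ppm = read_co2_ppm()
    risky = ppm > CO2_RISK_PPM
    fan.value(1 if risky else 0)          # automatic fan activation/deactivation
    warning_led.value(1 if risky else 0)
    time.sleep(15)                        # sampling period (assumed)
```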


Fig. 9 Implemented prototype

The complete system can be seen in Fig. 9: the fan used to reduce CO2 levels more effectively, and the embedded system housing the ESP32 SoC, the sensor, the LEDs, the screens, and the audio player, whose output is routed through two speakers that reproduce the sound notifications according to the CO2 levels obtained in the classroom.

10 Obtained Data Table From the graph observed in Fig. 7, the values obtained by the sensor can be retrieved and analyzed, since this information is exported as tables in .CSV format. The data are stored in the account linked to the ThingSpeak software, where we have access to the dashboard and the data table. Figure 10 shows the table of values obtained; three columns can be observed:
* Date and time of the data, in 24-hour format.
* Identification number of the recorded data.
* Recorded CO2 measurement in ppm.
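As an example of how the exported table can be post-processed, the short Python sketch below loads a ThingSpeak CSV export and flags the readings above the 1000 ppm risk level. The file name and column labels (created_at, field1) are assumptions based on typical ThingSpeak exports, not values confirmed by the text.

```python
import pandas as pd

# Assumed export layout: one timestamp column and one field with the CO2 reading.
df = pd.read_csv("co2_feed.csv", parse_dates=["created_at"])
df = df.rename(columns={"field1": "co2_ppm"})

risky = df[df["co2_ppm"] > 1000]                    # readings above the risk level
share = len(risky) / len(df) * 100
print(f"{len(risky)} of {len(df)} samples ({share:.1f}%) exceeded 1000 ppm")
print(risky[["created_at", "co2_ppm"]].head())
```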


11 Conclusions and Future Work It is essential to mention that our system seeks to be an auxiliary tool to warn about COVID-19 risk conditions; however, CO2 is not by itself a measure that gives certainty of contagion. It helps us take the necessary measures on air quality, since poor air quality increases the risk of contagion [8, 9]. Moreover, this article seeks to be a reference for electronic design and to present experimental results in natural environments. The system in the current stage of development has worked satisfactorily, as it has an Internet connection with a database where the information acquired by the CO2 sensor is stored; in this way, a parameter is obtained to evaluate a possibly higher risk of contagion from COVID-19. This will be advantageous for future works, where the traceability of CO2 levels and their relationship with essential aspects such as climate, weather, number of occupants, etc. can be studied. The integrated system for IoT-based monitoring seeks to address an existing shortcoming in current commercial devices, namely the generation of notifications or triggers to alert end users of rising concentrations of pollutants in the environment. Sensor accuracy and secure data storage should be another primary concern in addressing challenges in the IoT domain. A possible improvement would be the implementation of more efficient CO2 sensors, for example by using emerging technologies such as NDIR (Non-Dispersive Infrared) sensors with greater accuracy and longer life, as well as reducing the need for calibration over extended periods. Open windows and a good heating, ventilation, and air conditioning (HVAC) system are starting points for keeping classrooms safe during the COVID-19 pan-

Fig. 10 Format of the table generated by the measurements obtained


demic. However, programmed ventilation significantly reduces CO2 risk levels, thus reducing the likelihood of contagion in classrooms. In future work, it will be possible to make contagion predictions with the data obtained, through an AI (Artificial Intelligence) system based on the classroom areas, the number of occupants, and the hours of highest class attendance.

References
1. Organización Panamericana de la Salud (2022). https://www.paho.org/es/temas/calidad-aire. Last access June 2022
2. Sultana S (2019) A comparative analysis of air pollution detection technique using image processing, machine learning and deep learning approach. Comp Anal Air Pollut Detect Tech Using Image Process Mach Learn Deep Learn Approach 19(1):26–30
3. Rencken Gerhard K, Rutherford Emma K, Ghanta Nikhilesh, Kongoletos John, Glicksman Leon (2021) Patterns of SARS-CoV-2 aerosol spread in typical classrooms. Build Environ 204:15
4. Saini J, Dutta M, Marques G (2021) Internet of things for indoor air quality monitoring, 1st edn. Springer Nature Switzerland AG
5. O'Keeffe J (2021) Modos de transmisión y diseminación interhumana del virus SARS-CoV-2. Revista de Salud Publica del Paraguay 11(1):87–101
6. Air cleaning technologies for indoor spaces during the COVID-19 pandemic (2022). https://cutt.ly/lX7cKfs. Last access 29 June 2022
7. Nettikadan D, Subodh Raj MS (2018) Smart community monitoring system using ThingSpeak IoT platform. Int J Appl Eng Res 13(17):13402–13408. ISSN 0973-4562
8. Indoor CO2 sensors for COVID-19 risk mitigation: current guidance and limitations (2022). https://ncceh.ca/documents/field-inquiry/indoor-co2-sensors-covid-19-riskmitigation-current-guidance-and. Last access 1 October 2022
9. Peng Z, Jimenez JL (2021) Exhaled CO2 as a COVID-19 infection risk proxy for different indoor environments and activities. Environ Sci Technol Lett 8(5):392–397
10. Particulate Matter (PM) Pollution (2022). https://www.epa.gov/pm-pollution/particulatematter-pm-basics. Last access 1 June 2022

High-Power Analysis for Outage Probability and Average Symbol Error Probability over Non-identical κ-μ Double Shadowed Fading

Puspraj Singh Chauhan, Sandeep Kumar, Ankit Jain, and Raghvendra Singh

Abstract Simpler statistics are always preferred in wireless communication. With this motivation, in this work, we present simpler analytical expressions for the outage probability and average symbol error probability (SEP). In the case of coherent average SEP, expressions are derived for scenarios that experience additive white Gaussian noise and additive Laplacian noise. All the expressions are derived for the scenario where independent paths undergo non-identical fading. Monte Carlo simulations are used to support each of the deduced results.

Keywords Bit error rate · Diversity · Outage probability

P. Singh Chauhan · A. Jain · R. Singh (B): Department of ECE, Pranveer Singh Institute of Technology, Kanpur, India. e-mail: [email protected]
S. Kumar: Central Research Laboratory, BEL, Ghaziabad, India

1 Introduction Quality of Service (QOS) is an essential requirement for any communication system. To maintain it, various measures have been adopted by the research community. Random fluctuations in the received signal strength due to multipath propagation are one of the main sources of QOS degradation. The reason for this is the reduction in the received signal strength due to the constructive and destructive addition of the multipath components arriving at the receiver. Thus, different signal improvement techniques are utilized to overcome the impediments introduced by the wireless channel. Among them, diversity is one, where the transmitter (multiple-input–single-output), the receiver (single-input–multiple-output), or both (multiple-input–multiple-output) are equipped with multiple antennas. Various efforts have been put forward to investigate the system performance with diversity reception over a wide variety of fading channels [1–5]. In [1, 2], authors


investigate the physical layer statistics over the sum of generalized-K fading models in which main and eavesdropper channel experienced independent and identically distributed (i.i.d.) fading [1] and for [2] primary and secondary user has single antenna but destination and eavesdropper contains multiple antennas. Subsequently, in [3–5], the analysis over the sum of independent random variables is performed over α-μ, ημ, κ-μ, Fox’s H, and Fisher–Snedecor F fading channels. In addition, there are other distributions for which the exact analysis with diversity is far more complicated, as those representations have summation terms such as mixture of Gamma, mixture of Gaussian, and mixture of inverse Gaussian models. There are other models where the sum of fading random variables is given in approximated form [6], i.e., by moment of matching (MoM) methods. The MoM method typically results in a good fit in the sum probability density function (PDF) body and right tail, but it fits poorly in the left tail (i.e., close to zero). It turns out that the high-signal-to-noise-ratio (SNR) performance of a communications system using that channel is mostly controlled by the left tail of the channel distribution. Additionally, the high-SNR regime perfectly captures how energy-efficient a system is in establishing communication. As a result, from an engineering standpoint, it is significantly more crucial to establish a good fit in the left tail of the sum distribution than in the body or the right tail. In addition, the complex PDF and other performance metric expressions that were retrieved could be limited from practical perspective because of the potential for instability when the multivariate H function is assessed as a numerical integration [3]. As a result, it is frequently wished to derive simpler asymptotic expansions of the relevant quantities. Then again, due to how easily they may be calculated, asymptotic expansions provide a measure of the amount of interest rate of change in relation to the SNR over performance metrics. In this context, asymptotic performance analysis has been carried out over multiple fading channels both with diversity and without diversity [3, 6–13]. The characteristics of the wireless channel, in general, follow non-identical paths, thus it is of great importance to address the system performance assuming non-identical distributed paths. A more recent study has suggested more generalized double shadowing multipath fading models. Furthermore, it has been demonstrated that these models (Rician double shadowed, κ-μ double shadowed), in a variety of communication settings, offer a superb fit with the field measurement data [13–15]. When varying shadowing levels affect the received signal and direct signal propagation, this model can be described. A second cycle of shadowing is brought on by moving objects close to the transmitter (Tx) or receiver (Rx). In [13], authors have proposed a novel methodology to access asymptotic performance over κ-μ double shadowed fading models. However, the work addresses only independent and identically distributed multiple paths, and independent and non-identical distributed (i.n.i.d.) is still untouched. Also, the analysis presented is with additive white Gaussian noise only. This motivates us to explore the asymptotic system performance over i.n.i.d. double shadowed κ-μ fading. In this regard, we first derive the origin PDF for the single-input–single-output (SISO) model. 
Later, we incorporate multiple antennas at the receiving end, where the signal arriving through multiple paths experiences i.n.i.d.


fading, and the origin PDFs for maximal ratio (MR) combining, equal gain (EG) combining, and selection combining are derived. In addition, we also derive the outage probability and average symbol error probability (SEP) expressions under the high-power regime. In the case of average SEP, expressions are evaluated for systems subject to additive white Gaussian noise and additive Laplacian noise. The organization of the work is as follows: In Sect. 2, we elaborate the methodology adopted for evaluating the double shadowed κ-μ fading near the origin, with further evaluation of the PDF with diversity. Section 3 describes the evaluation of the asymptotic expressions for the outage probability and average SEP, followed by their simulation results in Sect. 4. At last, concluding remarks are incorporated in Sect. 5.

2 System Model Let the transmitter have a single antenna, while the receiver has multiple antennas. The signals arriving at the destination through different paths at different antennas encounter i.n.i.d. double shadowed κ-μ fading. The instantaneous SNR double shadowed κ-μ origin PDF over the lth path is given by [13, Eq. (5)]

$$
f_{Y}^{O}(\gamma) \approx \frac{\mu^{\mu}(1+\kappa)^{\mu}\, m^{m}\, \gamma^{\mu-1}}{(\mu\kappa+m)^{m}\, B(\hat{m},\mu)\,(\hat{m}-1)^{\mu}\,\bar{\gamma}^{\mu}},
\tag{1}
$$

where E[Y ] = γ. ¯ The end-to-end communication network makes use of several antennas to increase received SNR and preserve the necessary QOS. Different signalcombining techniques, including MR combining, EG combining, and selection combining, are frequently used to increase system reliability. MR combining produces the highest gain, followed by EG combining and selection combining. The diversity PDFs can be conveniently evaluated because the resulting origin PDFs are independent of summation terms and complicated functions. Figure 1 shows the origin PDF plot for double shadowed κ-μ distribution with different fading parameters. It is examined from the plot that as the system parameters m and γ¯ increase, a parallel down-shift will be observed along with asymptotes following the exact plot for a higher range.

2.1 Origin PDF with MR Combining Diversity The instantaneous SNR at the output of the MRC combiner with L independently but non-identically distributed branches is given by [16]

$$
\gamma_{mrc} = \sum_{l=1}^{L} \gamma_{l},
\tag{2}
$$


Fig. 1 Origin PDF with varying system parameters

where γ_l signifies the lth branch instantaneous SNR. The moment generating function (MGF) of γ_mrc is given as

$$
M_{\gamma_{mrc}}(s) = \prod_{l=1}^{L} M_{\gamma,l}(s),
\tag{3}
$$

where M_{γ,l}(s) is the lth branch MGF, obtained by taking the Laplace transform of (1) using [17, Eq. (3.381.4)]; it immediately follows that

$$
M_{\gamma,l}(s) = \frac{\mu_l^{\mu_l}(1+\kappa_l)^{\mu_l}\, m_l^{m_l}}{(\mu_l\kappa_l+m_l)^{m_l}\, B(\hat{m}_l,\mu_l)\,(\hat{m}_l-1)^{\mu_l}\,\bar{\gamma}^{\mu_l}\, s^{\mu_l}}.
\tag{4}
$$

Substituting (4) into (3), and with the aid of [17, Eq. (3.381.4)], will yield

$$
f_Y(\gamma) \approx \left[\prod_{l=1}^{L} \frac{\tilde{A}\,\Gamma(\mu_l)}{\bar{\gamma}^{\mu_l}}\right] \frac{\gamma^{\sum_{l=1}^{L}\mu_l - 1}}{\Gamma\!\left(\sum_{l=1}^{L}\mu_l\right)},
\tag{5}
$$

where

$$
\tilde{A} = \frac{\mu_l^{\mu_l}(1+\kappa_l)^{\mu_l}\, m_l^{m_l}\,\Gamma(\hat{m}_l+\mu_l)}{(\mu_l\kappa_l+m_l)^{m_l}\,(\hat{m}_l-1)^{\mu_l}\,\Gamma(m_l)\,\Gamma(\mu_l)}.
$$

The earlier expression (5) is re-expressed as

$$
f_Y(\gamma) \approx \frac{\tilde{R}_D}{\bar{\gamma}^{\sum_{l=1}^{L}\mu_l}}\; \gamma^{\sum_{l=1}^{L}\mu_l - 1},
\tag{6}
$$


where R̃_D is used to denote the parameter R̃ for the MR combining, EG combining, and selection combining diversity schemes. For the MR combining scheme,

$$
\tilde{R}_D = \frac{\prod_{l=1}^{L} \tilde{A}\,\Gamma(\mu_l)}{\Gamma\!\left(\sum_{l=1}^{L}\mu_l\right)}.
$$

2.2 Origin PDF with EG Combining Diversity Let the receiver have a number of antennas and the signal arriving through different paths exhibit non-identical fading. The output of the combiner is then given as [13]

$$
X_{egc} = \frac{1}{\sqrt{L}}\sum_{l=1}^{L} X_l,
\tag{7}
$$

where X_l represents the lth path envelope, whose PDF around the origin is deduced by applying the identity [18, Eq. (2.3)] as

$$
f_X(x) \approx \frac{2\,\mu_l^{\mu_l}(1+\kappa_l)^{\mu_l}\, m_l^{m_l}\, x^{2\mu_l-1}}{(\mu_l\kappa_l+m_l)^{m_l}\, B(\hat{m}_l,\mu_l)\,(\hat{m}_l-1)^{\mu_l}\,\bar{x}^{2\mu_l}}.
\tag{8}
$$

Like MR combining, the MGF for EG combining is defined as M_{X_{egc}}(s) = (M_X(s/√L))^L. Now, availing the methodology discussed for MR combining and with the aid of [18, Eq. (2.3)], the origin power PDF for EG combining is similar to (6). The value for R̃_D is

$$
\tilde{R}_D = \frac{2^{L-1}\,\prod_{l=1}^{L}\tilde{A}\,\Gamma(2\mu_l)}{(\sqrt{L})^{2\sum_{l=1}^{L}\mu_l}\;\Gamma\!\left(2\sum_{l=1}^{L}\mu_l\right)}.
$$

2.3 Origin PDF with Selection Combining Diversity Here, the receiver selects the branch with the highest SNR, i.e., γ_sc = max(γ_l), l = 1, 2, . . . , L. The output of the receiver will yield the cumulative distribution function (CDF) given as

$$
F_{Y_{sc}}(\gamma) = \prod_{l=1}^{L} F_{Y,l}(\gamma),
\tag{9}
$$


where F_{Y,l}(γ) is the lth branch CDF, deduced by simple mathematics as

$$
F_{Y,l}(\gamma) \approx \frac{\mu_l^{\mu_l}(1+\kappa_l)^{\mu_l}\, m_l^{m_l}\,\gamma^{\mu_l}}{\mu_l\,(\mu_l\kappa_l+m_l)^{m_l}\, B(\hat{m}_l,\mu_l)\,(\hat{m}_l-1)^{\mu_l}\,\bar{\gamma}^{\mu_l}}.
\tag{10}
$$

Substituting (10) into (9) and differentiating both sides with respect to γ, one obtains that the origin PDF for selection combining is similar to (6), with parameter

$$
\tilde{R}_D = \left(\sum_{l=1}^{L}\mu_l\right)\prod_{l=1}^{L}\frac{\tilde{A}}{\mu_l}.
$$

To the best of our knowledge, the unified derived origin PDF expression for the MR combining, EG combining, and selection combining diversity schemes given by (6) is novel and has not yet been reported in the literature.

3 Digital Communication System Performance Metrics To assess a communication system’s performance, various QOS metrics are utilized. The key performance criteria that are taken into account while designing digital communication systems and cognitive radio networks, respectively, are the outage probability, average SEP, average probability of detection, and average area under the receiver operating characteristics curve. The analysis of these performance metrics over various fading channels has been extensively researched in the literature, which highlights the significance of these performance parameters. In this part, we looked at the high-power expression for the probability of an outage and the average SEP.

3.1 Outage Probability The probability that the received signal strength at the diversity combiner output is below a fixed threshold SNR, i.e., P(γ < γ_th) = ∫_0^{γ_th} f_Y(γ) dγ, is known as the outage probability. The metric helps us in examining the transmission quality of the system. Plugging (6) into the previous relation yields

$$
P(\gamma < \gamma_{th}) \approx \frac{\tilde{R}_D}{\sum_{l=1}^{L}\mu_l}\left(\frac{\gamma_{th}}{\bar{\gamma}}\right)^{\sum_{l=1}^{L}\mu_l}.
\tag{11}
$$
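For readers who want to evaluate the asymptote numerically, the sketch below implements Ã, the MR-combining R̃_D, and the outage expression (11) in Python. It is only an illustrative evaluation of the formulas as reconstructed here (all branch average SNRs are taken equal to γ̄), not the authors' simulation code; function names are assumptions.

```python
import math

def A_tilde(mu, kappa, m, m_hat):
    """A-tilde of Eq. (5) for one branch."""
    num = mu**mu * (1 + kappa)**mu * m**m * math.gamma(m_hat + mu)
    den = (mu * kappa + m)**m * (m_hat - 1)**mu * math.gamma(m) * math.gamma(mu)
    return num / den

def R_tilde_mrc(branches):
    """R_D-tilde for MR combining; branches = list of (mu, kappa, m, m_hat)."""
    prod = 1.0
    for mu, kappa, m, m_hat in branches:
        prod *= A_tilde(mu, kappa, m, m_hat) * math.gamma(mu)
    return prod / math.gamma(sum(b[0] for b in branches))

def outage_asymptote(branches, snr_db, snr_th_db):
    """High-SNR outage probability, Eq. (11)."""
    s = sum(b[0] for b in branches)
    g, g_th = 10**(snr_db / 10), 10**(snr_th_db / 10)
    return R_tilde_mrc(branches) / s * (g_th / g)**s

# Example: dual-branch MRC with the parameter values used in Fig. 2.
branches = [(1, 1, 0.5, 1.25), (2, 1.5, 1.5, 1.5)]
print(outage_asymptote(branches, snr_db=20, snr_th_db=0))
```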


3.2 Average SEP Transmission of digital data gets penalized due to impediments occurring in the wireless channel. The average error probability is the metric used to quantify the probability of a signal being received in error in a practical environment. The average bit/symbol error probability is obtained by averaging the conditional error probability under AWG noise or AL noise over the channel statistics, given by [13, Eq. (14)]

$$
\bar{P}_e = \int_{0}^{\infty} P_e(\gamma)\, f_Y(\gamma)\, d\gamma.
\tag{12}
$$

Coherent SEP with AWG Noise The unified representation of the coherent SEP is given by [19, Eq. (17)]

$$
P_e(\gamma) = A_{coh}\,\mathrm{erfc}\!\left(\sqrt{B_{coh}\,\gamma}\right),
\tag{13}
$$

where the constants A_coh and B_coh are enlisted in [19, Table I] and erfc(·) denotes the complementary error function. Substituting (6) and (13) into (12), setting B_coh γ = t, and using [20, Eq. (2.8.2.1)] yields

˜ D Acoh Γ R √

 π

L 

 

L 

 μl +

1 2

l=1

L 

μl (γ¯ Bcoh )l=1

μl

.

(14)


Coherent SEP with AL Noise The conditioned SEP for the binary phase shift (BPS) keying and quadrature phase shift (QPS) keying schemes is given as

$$
P_e(\gamma) = \left(a + b\sqrt{\gamma}\right) e^{-2\sqrt{\gamma}},
\tag{15}
$$

in which a and b are defined in Table 1. Plugging (15) and (6) into (12), setting √γ = t, and with the aid of [17, Eq. (3.381.4)], we get

$$
\bar{P}_e \approx \frac{2\tilde{R}_D}{\bar{\gamma}^{\sum_{l=1}^{L}\mu_l}}
\left\{
\frac{a\,\Gamma\!\left(2\sum_{l=1}^{L}\mu_l\right)}{2^{2\sum_{l=1}^{L}\mu_l}}
+
\frac{b\,\Gamma\!\left(2\sum_{l=1}^{L}\mu_l + 1\right)}{2^{2\sum_{l=1}^{L}\mu_l + 1}}
\right\}.
\tag{16}
$$

Non-coherent SEP with AWG Noise The instantaneous error probability is given as [16, Eq. (18)]

$$
P_e(\gamma) = A_{NCoh}\,\exp(-B_{NCoh}\,\gamma),
\tag{17}
$$


Table 1 Parameters for AL noise SEP

Modulation scheme | a   | b
BPS keying        | 1/2 | 0
QPS keying        | 3/4 | 1

in which A_NCoh and B_NCoh are defined in [16, Table 2]. Substituting (17) along with (6) into (12) and then applying [17, Eq. (3.381.4)], one obtains

$$
\bar{P}_e \approx \frac{A_{NCoh}\,\tilde{R}_D\,\Gamma\!\left(\sum_{l=1}^{L}\mu_l\right)}{\left(\bar{\gamma}\, B_{NCoh}\right)^{\sum_{l=1}^{L}\mu_l}}.
\tag{18}
$$
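A small numeric check of the SEP asymptotes, complementing the outage sketch given after Eq. (11), is shown below. It evaluates (14) and (18) for a given branch set; the modulation constants A_coh, B_coh, A_NCoh, and B_NCoh must be taken from the cited tables, so the values used in the example are placeholders for illustration only.

```python
import math

def sep_coherent_awgn(branches, snr_db, A_coh, B_coh, R_D):
    """High-SNR coherent average SEP with AWG noise, Eq. (14)."""
    s = sum(mu for mu, *_ in branches)
    g = 10**(snr_db / 10)
    return R_D * A_coh * math.gamma(s + 0.5) / (math.sqrt(math.pi) * s * (g * B_coh)**s)

def sep_noncoherent_awgn(branches, snr_db, A_nc, B_nc, R_D):
    """High-SNR non-coherent average SEP with AWG noise, Eq. (18)."""
    s = sum(mu for mu, *_ in branches)
    g = 10**(snr_db / 10)
    return A_nc * R_D * math.gamma(s) / (g * B_nc)**s

branches = [(1, 1, 0.5, 1.25), (2, 1.5, 1.5, 1.5)]
R_D = 0.12  # placeholder; compute with the R_tilde_mrc() sketch shown earlier
print(sep_coherent_awgn(branches, 20, A_coh=0.5, B_coh=1.0, R_D=R_D))
print(sep_noncoherent_awgn(branches, 20, A_nc=0.5, B_nc=1.0, R_D=R_D))
```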

4 Numerical Analysis This section deals with the validation of the derived expressions through a few selected numerical simulation results. The results are compared with Monte Carlo simulations to confirm their accuracy. With nearly 10^6 samples, double shadowed κ-μ random variables are generated for the Monte Carlo simulation. Outage probability versus average SNR is plotted in Fig. 2 for single and double MR combin-

Fig. 2 Outage probability versus average SNR with MRC diversity


Fig. 3 Comparison of EG combining and selection combining over average SEP

ing diversity with κ = {1, 1.5}, μ = {1, 2}, m = {0.5, 1.5}, mˆ = {1.25, 1.5}, and γth = {0(d B), 5(d B)}. It is noted that the outage decreases as average SNR and diversity paths increase, while it increases as threshold SNR increases. It is further noted that a significant improvement is achieved as the diversity order increases under a lower threshold in comparison to the higher threshold. Figure 3 depicts the plot of average SEP for M-ary phase shift keying scheme for single, dual, and triple EG combining and selection combining diversity schemes with κ = {1.5, 2.5, 1.5}, μ = {2, 1, 1}, m = {1, 1.5, 1}, mˆ = {1.5, 2, 1.5}, and M = 4, 8. It can be observed from the figure that incorporating multiple antennas at the receiver improves the received signal strengths, which in turn lowers the error probability. A perfect agreement is observed among the plots originated from derived results and Monte-Carlo simulations at high power.

5 Conclusion In this article, we explored the sum of i.n.i.d. double shadowed κ-μ fading channels around the origin. These results were further utilized to assess the system performance in terms of the outage probability and average SEP under a high-power regime. Both AWG noise and AL noise were considered while deriving the results for the coherent average SEP. The statistics presented here are general and valid for both integer and non-integer parameters, except m̂, which must take values greater than one. All the evaluated expressions were also validated through simulation results.


References 1. Lei H, Gao C, Ansari IS, Guo Y, Pan G, Qaraqe KA (2016) On physical-layer security over SIMO generalized-K fading channels. IEEE Trans Veh Technol 65(9):7780–7785 2. Lei H, Zhang H, Ansari IS, Pan G, Qaraqe KA (2016) Secrecy outage analysis for SIMO underlay cognitive radio networks over generalized-K fading channels. IEEE Signal Process Lett 23(8):1106–1110 3. Abo Rahama Y, Ismail MH, Hassan MS (2018) On the sum of independent fox’s H-function variates with applications. IEEE Trans Veh Technol 67(8):6752–6760 4. Ben Issaid C, Alouini M-S, Tempone R (2018) On the fast and precise evaluation of the outage probability of diversity receivers over α-μ, κ-μ, and η-μ fading channels. IEEE Trans Wirel Commun 17(2):1255–1268 5. Du H, Zhang J, Cheng J, Ai B (2020) Sum of Fisher-Snedecor F random variables and its applications. IEEE Open J Commun Soc 1:342–356 6. Perim V, Sánchez JDV, Filho JCSS (2020) Asymptotically exact approximations to generalized fading sum statistics. IEEE Trans Wirel Commun 19(1):205–217 7. Wang Z, Giannakis GB (2003) A simple and general parameterization quantifying performance in fading channels. IEEE Trans Commun 51(8):1389–1398 8. Peppas KP, Zamkotsian M, Lazarakis F, Cottis PG (2014) Asymptotic error performance analysis of spatial modulation under generalized fading. IEEE Wirel Commun Lett 3(4):421–424 9. Zhong C, Wong K-K, Jin S, Alouini M-S, Ratnarajah T (2011) Asymptotic analysis for Nakagami-m fading channels with relay selection. In: 2011 IEEE international conference on communications (ICC), Kyoto, Japan, pp 1–5. https://doi.org/10.1109/icc.2011.5963044 10. Illi E, El Bouanani F, Ayoub F (2017) Asymptotic analysis of underwater communication system subject to κ-μ shadowed fading channel. In: 13th international wireless communications and mobile computing conference (IWCMC),Valencia, Spain, pp 855–860. https://doi.org/10. 1109/IWCMC.2017.7986397 11. Wang X, Cheng W, Xu X (2020) On the exact and asymptotic analysis of wireless transmission over α − μ/inverse gamma composite fading channels. In: 2020 international conference on wireless communications and signal processing (WCSP), Nanjing, China, pp 789–794. https:// doi.org/10.1109/WCSP49889.2020.9299764 12. Chauhan PS, Kumar S, Upaddhyay VK et al (2021) Performance analysis of ED over airto-ground and ground-to-ground fading channels: a unified and exact solution. Int J Electron Commun 138:153839 13. Chauhan PS, Kumar S, Upaddhyay VK et al (2022) Generalised asymptotic frame-work for double shadowed κ − μ fading with application to wireless communication and diversity reception. Wirel Netw 28:1923–1934 14. Simmons N, da Silva CRN, Cotton SL, Sofotasios PC, Yacoub MD (2019) Double shadowing the rician fading model. IEEE Wirel Commun Lett 8(2):344–347 15. Simmons N, Silva CRND, Cotton SL, Sofotasios PC, Yoo SK, Yacoub MD (2020) On shadowing the κ − μ fading model. IEEE Access 8:120513–120536 16. Chauhan PS, Tiwari D, Soni SK (2017) New analytical expressions for the performance metrics of wireless communication system over Weibull/Lognormal composite fading. Int J Electron Commun 92:397–405 17. Gradshteyn IS, Ryzhik IM (2007) Table of integrals, series and products, 7th edn. Academic, San Diego, CA, USA 18. Simon MK, Alouini MS (2004) Digital communication over fading channels, 2nd edn. WileyIEEE Press, New York 19. Badarneh OS, Aloqlah MS (2016) Performance analysis of digital communication systems over α − η − μ fading channels. IEEE Trans Veh Technol 65(10):7972–7981 20. 
Prudnikov AP, Brychkov YA, Marichev OI (1986) Integrals and series volume 2: special functions, 1st ed. Gordon and Breach Science Publishers

Hjorth Parameters in Event-Related Potentials to Detect Minimal Hepatic Encephalopathy

Luis Fernando Caporal-Montes de Oca, Ángel Daniel Santana-Vargas, Roberto Giovanni Ramírez-Chavarría, Khashayar Misaghian, Jesus Eduardo Lugo-Arce, and Argelia Pérez-Pacheco

L. Fernando Caporal-Montes de Oca: Facultad de Ciencias, Universidad Nacional Autónoma de México, 04510 Mexico City, México. e-mail: [email protected]
Á. Daniel Santana-Vargas: Directorate of Research, Hospital General de México, Dr. Eduardo Liceaga, 06726 Mexico City, México
R. Giovanni Ramírez-Chavarría: Instituto de Ingenieria, Universidad Nacional Autónoma de México, 04510 Mexico City, México. e-mail: [email protected]
K. Misaghian · J. Eduardo Lugo-Arce (B): Faubert Lab, École d'optométrie, Université de Montréal, 3744 Jean Brillant, Montréal H3T 1P1, Québec, Canada. e-mail: [email protected]; Sage-Sentinel Smart Solutions, 1919-1 Tancha, Onna-son Kunigami-gun, Okinawa 904-0495, Japan
J. Eduardo Lugo-Arce: Facultad de Ciencias Físico-Matemáticas, Benemérita Universidad Autónoma de Puebla, Av. San Claudio y Av. 18 sur, Col. San Manuel Ciudad Universitaria, Puebla Pue 72570, México
A. Pérez-Pacheco: Unidad de Investigación y Desarollo Tecnológico (UIDT), Hospital General de México, Dr. Eduardo Liceaga, 06726 Mexico City, México. e-mail: [email protected]

Abstract Minimal hepatic encephalopathy (MHE) is a subtle form of hepatic encephalopathy (HE), a reversible syndrome of impaired brain function occurring in patients with advanced liver dysfunction, often culminating in a hepatic coma and consequently death. Although most patients who develop HE first present the typical abnormalities of MHE, sometimes this stage goes unnoticed by psychometric tests focused on diagnosing it. Therefore, there is a need for the development of more efficient diagnostic tests to improve the patient's quality of life and thus increase the prognosis for survival. The main objective of this study was to analyze the existence of parameters extracted from the P300 wave that are useful for differentiating between


control subjects and cirrhotic patients with and without MHE, in accordance with the diagnosis by the Psychometric Hepatic Encephalopathy Score (PHES) and the Critical Flicker Frequency (CFF). For this, the evoked potentials of 40 controls, 51 cirrhotic patients without MHE, and 20 with MHE were examined. Thirty statistical, temporal, frequency, and uncertainty indicator characteristics were extracted from the P300 wave. Statistically significant differences (p < 0.05) were found in the Hjorth parameter of complexity, and an evident reduction in the latency and amplitude of the evoked potentials in the groups studied.

Keywords Minimal hepatic encephalopathy · Evoked potentials · Empirical mode decomposition · Amplitude · Latency · Complexity · Mobility · Activity

1 Introduction Hepatic encephalopathy (HE) is a neuropsychiatric alteration in which the cognitive functioning of patients with liver disease is compromised [1]. The pathophysiology of HE is multifactorial, involving manganese (Mn) accumulation in the brain, and the damaging effect of ammonia (NH4) on glial cells [2]. HE is usually classified according to the underlying disease. Type A: due to acute liver failure; Type B: due to predominantly surgical portosystemic bypass or shunting; and Type C: due to cirrhosis. Traditionally, the severity of HE has been classified according to the West Haven criteria, proposed in 1977 [3], which divides severity into four levels according to the clinical manifestations. In grade I, patients show a lack of consciousness and attention and some subtle personality changes that are obvious to their relatives. In grade II, the most intriguing finding is disorientation. In grade III, patients are disoriented by place and situation and may exhibit bizarre behavior but respond to stimuli. In grade IV, patients are in a coma [4]. Although its authors already anticipated the existence of a grade 0 in which abnormalities could not be easily detected, it was not until 1998 when Schomerus and Hamster [5] formally coined the term minimal hepatic encephalopathy (MHE) to refer to the condition preceding the development of the clinical features of HE but without underestimating the importance of its detrimental effect on the quality of life of those who suffer from it [6]. This primary stage of HE is characterized by a subtle impairment of neurocognitive such as reduced motor response speed, decreased attention span, and slow processing speed [7, 8].

1.1 Diagnostic The Psychometric Hepatic Encephalopathy Score (PHES) is a valuable tool for diagnosing MHE worldwide. The test’s sensitivity is 96 % and the specificity is 100 % for diagnosing HE [4, 9]. The PHES battery is composed of five neuropsycholog-


ical tests: the digit-symbol test (DST), number connection tests A and B (NCT-A, NCT-B), the serial dotting test (SDT), and the line-drawing test (LDT) [5, 7, 9]. Although the PHES offers an efficient diagnosis, the reality is that sometimes MHE goes unnoticed, so in the last two decades an effort has been made to standardize the diagnosis based on physiological tests, characterized by a higher level of accuracy and a lower degree of subjectivity. In 2003, the World Gastroenterology Organization through the Vienna Consensus considered cognitive tests, topographic mapping of brain electrical activity, and long-latency auditory evoked potentials (LAP300) as valuable resources for detecting MHE [7]. Since then, many investigations have focused on studying the potential of these methods; however, so far only the critical flicker frequency (CFF) (cut-off frequency at which a subject perceives a flickering light as continuous) has been consolidated as an efficient test for the diagnosis of EHM, as it is directly related to the level of alertness of the central nervous system [10].

1.2 Event-Related Potentials (ERP) Event-related brain potentials are described as changes in electrocortical activity recorded from the scalp and are evoked by an internal (cognitive or motor) or external (sensory) event [11, 12]; therefore, they are unrelated to the spontaneous activity of electroencephalography (EEG). Their importance lies in that they can provide information associated with cognitive processes in the brain when elicited by specific neuronal populations. The characteristics of the so-called cognitive potentials depend only on the subject of study, so they are considered endogenous components; however, they are evoked after presenting a visual or auditory stimulus, so they additionally contain exogenous components that depend on the nature of the stimulus and far-field components that are associated with a reflex of the peripheral nervous system receptors. Parameters of the endogenous components such as latency (time elapsed from stimulus onset to the maximum/minimum of some deflection) and amplitude (value of the electrical potential at the maximum/minimum of some deflection) have been studied to find differentiators between population groups and create new alternatives for the diagnosis of MHE. A relationship has been found between both latency and amplitude values, with processing speed and attention levels, respectively [13–15].

1.3 The P300 Wave and the Oddball Paradigm The P300 wave is a cognitive evoked potential that occurs when a subject detects a somatosensory, visual, or auditory stimulus [16]. The P300 name derives from showing a positive deflection with maximum amplitude at approximately 300 ms after stimulus onset and is found prominently over the parietal region. The P300


can be used as a physiological marker to evaluate the brain’s potential for information processing [17]. The P300 amplitude manifests central nervous system activity that reflects attention to incoming stimulus information, such that greater attention produces large P3 waves. Latency is a measure of the speed the brain sorts stimuli so that shorter latencies indicate superior mental performance relative to longer latencies [18]. Currently, the P300 is obtained using the oddball paradigm, where two stimuli, one of higher probability to occur than the other, are presented in a semi-random order. The subject is instructed to respond to the infrequent stimulus rather than the frequently presented or standard stimulus. Thus, potentials evoked by frequent stimuli will have far-field components, while infrequent stimuli will evoke potentials with far and near-field components. The latter will be related to recognition and discrimination between stimuli, so they are an expression of the subject’s level of attention and processing speed [19].

1.4 The Hjorth Parameters In 1970, B. Hjorth described a direct relationship between descriptor parameters of the amplitude/time pattern of the EEG signal with its frequency characteristics. These are called Hjorth parameters: activity, mobility, and complexity. Although this set of three statistical values calculated in the time-domain appears to be descriptors of amplitude, signal slope, and slope dispersion, they have a physical meaning in the frequency-domain [16]. The activity parameter represents the variance of the signal over time and is directly related to the signal power. Mobility represents the power spectrum’s average frequency or proportion of standard deviation. Finally, the complexity represents the change in frequency; it is defined as the ratio between the first derivative’s mobility and the signal’s mobility [20].
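For reference, the three Hjorth parameters can be computed directly from their time-domain definitions, as in the short Python function below. This is a generic textbook implementation of activity, mobility, and complexity, written here only for illustration; it is not the authors' feature-extraction code.

```python
import numpy as np

def hjorth_parameters(x):
    """Return (activity, mobility, complexity) of a 1-D signal x.

    activity   : variance of the signal (related to signal power).
    mobility   : sqrt(var(dx)/var(x)), an average-frequency measure.
    complexity : mobility of the first derivative divided by the mobility of x.
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = x.var()
    mobility = np.sqrt(dx.var() / x.var())
    complexity = np.sqrt(ddx.var() / dx.var()) / mobility
    return activity, mobility, complexity
```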

2 Methods 2.1 Study Population A total of 40 control subjects with no history of liver or neurological disease and 71 patients from the Liver Clinic of the Gastroenterology Service of the Hospital General de México “Dr. Eduardo Liceaga”, previously diagnosed with cirrhosis of different pathogenesis (alcoholic, hepatitis C virus, nonalcoholic steatohepatitis, primary biliary cholangitis, cryptogenic, and idiopathic), agreed to participate in the study. Table 1 shows the characteristics of the population studied.


Table 1 Characteristics of the population studied

Group                          | Age (years)   | Female (%) | Education (years)
Control (n = 40), mean ± sd    | 47.72 ± 11.04 | 27 (67)    | 13.2 ± 3.6
MHE (n = 20), mean ± sd        | 57.75 ± 8.58  | 12 (60)    | 7.4 ± 4.0
Cirrhotic (n = 51), mean ± sd  | 53.8 ± 7.18   | 28 (55)    | 8.0 ± 3.5

In turn, the cases were divided into two groups, 51 cirrhotic patients without MHE and 20 cirrhotic patients with MHE diagnosed by PHES (considering -4 points as the limit defining the existence of MHE) and CFF (taking 39 Hz as the cut-off frequency for diagnosis) in concordance. The study was approved by Comité de Ética en Investigación del Hospital General de México “Dr. Eduardo Liceaga” (Reference DI/15/107/03/007).

2.2 Auditory Stimulation A descriptive study was performed to analyze and compare the evoked potentials of the controls with the patients in both case groups; the study used non-randomized sampling by quotas and a prospective analysis period. The EEG signals of each participant were recorded while performing an auditory task in which stimuli were presented following the oddball evocation paradigm in its two-stimulus and three-stimulus variants. In the two-stimulus study, a 1000 Hz tone with a duration of 70 ms and a 90% probability of occurrence was presented together with a 2000 Hz tone of the same duration but 10% probability of occurrence, to which the participants had to give a motor response (push a button). In the three-stimulus study, a distractor tone composed of white noise of the same duration and 10% frequency of occurrence was added; the occurrence of the other two tones was adjusted to 80% and 10%. The EEG signal was acquired with the SCAN version 4 program (Compumedics International, El Paso, Texas, USA), with a cap with electrodes on the scalp located according to the international 10-20 system. The Stim2 program (Compumedics International, El Paso, Texas, USA) was used for the emission of the stimuli. Both studies consisted of semi-random series of 300 stimuli each, with an inter-stimulus interval of 800 to 1200 ms to avoid habituation, a sampling frequency of 1000 Hz, a gain of 53.97 dB, and a notch filter for 60 Hz noise. All electrodes were referenced to the averaged ear lobes (Fig. 1).


Fig. 1 Location of the electrodes on which the feature analysis was centered (FZ, CZ, PZ) and the auxiliary electrodes (FP1 and FP2) used for the threshold artifact detection algorithm

2.3 Removal of Artifacts Recordings contaminated by an excessive number of artifacts, or by distortions caused by the sudden detachment of electrodes, were removed manually; also, digital filtering was implemented to eliminate the contribution of harmonic signals at 120 and 180 Hz. Since the EEG signal is mainly contaminated by ocular artifacts, and taking advantage of the fact that their amplitude is about ten times larger than the electroencephalographic signal [21], an artifact detection algorithm was implemented based on an amplitude threshold in one of the two electrodes of the anterior frontal zone (FP1 or FP2, as they are the closest to the eyes). Once the time at which the artifact occurred was located, a correction was performed in a window of 1000 ms, where empirical mode decomposition (EMD) [22] was applied to remove the intrinsic mode functions where the contributions of the artifacts were found (Fig. 2).
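A compact Python sketch of this threshold-plus-EMD cleaning step is given below. It is only an outline of the procedure described in the text: the threshold value, the choice of which intrinsic mode functions to discard, and the use of the PyEMD package are assumptions for illustration, not the authors' parameters or tools.

```python
import numpy as np
from PyEMD import EMD  # assumed third-party package providing empirical mode decomposition

FS = 1000          # sampling frequency in Hz, as reported in the acquisition setup
THRESHOLD_UV = 75  # assumed amplitude threshold for ocular artifacts on FP1/FP2

def clean_ocular_artifacts(fp_channel, target_channel, drop_imfs=(3, 4)):
    """Detect ocular artifacts on FP1/FP2 and correct the target channel by
    removing selected intrinsic mode functions within a 1000 ms window."""
    signal = target_channel.copy()
    artifact_idx = np.flatnonzero(np.abs(fp_channel) > THRESHOLD_UV)
    for idx in artifact_idx[::FS]:  # thin the detections to avoid overlapping corrections
        start = max(0, idx - FS // 2)
        stop = min(len(signal), idx + FS // 2)
        imfs = EMD().emd(signal[start:stop])      # decompose the 1000 ms window
        keep = [imf for k, imf in enumerate(imfs) if k not in drop_imfs]
        if keep:
            signal[start:stop] = np.sum(keep, axis=0)  # rebuild without artifact IMFs
    return signal
```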

2.4 Segmentation, Baseline Correction, and Averaging Once the artifacts were removed from all signals, a segmentation into 1100 ms epochs associated with each stimulus was performed. The first 100 ms were taken before the stimulus onset and served to correct each epoch by centering it at 0 μV through the following equation:

$$
E_i = E_i - \frac{1}{100}\sum_{j=1}^{100} E_j,
\tag{1}
$$


Fig. 2 a EEG study segment in the anterior frontal zone (FP1 and FP2) and midline (FZ, CZ, and PZ) electrode; in blue, the maximum amplitude of the ocular artifacts is indicated. b The same segment was corrected after removing intrinsic mode functions four and five at electrodes FZ, CZ and five at electrode PZ

where E i is the EEG signal’s amplitude value at the epoch’s i-th ms. One of the significant difficulties in studying event-related potentials (ERPs) is that their amplitude is an order of magnitude below the regular EEG activity, so that when they are independent, the EEG signal introduces a noise in ERPs called cephalic noise. To eliminate its contribution and to be able to detect ERPs, it was necessary to perform averaging of epochs at the population group level. This method is based on the strong assumption that the ERP is deterministic, repeatable, and independent of the EEG [23] (Figs. 3 and 4).
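The segmentation, the baseline correction of Eq. (1), and the epoch averaging can be expressed in a few lines of NumPy, as sketched below. The epoch layout (1100 samples at 1000 Hz, with the stimulus onset at sample 100) follows the description in the text; the function and variable names are illustrative assumptions.

```python
import numpy as np

FS = 1000                      # Hz; one sample per millisecond
EPOCH_MS, BASELINE_MS = 1100, 100

def extract_epochs(eeg, stim_onsets):
    """Cut 1100 ms epochs starting 100 ms before each stimulus onset and
    subtract the pre-stimulus mean (Eq. (1)) from every epoch."""
    epochs = []
    for onset in stim_onsets:
        start = onset - BASELINE_MS
        if start < 0 or start + EPOCH_MS > len(eeg):
            continue
        epoch = eeg[start:start + EPOCH_MS].astype(float)
        epoch -= epoch[:BASELINE_MS].mean()   # baseline correction to 0 uV
        epochs.append(epoch)
    return np.array(epochs)

def average_erp(epochs):
    """Average across epochs to suppress the spontaneous EEG ('cephalic noise')."""
    return epochs.mean(axis=0)
```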

2.5 Feature Extraction of the P300 Wave Group epoch averaging served to identify that, in general, the onset of the N200 deflection and the end of the P300 wave lie within 200 and 500 ms after stimulus onset, so this analysis window was selected to extract scalar features at the epoch-averaged level per subject, focusing the study on the ERPs of target stimuli at the FZ, CZ, and PZ midline electrodes.


Fig. 3 Two-stimulus study, an average of epochs for frequent (left) and target (right) stimuli in the different population groups. FZ electrode

Fig. 4 Three-stimulus study, an average of epochs for frequent stimuli (left), targets (right), and distractor (bottom) in the different population groups. FZ electrode


Table 2 Statistical and temporal characteristics extracted from the average of individual epochs

Characteristic              | 200-500 ms window | N200-P300 window
Activity                    | ✗                 | ✗
Mobility                    | ✗                 | ✗
Complexity                  | ✗                 | ✗
Kurtosis                    | ✗                 | ✗
Shannon entropy             | ✗                 | ✗
Rényi entropy               | ✗                 | ✗
Electrode 1 - Electrode 2   | ✗                 | ✗
Area                        |                   | ✗
Average amplitude           |                   | ✗
Deflection latency N200     | Invariant (window-independent)
Deflection amplitude N200   | Invariant (window-independent)
Deflection latency P300     | Invariant (window-independent)
Deflection amplitude P300   | Invariant (window-independent)

In addition, a variable window between the N200 and P300 deflections was used for each subject. The same characteristics were extracted, adding the mean amplitude and the area under the curve between both points. The parameter labeled "Electrode 1 - Electrode 2" corresponds to the area between the potential curves of two different electrodes, always considering as subtrahend the one farthest from the frontal zone, since it is well known that at P300 the amplitude is maximum in that zone and decreases along the midline when reaching the parietal zone [24]; this parameter can therefore be interpreted as an indicator of the cohesion between waves in different brain zones (Table 2).

2.6 Statistical Analysis

Fisher's analysis of variance (ANOVA) was performed to determine which of the extracted characteristics offered a greater capacity to distinguish between population groups (considering a significance level of 0.05). Since the acquisition was made simultaneously at all electrodes, only characteristics with a p-value below the significance level at all three midline electrodes were considered discriminatory.
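A sketch of the screening rule just described, using scipy's one-way ANOVA and keeping a feature only when p < 0.05 at FZ, CZ, and PZ simultaneously; the data layout (a nested dictionary of per-group arrays) is hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway

ALPHA = 0.05
ELECTRODES = ("FZ", "CZ", "PZ")

def discriminative_features(features):
    """features[electrode][feature_name] -> list of three arrays
    (control, cirrhotic, MHE), one value per subject.
    Returns the features with p < ALPHA on all three midline electrodes."""
    selected = []
    for name in features[ELECTRODES[0]]:
        p_values = [f_oneway(*features[el][name]).pvalue for el in ELECTRODES]
        if all(p < ALPHA for p in p_values):
            selected.append((name, dict(zip(ELECTRODES, p_values))))
    return selected
```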


3 Results

The ANOVA showed that, for the three-stimulus study, five characteristics presented statistically significant differences between at least two groups: three of them are indicators of the statistical properties of the ERPs (complexity, mobility, and kurtosis) and the other two are temporal parameters (latency of the N200 deflection and latency of the P300 deflection), all of them within the 200–500 ms range. For the two-stimulus study, the features that showed statistically significant differences between at least two population groups were the same as for the three-stimulus study, plus the amplitude of the P300 deflection and the complexity and mobility values in the N200–P300 window (Tables 3 and 4).

4 Discussion

The results show statistically significant differences in the latency value of the P300 wave between the control group and the MHE group. Latency is a parameter traditionally associated with processing speed, and these results are consistent with individuals in the MHE group having longer latencies than those in the control group. However, in both study modalities these differences were found only between these two groups, making latency a parameter unable to differentiate subjects of the control group from the cirrhotic group, or the cirrhotic group from the MHE group (Tables 5, 6 and 7).

Table 3 Results of statistical analysis between population groups, two-stimulus study. Characteristics with p-values < α at the three electrodes

Characteristic    Groups                p(FZ)    p(CZ)    p(PZ)
Complexity        Control - MHE         1.4E-6   5.7E-6   1.1E-4
                  Control - Cirrhotic   1.1E-3   1.8E-3   2.5E-3
                  Cirrhotic - MHE       7.7E-3   4.4E-5   1.5E-2
Mobility          Control - MHE         4.9E-6   7.0E-9   3.4E-8
                  Control - Cirrhotic   1.2E-4   2.4E-5   2.0E-4
N200 latency      Control - MHE         1.1E-3   4.0E-4   2.1E-4
                  Control - Cirrhotic   3.0E-4   1.0E-4   4.2E-5
P300 latency      Control - MHE         2.9E-3   7.9E-3   1.9E-2
P300 amplitude    Control - MHE         4.8E-3   2.4E-3   3.7E-3
                  Control - Cirrhotic   4.1E-5   3.0E-4   6.0E-4


Table 4 Results of statistical analysis between population groups, three-stimulus study. Characteristics with p-values < α at the three electrodes

Characteristic    Groups                p(FZ)    p(CZ)    p(PZ)
Complexity        Control - MHE         4.9E-5   1.2E-5   2.7E-9
                  Control - Cirrhotic   9.1E-4   1.8E-2   4.5E-2
                  Cirrhotic - MHE       2.6E-2   4.0E-4   3.2E-6
Mobility          Control - MHE         3.0E-4   2.9E-5   4.6E-6
                  Control - Cirrhotic   2.9E-3   5.1E-3   1.9E-3
N200 latency      Control - MHE         9.5E-5   3.0E-4   1.6E-2
                  Control - Cirrhotic   4.8E-3   7.1E-3   3.3E-2
P300 latency      Control - MHE         9.5E-3   3.0E-4   3.5E-3

Table 5 Summary statistics of P300 deflection latency values of the groups

Study           Group       μ(FZ)      μ(CZ)      μ(PZ)
Two stimuli     Control     345.7 ms   349.1 ms   344.1 ms
                Cirrhotic   369.5 ms   367.7 ms   359.6 ms
                MHE         388.5 ms   388.9 ms   379.3 ms
Three stimuli   Control     344.2 ms   343.2 ms   343.3 ms
                Cirrhotic   375.0 ms   371.6 ms   372.8 ms
                MHE         375.0 ms   378.5 ms   379.0 ms

Table 6 Summary statistics of P300 deflection amplitude values

Study           Group       μ(FZ)     μ(CZ)     μ(PZ)
Two stimuli     Control     7.33 μV   7.29 μV   6.06 μV
                Cirrhotic   3.78 μV   4.23 μV   3.74 μV
                MHE         4.15 μV   4.02 μV   3.49 μV

Table 7 Summary statistics of the complexity values in both studies

Study           Group       μ(FZ)   μ(CZ)   μ(PZ)
Two stimuli     Control     12.6    13.7    12.1
                Cirrhotic   9.76    10.2    9.37
                MHE         7.25    6.19    7.04
Three stimuli   Control     10.0    9.72    8.90
                Cirrhotic   7.76    8.65    7.57
                MHE         6.01    5.26    4.66


A similar result was obtained for the latency of the N200 wave, while the amplitude of P300, a parameter associated with the individual's level of attention, only showed statistically significant differences in the two-stimulus study. These results would indicate that subjects in the control group present higher levels of attention when identifying target stimuli than both groups of cases; however, again this does not yield a parameter with the potential to anticipate the development of MHE in patients with cirrhosis. Finally, concerning Hjorth's parameters, complexity gave better results than mobility in distinguishing between the case groups. Furthermore, complexity and mobility showed statistically significant differences in the three combinations of groups on which Fisher's analysis was performed. In the frequency domain, mobility is equivalent to the weighted average frequency, while complexity corresponds to the ratio of the average frequency of the first derivative to the average frequency of the original signal. In the particular case of complexity = 1, the spectrum is discrete, i.e., the signal is a pure sine or cosine; on the contrary, as the complexity value increases, the original signal must be composed of a greater number of instantaneous frequencies. This last result implies a decrease in the complexity of the evoked ERPs of patients with cirrhosis and, to a greater extent, of those who have already developed MHE. This result aligns with other reports in the literature concerning the slowing of waves due to worsening processing speed [18].
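A quick numerical check of this statement with the discrete Hjorth definitions: a pure sinusoid gives a complexity close to 1, and adding broadband content raises it (the signal parameters are arbitrary).

```python
import numpy as np

def hjorth_mobility_complexity(x):
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    mob = np.sqrt(np.var(dx) / np.var(x))
    comp = np.sqrt(np.var(ddx) / np.var(dx)) / mob
    return mob, comp

rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1e-3)                        # 2 s at 1 kHz
pure = np.sin(2 * np.pi * 10 * t)                  # single spectral line
noisy = pure + 0.5 * rng.standard_normal(t.size)   # broadband content added

print(hjorth_mobility_complexity(pure)[1])   # close to 1.0
print(hjorth_mobility_complexity(noisy)[1])  # clearly greater than 1
```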

5 Conclusions

This work analyzed auditory evoked potentials using the oddball paradigm with two and three stimuli in a control group and in cirrhotic patients with and without minimal hepatic encephalopathy (MHE). We emphasize the potential of using empirical mode decomposition to correct electroencephalographic recordings offline, which can undoubtedly be enhanced with the simultaneous acquisition of the oculogram and electrocardiogram, resulting in an efficient, easy-to-use, low-computational-cost correction. Complexity was the Hjorth parameter that showed the greatest statistically significant differences (p < 0.05) between the groups. The MHE and non-MHE groups showed an evident increase in latency and considerable reductions in the amplitude of the evoked potentials compared with the control group. This prolonged latency of the P300 waves is due to the impairment of processing speed; however, it only showed statistically significant differences between the MHE and control groups. The results of this work are promising, since the Hjorth complexity parameter could be a good indicator for detecting MHE in cirrhotic patients, given the known relationship between MHE and the development of HE. However, we recognize that this study is limited by the possibility that variables such as age, sex, and educational level could act as confounders, which is why we recommend using corrections or improving the sampling of the population.


References 1. Häussinger D, Dhiman RK, Felipo V, Görg B, Jalan R, Kircheis G, Merli M, Montagnese S, Romero-Gomez M, Schnitzler A et al (2022) Hepatic encephalopathy. Nat Rev Dis Primers 8(1):1–22 2. Idrovo V (2003) Encefalopatía hepática. Rev Col Gastroenterol 18(3):20–23 3. Conn H, Leevy C, Vlahcevic Z, Rodgers J, Maddrey W, Seeff L, Levy L (1977) Comparison of lactulose and neomycin in the treatment of chronic portal-systemic encephalopathy: a double blind controlled trial. Gastroenterology 72(4):573–583 4. Weissenborn K (2019) Hepatic encephalopathy: definition, clinical grading and diagnostic principles. Drugs 79(1):5–9 5. Schomerus H, Hamster W (1998) Neuropsychological aspects of portal-systemic encephalopathy. Metab Brain Dis 13(4):361–377 6. Amodio P, Montagnese S, Gatta A, Morgan MY (2004) Characteristics of minimal hepatic encephalopathy. Metab Brain Dis 19(3):253–267 7. Torres DS, Abrantes J, Brandão-Mello CE (2020) Cognitive and neurophysiological assessment of patients with minimal hepatic encephalopathy in brazil. Sci Rep 10(1):1–13 8. Weissenborn K, Giewekemeyer K, Heidenreich S, Bokemeyer M, Berding G, Ahl B (2005) Attention, memory, and cognitive function in hepatic encephalopathy. Metab Brain Dis 20(4):359–367 9. Weissenborn K, Ennen JC, Schomerus H, Rückert N, Hecker H (2001) Neuropsychological characterization of hepatic encephalopathy. J Hepatol 34(5):768–773 10. Hontañón V, González-García J, Rubio-Martín R, Díez C, Serrano-Morago L, Berenguer J, de Estudio ESCORIAL G et al (2022) Efecto de la erradicación del vhc sobre la frecuencia crítica de parpadeo en pacientes coinfectados por vih/vhc con cirrosis avanzada. Revista Clínica Española 11. Bressler SL, Ding M (2006) Event-related potentials. Wiley encyclopedia of biomedical engineering 12. Sokhadze EM, Casanova MF, Casanova EL, Lamina E, Kelly DP, Khachidze I (2017) Eventrelated potentials (erp) in cognitive neuroscience research and applications. NeuroRegulation 4(1):14–14 13. Kok A (2001) On the utility of p3 amplitude as a measure of processing capacity. Psychophysiology 38(3):557–577 14. Polich J (1987) Task difficulty, probability, and inter-stimulus interval as determinants of p300 from auditory stimuli. Electroencephalogr Clin Neurophysiol/Evoked Potentials Sect 68(4):311–320 15. Fuenmayor G, Villasmil Y (2008) La percepción, la atención y la memoria como procesos cognitivos utilizados para la comprensión textual. Revista de artes y humanidades UNICA 9(22):187–202 16. Sutton S, Braren M, Zubin J, John E (1965) Evoked-potential correlates of stimulus uncertainty. Science 150(3700):1187–1188 17. Li F, Yi C, Jiang Y, Liao Y, Si Y, Dai J, Yao D, Zhang Y, Xu P (2019) Different contexts in the oddball paradigm induce distinct brain networks in generating the p300. Front Hum Neurosci 12:520 18. Picton TW (1992) The p300 wave of the human event-related potential. J Clin Neurophysiol 9(4):456–479 19. Sur S, Sinha VK (2009) Event-related potential: an overview. Ind Psychiatry J 18(1):70 20. Hjorth B (1973) The physical significance of time domain descriptors in eeg analysis. Electroencephalogr Clin Neurophysiol 34(3):321–325 21. Nazarpour K, Mohseni HR, Hesse CW, Chambers JA, Sanei S (2008) A novel semiblind signal extraction approach for the removal of eye-blink artifact from eegs. EURASIP J Adv Signal Process 2008:1–12


22. Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, Yen NC, Tung CC, Liu HH (1998) The empirical mode decomposition and the hilbert spectrum for nonlinear and non-stationary time series analysis. Proc R Soc Lond Ser A Math Phys Eng Sci 454(1971):903–995 23. Blinowska K, Durka P (2006) Electroencephalography (EEG). Wiley encyclopedia of biomedical engineering. John Wiley & Sons, Inc. (Copyright & 2006) 24. Johnson R Jr (1993) On the neural generators of the p300 component of the event-related potential. Psychophysiology 30(1):90–97

The V-Band Substrate Integrated Waveguide Antenna for MM Wave Application Shailendra Kumar Sinha, Raghvendra Singh, and Himanshu Katiyar

Abstract To develop IoT-based devices for high-speed data transfer, SIW technology may be used. 5G is currently a promising technology for IoT. At mm-wave frequencies, SIW has an advantage over the traditional waveguide because of its qualities and simplicity of structure. In this article, a SIW antenna is designed on RT/duroid 5870 with a permittivity of 2.33 and a loss tangent of 0.0012. The operating frequency is 75 GHz. The antenna operates at 75 GHz with a gain of 15.0588 dB and a peak directivity of 16.6233 dB, with a radiation efficiency of 90.56%.

Keywords IoT · Bandwidth · SIW cavity · Slot antenna · Resonant frequency · Dual frequency

1 Introduction

A SIW is an advanced form of rectangular waveguide built by closely arranging metallized vias that connect two parallel metal plates. The SIW is an advanced technology for mm-wave applications and was introduced in 1994 [1]. Different types of substrate integrated waveguide devices are available, such as filters, couplers, antennas, and oscillators [2]. The important parameters for a SIW slot antenna to work at the resonant frequency are the antenna width (w), the radius (r), and the separation between two vias (s). The cutoff frequencies of a SIW antenna depend upon w (the width of the antenna). The separation between two vias and their diameter are the key features that differentiate a SIW antenna from a classical rectangular waveguide. The SIW can be converted into a multi-band antenna by cutting slots appropriately. Various slot antennas are available that operate in different frequency bands.

S. K. Sinha · R. Singh (B) Department of Electronics and Communication Engineering, Pranveer Singh Institute of Technology, Kanpur, India e-mail: [email protected] H. Katiyar Department of Electronics Engineering, Rajkiya Engineering College, Sonbhadra, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_21


Apart from its various advantages, a disadvantage is multi-directional radiation, which limits its applications. At high frequencies, microstrip and various other co-planar devices are not reliable, as they suffer from high losses [3]. To overcome these losses, SIW technology can be used, as it is reliable and consistent at high frequencies. In recent years, multi-band antennas have been considered good contenders for realizing the millimeter-wave applications of wireless technology [4, 5]. Impedance mismatching is one of the major concerns over a wide frequency range in the millimeter-wave band. In this paper, a tapered profile is used to improve the impedance matching [6–11]. The combination of SIW technology and a tapering profile makes the structure more reliable [12, 13]. The classical rectangular waveguide and SIW technology have comparable attributes except for the modes of propagation: the classical rectangular waveguide supports both Transverse Electric (TE) and Transverse Magnetic (TM) modes, while the SIW supports only Transverse Electric (TE) modes. The expressions for the cutoff frequency and the effective width of the SIW antenna are given by Eqs. (1) and (2) for the TE10 mode [14–20]:

$$f_c(TE_{10}) = \frac{c_0}{2\sqrt{\varepsilon_r}\,\left(w - d^2/0.95s\right)} \qquad (1)$$

$$w_{\mathrm{eff}} = w - \frac{d^2}{0.95\,s} \qquad (2)$$
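A small helper implementing Eqs. (1) and (2) is sketched below; the via diameter and pitch are taken from Table 1 and the permittivity from the abstract, but the SIW width is an assumed value used only to illustrate the calculation, not the reported design.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def siw_effective_width(w, d, s):
    """Eq. (2): effective width of the SIW, w_eff = w - d**2 / (0.95 * s)."""
    return w - d**2 / (0.95 * s)

def siw_cutoff_te10(w, d, s, eps_r):
    """Eq. (1): TE10 cutoff frequency of a dielectric-filled SIW."""
    return C0 / (2.0 * siw_effective_width(w, d, s) * math.sqrt(eps_r))

if __name__ == "__main__":
    # Hypothetical illustration: via diameter 1.5 mm and pitch 2.4 mm (Table 1),
    # permittivity 2.33 (RT/duroid 5870), and an ASSUMED SIW width of 2.5 mm.
    w, d, s, eps_r = 2.5e-3, 1.5e-3, 2.4e-3, 2.33
    print(f"w_eff     = {siw_effective_width(w, d, s) * 1e3:.3f} mm")
    print(f"f_c(TE10) = {siw_cutoff_te10(w, d, s, eps_r) / 1e9:.1f} GHz")
```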

In this paper, a V-band SIW antenna is designed that operates in the millimeter-wave band, which is used in radar and various other scientific applications. The designed V-band antenna is based on a SIW structure with slots on the upper plate. In a substrate integrated waveguide antenna, radiation losses occur if the separation between two vias is not proper, whereas in a conventional waveguide, radiation leakage occurs if the wall thickness is not greater than the skin depth.

2 Design of Suggested SIW Antenna

The SIW antenna is characterized by its via separation (s), via diameter (d), and SIW width (w). The separation between two vias is based on the structure of the waveguide and the characteristics of the substrate. In this paper, the SIW antenna is designed by placing the vias in place of the side walls of a rectangular waveguide. Figure 1 displays the structure of the designed SIW antenna. To remove the impedance mismatching problem, a tapering profile at the ports is used. The geometry of the proposed antenna is taken from [6]. SIW antennas can operate in more than one band depending on the slots. The slots are radiating elements etched in the metallic plate on top of the cavity of the substrate integrated waveguide (SIW). The operating frequency can vary depending upon the position of the slot on the antenna.


Fig. 1 a Two-dimensional view of the proposed SIW antenna. b Three-dimensional view of the proposed SIW antenna

Table 1 The dimensions of the V-band SIW antenna

Definitions                  Value (mm)
Width of substrate           16
Length of substrate          52
Slot1 width                  8
Slot1 length                 2
Slot2 width                  4
Slot2 length                 2
Via diameter                 1.5
Via height                   0.458
Distance between two vias    2.4

Figure 1a, b shows the two- and three-dimensional models of the proposed SIW antenna. The main parameters, such as the substrate height, length, and width, are listed in Table 1. The distance between two consecutive vias (s) in the proposed antenna is greater than the via diameter (d). The value of s should be less than twice the via diameter to avoid radiation losses, and the ratio of s to the cutoff wavelength should be larger than 0.05 to avoid over-perforation of the substrate.


3 Discussion and Results

3.1 S11 (Reflection Coefficient) of the Proposed SIW Antenna

The suggested antenna is simulated using HFSS software. The minimum value in Fig. 2 represents the single frequency band of the SIW antenna [12]. The insertion of the slot decreases the value of the reflection coefficient S11 [3]. Figure 2 displays the S11 (reflection coefficient) of the proposed antenna. The suggested SIW antenna resonates at 75 GHz. The recommended antenna's radiation efficiency is 90.58%. The reflection coefficient at 75 GHz is −19.48 dB. Figure 2 describes the changes in the reflection coefficient at various frequencies. The designed SIW antenna shows excellent performance at the 75 GHz frequency. The bandwidth of the proposed V-band antenna is around 15 GHz.

3.2 Directivity of the Proposed V-Band Substrate Integrated Waveguide Antenna

The directivity of an antenna describes its power pattern in a given direction. The proposed V-band substrate integrated waveguide antenna operates in two bands. Figure 3 describes the directivity of the proposed SIW antenna. The maximum directivity of the proposed SIW antenna is 16.62 (Fig. 4).

Fig. 2 Reflection coefficient proposed SIW Antenna


Fig. 3 The directivity of the proposed V-band substrate integrated waveguide antenna

Fig. 4 The gain of the proposed V-band substrate integrated waveguide antenna


3.3 H-Plane and E-Plane Pattern of the Proposed SIW Antenna

The radiation pattern of the proposed antenna illustrates its unidirectional property and shows how the antenna performs. Figure 5 shows that the proposed V-band slot antenna has unidirectional radiation.

Fig. 5 a E-plane pattern of suggested SIW Antenna. b H-plane of suggested SIW Antenna


4 Conclusion

The proposed antenna operates as a dual-band antenna. To achieve multi-band performance, slot antenna technology is used. The operating frequency of this antenna is 75 GHz. A reflection coefficient of −19.48 dB is achieved at 75 GHz with an efficiency of 91%. The peak gain and peak directivity of the proposed antenna are 15.06 dB and 16.62 dB, respectively. The directivity and gain can be improved by relocating the vias. The designed SIW antenna is a good technology for mm-wave and fifth-generation applications.

References 1. Shigeki F et al (1994) Waveguide line. Japan Patent 06(053):711 2. Deslandes D et al (2006) Accurate modeling, wave mechanisms, and design considerations of a substrate integrated waveguide. IEEE Trans Microwave Theory Techn 54(6) 3. Hamid N et al (2020) Dual band SIW slot antenna for 5G applications. ICATAS-MJJIC 4. AL-Fadhali N et al, SIW cavity backed frequency reconfiguration antenna for cognitive radio applies to internet of things applications. Int J RF Microwave Willey 5. Mukherjee S et al, Substrate integrated waveguide cavity backed slot antenna for dual-frequency application. In: Proceedings of the 44th European microwave conference 6. Nair PS et al (2008) Millimeter wave antenna for 5G applications. In: IEEE conference on antenna & wave propagation, 978-1-5386-7070-6/18 7. Ashraf N et al (2014) A DR loaded SIW antenna for 60 GHz high speed wireless communication systems. Int J Antennas Propag 8. Khalichi B et al (2015) Designing wide band tapered-slot antennas. IEEE Antennas Propag Mag 9. Ashraf et al (2013) Substrate integrated wave-guide antennas/array for 60 GHz wireless communication systems. In: RF and microwave conference (RFM), 2013 IEEE international, pp 56, 61 10. Hong W (2014) Study and prototyping of practically large scale mm Wave antenna system for 5G cellular devices. IEEE Commun Mag 52(9):63–69 11. Kordiboroujeni Z et al (2013) Designing the width of substrate integrated waveguide structures. IEEE Microw Wirel Compon Lett 23(10) 12. Deslandes D et al (2010) Design equations for tapered microstrip-to-substrate integrated waveguide transitions. In: Proceeding of the IEEE MTT-S international microwave symposium, pp 704707 13. Wang H (2010) Dielectric loaded SIW H-plane horn antennas. IEEE Trans Antennas Propag 58(3):640–647 14. Oraizi H (2003) Optimum design of tapered slot antenna profile. IEEE Trans Antennas Propag 51(8):1987–1995 15. Deslandes D et al (2010) Design equations for tapered microstrip-to-SIW transitions. In: Proceeding of the IEEE MTT-S international microwave symposium, pp 704–707 16. Xu F (2005) Guided-wave and leakage characteristics of SIW. IEEE Trans Microw Theory Tech 53(1):66–72 17. Bozzi M et al (2011) Review of Substrate-integrated waveguide circuits and antennas. IET Microw Antennas Propag 5(8):909–920


18. Chen X et al (2005) SIW linear phase filter. IEEE Microw. Wireless Compon Lett 15(11): 787–789. https://doi.org/10.1109/LMWC.2005.859021 19. Singh R et al (2019) Performance investigations of multi-resonance microstrip patch antenna. Soft Comput: Theories Appl 20. Singh R et al, In body communication: assessment of multiple homogeneous human tissue models on stacked meandered patch antenna. Int J Appl Evolut Comput (IJAEC)

The Influence of an Extended Optical Mode on the Performance of Microcavity Forced Oscillator H. Avalos-Sánchez, E. Y. Hernández-Méndez, E. Nieto-Ruiz, A. J. Carmona, M. A. Palomino-Ovando, M. Toledo-Solano, Khashayar Misaghian, Jocelyn Faubert, and J. Eduardo Lugo

Abstract In this work, we studied theoretically and experimentally the induction of electromagnetic forces in one-dimensional photonic crystals when light impinges with TM polarization. The photonic structure consists of an optical microcavity formed by two one-dimensional photonic crystals made of free-standing porous silicon, separated by a variable air gap. The working wavelength is that of the extended optical mode at 633 nm. We show experimental evidence of the induced force when the photonic structure undergoes forced oscillations. We measured peak displacements and velocities ranging from 1.97 up to 4.49 μm and from 0.93 up to 1.27 mm/s for external frequencies of 75 and 45 Hz, respectively. The light power was 45 mW with an angle of incidence of 5°.

Keywords Light propagation · Electromagnetic forces · Photonic oscillator · Extended mode

H. Avalos-Sánchez · E. Y. Hernández-Méndez · E. Nieto-Ruiz · A. J. Carmona · M. A. Palomino-Ovando · J. E. Lugo (B) Facultad de Ciencias Físico-Matemáticas, Benemérita Universidad Autónoma de Puebla, Av. San Claudio y Av. 18 sur, Col. San Manuel Ciudad Universitaria, Puebla Pue 72570, Mexico e-mail: [email protected] M. Toledo-Solano CONACYT-Facultad de Ciencias Físico-Matemáticas, Benemérita Universidad Autónoma de Puebla, Av. San Claudio y Av. 18 sur, Col. San Manuel Ciudad Universitaria, Puebla Pue 72570, Mexico e-mail: [email protected] K. Misaghian · J. Faubert · J. E. Lugo Faubert Lab, Ecole d’optométrie, Université de Montréal, Montreal, QC H3T1P1, Canada Sage-Sentinel Smart Solutions, 1919-1 Tancha, Onna-son Kunigami-gun, Okinawa 904-0495, Japan © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_22


1 Introduction

Nowadays, advances in optical microcavities (MCs) manifest themselves in both fundamental and applied research, thanks to their high quality factors and small mode volumes, enabling significant enhancement of light–matter interactions [1]. Among the advances in MCs, nonlinear photonics [2], cavity quantum electrodynamics [3], cavity optomechanics [4], and microlasers stand out [5]. Moreover, MCs can be configured for various chemical or biomolecular sensing applications [6, 7]. MCs have also enhanced light absorption in organic solar cells and widely improved device performance [8]. On the other hand, resonance effects of light confined in MCs can be used to increase the radiation pressure force for electromagnetic-wave-driven micro-motors [9]. The concept of radiation pressure has been used to manipulate micro-objects and biological organisms. Optical traps are based on the radiation pressure force resulting from photon momentum transfer; they include the early optical levitation configurations and optical tweezers [10, 11]. Recently, a direct-acting micromechanical resonator has been developed using only the force provided by radiation pressure [12]. In optomechanics, MCs usually use radiation pressure as the light–matter coupling mechanism and the optical gradient force as the driving force, which strongly depends on the magnitude of the resonator displacement and the laser intensity [13, 14]. Furthermore, radiation pressure in microcavity photonics has provided experimental evidence that this kind of structure is capable of self-oscillations and forced oscillations [15–17]. The present work is organized as follows: in the second section, we briefly describe the theory of inducing an electromagnetic force in an optical microcavity, and the dynamical models for the mechanical auto- and forced oscillations are analyzed. In the third section, we present the experimental details of the optical microcavity. Finally, in the fourth section, we present and discuss the results of the oscillation measurements of the optical microcavity at the extended-optical-mode wavelength of 633 nm.

2 Theoretical Formalism

We consider an optical MC composed of two porous silicon one-dimensional photonic crystals (PSi-1DPC) of 15 periods Λ = d1 + d2, occupying the regions 0 < z < 15Λ and 15Λ + air gap < z < air gap + 30Λ. Between the 1DPCs, a cavity is formed by an air gap (see Fig. 1). The microcavity is embedded in air, and a monochromatic electromagnetic plane wave (TM polarization) is incident on its surface at z = z0. The magnetic and electric fields can be expressed throughout the structure as [18]


Fig. 1 The distribution of the electric field in the optical MC of the extended optical mode at 633 nm with TM polarization. See text for explanations

$$\mathbf{E}_l = -\cos\theta_l\left(A_l e^{ik_{zl}(z-z_l)} + B_l e^{-ik_{zl}(z-z_l)}\right)e^{i(k_x x-\omega t)}\,\hat{x} + \sin\theta_l\left(A_l e^{ik_{zl}(z-z_l)} - B_l e^{-ik_{zl}(z-z_l)}\right)e^{i(k_x x-\omega t)}\,\hat{z},$$
$$\mathbf{H}_l = -\frac{k_l}{\omega\mu_0}\left(A_l e^{ik_{zl}(z-z_l)} - B_l e^{-ik_{zl}(z-z_l)}\right)e^{i(k_x x-\omega t)}\,\hat{y}, \qquad (1)$$

where $z_l < z < z_{l+1}$ for $l = 0, 1, \ldots, N$ ($N$ is the number of layers), $k_{zl} = k_l\cos\theta_l$, $k_x = k\sin\theta_0$, $k = \omega/c$, $k_l = \sqrt{\varepsilon_l}\,k$, $\omega$ is the frequency, $c$ the velocity of light in vacuum, $\varepsilon_l$ the relative permittivity of the $l$-th dielectric, and $\theta_l$ the angle of incidence on the $l$-th layer in the x–y plane.

2.1 Lorentz Force Densities

Specifically, for time-harmonic fields with time dependence $e^{-i\omega t}$, the time-averaged force density is given by [19]

$$\langle \mathbf{f} \rangle = \frac{1}{2}\,\mathrm{Re}\left[\, i\omega\varepsilon_0\mu_0(\varepsilon - 1)\,\mathbf{E}^{*} \times \mathbf{H} \right]. \qquad (2)$$

Evaluation of the time-averaged volume force densities of the TM mode is then performed according to Eq. (1). The transverse component is given by


$$\langle f_z \rangle = \frac{1}{2}\,\mathrm{Re}\left[\, i\omega\varepsilon_0\mu_0(\varepsilon - 1)\, E_x^{*} H_y \right]. \qquad (3)$$

The longitudinal component of the volume force density is written as

$$\langle f_x \rangle = \frac{1}{2}\,\mathrm{Re}\left[ -i\omega\varepsilon_0\mu_0(\varepsilon - 1)\, H_y E_z^{*} \right] \qquad (4)$$

and disappears because it is the real part of an imaginary quantity. The third component of the volume force density is $\langle f_y \rangle = 0$ for TM modes. The surface force densities [19] are computed using the time average of $F_{z,l}$,

$$F_{z,l} = \frac{1}{2}\varepsilon_0\left(\frac{\varepsilon_l^{2}}{\varepsilon_{l+1}} - 1\right)\left(E_{z,l}\right)^{2}, \qquad (5)$$

so that the total pressure is the sum of all transverse force densities; explicitly,

$$\langle F_{z,T} \rangle = \sum_{l=0}^{N} \langle F_{z,l} \rangle + \sum_{l=1}^{N} \int_{z_{l-1}}^{z_l} \langle f_z(z)\rangle \, dz, \qquad (6)$$

where $z_l$ denotes the boundary between layer $l$ and layer $l+1$ (see Fig. 1). Using Eq. (1), we can rewrite the transverse component of the volume force density as

$$\langle f_{z,l} \rangle = -\varepsilon_0(\varepsilon_l - 1)\, k_{zl}\, |A_l||B_l| \sin(\phi_{A_l} - \phi_{B_l} + 2\phi_l). \qquad (7)$$

The complex amplitudes $A_l$ and $B_l$, written in the form

$$A_l = |A_l| e^{i\phi_{A_l}}, \qquad B_l = |B_l| e^{i\phi_{B_l}}, \qquad (8)$$

together with their phases $\phi_{A_l}, \phi_{B_l}$, can be calculated using the well-known transfer matrix method [18]. The phase $\phi_l$ is given by

$$\phi_l = k_{zl}(z - z_l). \qquad (9)$$

The solution of the integral in Eq. (7) can be written in the following form:

$$\int_{z_{l-1}}^{z_l} \langle f_z \rangle\, dz = \frac{\varepsilon_0(\varepsilon_l - 1)|A_l||B_l|}{2}\left[\cos(\phi_{A_l} - \phi_{B_l}) - \cos(\phi_{A_l} - \phi_{B_l} - 2k_{zl} d_l)\right], \qquad (10)$$

where $d_l = z_l - z_{l-1}$. On the other hand, the net surface force acting on the interfaces between the crystal layers is obtained by temporally averaging $F_{z,l}$ on each of them and can be written as


$$\langle F_{z,l} \rangle = \frac{1}{4}\varepsilon_0\left(\frac{\varepsilon_l^{2}}{\varepsilon_{l+1}} - 1\right)\sin^{2}\theta_l \left[\,|A_l|^{2} + |B_l|^{2} - 2|A_l||B_l|\cos(\phi_{A_l} - \phi_{B_l})\right]. \qquad (11)$$

Finally, for lossless dielectrics, the force density only exists in the z-direction and is given by

$$\langle F_{z,T} \rangle = \sum_{l=0}^{N} \frac{\varepsilon_0}{4}\left(\frac{\varepsilon_l^{2}}{\varepsilon_{l+1}} - 1\right)\sin^{2}\theta_l \left[\,|A_l|^{2} + |B_l|^{2} - 2|A_l||B_l|\cos(\phi_{A_l} - \phi_{B_l})\right] + \sum_{l=1}^{N} \frac{\varepsilon_0(\varepsilon_l - 1)}{2}\,|A_l||B_l|\left[\cos(\phi_{A_l} - \phi_{B_l}) - \cos(\phi_{A_l} - \phi_{B_l} - 2k_{zl} d_l)\right], \qquad (12)$$

where $\varepsilon_0$ is the vacuum permittivity and $\theta_0$ is the angle of incidence in the air region.
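For readers who want to evaluate Eq. (12) numerically, the following sketch is a direct transcription of it; it assumes that the complex amplitudes A_l and B_l, the permittivities (including that of the exit medium), the transverse wavenumbers, thicknesses, and angles have already been obtained from a transfer-matrix solution of Eq. (1).

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def total_force_density(eps, A, B, kz, d, theta):
    """Time-averaged total transverse force density <F_zT> of Eq. (12).

    eps   : relative permittivities eps[0..N+1] (entry N+1 = exit medium)
    A, B  : complex forward/backward amplitudes A[0..N], B[0..N]
    kz, d : transverse wavenumbers and thicknesses of layers 0..N
    theta : propagation angle in each layer 0..N (radians)
    """
    eps = np.asarray(eps, float)
    kz, d, theta = np.asarray(kz, float), np.asarray(d, float), np.asarray(theta, float)
    A, B = np.asarray(A, complex), np.asarray(B, complex)
    phiA, phiB = np.angle(A), np.angle(B)
    absA, absB = np.abs(A), np.abs(B)
    N = len(A) - 1

    # surface term, l = 0..N
    surf = (EPS0 / 4.0) * (eps[:N + 1] ** 2 / eps[1:N + 2] - 1.0) * np.sin(theta) ** 2 * (
        absA ** 2 + absB ** 2 - 2.0 * absA * absB * np.cos(phiA - phiB))

    # volume term, l = 1..N (the integrated Eq. (10))
    vol = (EPS0 / 2.0) * (eps[1:N + 1] - 1.0) * absA[1:] * absB[1:] * (
        np.cos(phiA[1:] - phiB[1:]) -
        np.cos(phiA[1:] - phiB[1:] - 2.0 * kz[1:] * d[1:]))

    return surf.sum() + vol.sum()
```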

2.2 Mechanical Model for Mechanical Oscillations

The self-oscillations and forced oscillations in a photonic microcavity made of porous silicon one-dimensional photonic crystals (PSi-1DPC) with an air gap have already been discussed recently [15–17]. Under incident light, the microcavity has been treated as a simple oscillating system that can produce either self- or forced oscillations, much like a pendulum in a viscous frictional medium acted upon by a force of constant magnitude [15, 20, 21]. The differential equation of this dynamical system is

$$\ddot{z} + 2h\dot{z} + \omega_0^{2} z = a_{zT}, \qquad jT < t < (n_{\mathrm{light}} + j)T,$$
$$\ddot{z} + 2h\dot{z} + \omega_0^{2} z = 0, \qquad (n_{\mathrm{light}} + j)T < t < (j+1)T, \qquad j = 0, \ldots, m, \qquad (13)$$

where $a_{zT} = \langle F_{zT}\rangle A/m_{\mathrm{psi}}$, with $m_{\mathrm{psi}}$ and $A$ the mass and active surface area of the structure, $h$ a damping coefficient, $\omega_0$ the natural frequency of the system, $n_{\mathrm{light}}$ the duty cycle (the fraction of the period during which the light is on), which takes the value 0.5 in the auto-oscillation case, and $j+1$ the number of cycles during which the light is switched on and off. The period $T$ is related to the oscillator's frequency $\omega$ as usual by $T = 2\pi/\omega$, and to the natural frequency and damping coefficient by $\omega^{2} = \omega_0^{2} - h^{2}$. In Eq. (13), the known parameters of the experiment are $t$, $n_{\mathrm{light}}$, $A$, and $m_{\mathrm{psi}}$, while $a_{zT}$ can be determined from theoretical calculations and computer simulations. The parameters $\omega_0$ and $h$ can be obtained from the self-oscillation condition [15]. Under this condition, it is possible to write


$$2h\int_{0}^{T/2} \dot{z}^{2}\,dt = \int_{0}^{T/2} a_{zT}\,\dot{z}\,dt = \int_{0}^{T/2} a_{zT}\,dz, \qquad (14)$$

for $t \in [0, T/2]$. This condition can be approximated without knowing the exact solution of Eq. (13). Using the maximum values of the displacement $z_P$ and velocity $V_P$, the auto-oscillation condition reads

$$2hV_P^{2}\,T/2 = a_{zT}\, z_P. \qquad (15)$$
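A sketch of how Eq. (13) can be integrated numerically with a duty-cycled forcing is given below; the numerical values of a_zT, ω0, h, and n_light are placeholders of the right order of magnitude, not fitted parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_forced_oscillator(a_zT=7.24e-5, f_drive=45.0, n_light=0.5,
                               omega0=2 * np.pi * 50.0, h=30.0,
                               n_cycles=20, pts_per_cycle=400):
    """Integrate Eq. (13): z'' + 2 h z' + omega0**2 z = a_zT while the chopped
    light is on (a fraction n_light of each drive period T) and 0 while it is off."""
    T = 1.0 / f_drive

    def rhs(t, y):
        z, v = y
        light_on = (t % T) < n_light * T        # duty-cycled forcing
        a = a_zT if light_on else 0.0
        return [v, a - 2.0 * h * v - omega0**2 * z]

    t_end = n_cycles * T
    t_eval = np.linspace(0.0, t_end, n_cycles * pts_per_cycle)
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], t_eval=t_eval, max_step=T / 200)
    return sol.t, sol.y[0], sol.y[1]            # time, displacement, velocity
```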

3 Photonic Microcavity

The optical MC consisted of two PSi-1DPCs with an air gap (see Fig. 1). The 1DPCs were synthesized following the methodology reported in [15, 16]. The refractive indices are $\sqrt{\varepsilon_1} = n_1 = 1.37$ and $\sqrt{\varepsilon_2} = n_2 = 2.07$, and the thicknesses were determined from $n_1 d_1 = n_2 d_2 = \lambda_0/4$, where $\lambda_0 = 900$ nm. On the other hand, the optical MC was fabricated in a double juxtaposed cantilever configuration (see Fig. 2 of [17]). First, a PSi-1DPC was placed over a flat glass substrate. Second, to place a second PSi-1DPC, a spacer was used to compensate for the thickness of the already placed PSi-1DPC. Finally, the second PSi-1DPC was placed on top of the first to form the final structure. Thus, there were two PSi-1DPCs in mirror-like symmetry with a gap between them. Due to the type of

Fig. 2 Theoretical force density as a function of the air gap. Maxima and minima of the force can be observed when the air gap is equal to some multiple of the wavelength


configuration, the separation between both mirrors cannot be controlled. To estimate the parameter $a_{zT}$, we consider the surface area of the photonic device to be 3 mm² and the average mass $m_{\mathrm{psi}}$ to have a value of 2.67568×10⁻⁸ kg [15].

4 Results and Discussion

From Eq. (12), we calculate the electromagnetic force induced on the optical MC of Fig. 1 at an angle of incidence $\theta_0$ of 5°. We take the average between the maximum and minimum values (see Fig. 2) for $\langle F_{zT}\rangle$, which equals 0.645×10⁻⁶ N/m². Since A = 3 mm² and $m_{\mathrm{psi}}$ = 2.675×10⁻⁸ kg, the value of $a_{zT}$ is 7.24×10⁻⁵ m/s². Figure 3 shows the optical MC's transmittance spectra obtained with the transfer matrix method. The refractive indices that provide the best fit of the position of the gap and the transmittance peaks are given in the caption of this figure. It is well known that there are two primary sources of photon loss in porous silicon multilayer structures: photon absorption and Rayleigh scattering. In the spectra, it can be seen that the modes with wavelengths below the gap are suppressed by absorption, while for the modes with wavelengths above it, the loss is related to Rayleigh scattering of light by the microscopic disorder of porous silicon [22, 23]. The red arrow indicates the transmittance of the extended optical mode at 633 nm. The distribution of the electric field at this wavelength is shown in Fig. 1. The experimental setup for the oscillation measurements of the optical MC is shown in Fig. 4. Figure 5a and b shows the experimental results of the oscillation measurements of the optical MC at the extended optical mode wavelength of 633 nm.
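A normal-incidence characteristic-matrix sketch of the kind of transmittance calculation behind Fig. 3 follows; the indices and quarter-wave thicknesses are taken from the figure caption, while the air-gap width and the use of normal incidence (instead of TM polarization at 5°) are simplifying assumptions, and plotting is omitted.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2.0 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.0, n_out=1.0):
    """Intensity transmittance of a stack of (n, d) layers between air half-spaces."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out])
    t = 2.0 * n_in / (n_in * B + C)
    return (n_out / n_in) * abs(t) ** 2

# Microcavity of Figs. 1 and 3: two 15-period quarter-wave mirrors separated by an air gap.
n1, d1 = 1.37, 163.7e-9
n2, d2 = 2.07, 108.3e-9
air_gap = 1.0e-6                         # assumed gap width (not controlled in the experiment)
mirror = [(n1, d1), (n2, d2)] * 15
stack = mirror + [(1.0, air_gap)] + mirror

wavelengths = np.linspace(500e-9, 1400e-9, 2000)
T = [transmittance(stack, lam) for lam in wavelengths]
```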

Fig. 3 The transmittance spectra of the optical microcavity. The following parameters have been used: n 1 = 1.37, d1 = 163.7 nm and n 2 = 2.07, d2 = 108.3 nm, respectively. The red arrow indicates the transmittance of the extended optical mode at 633 nm


Fig. 4 Experimental setup for the forced oscillations measurements, where the He-Ne laser, mechanical chopper, the optical microcavity, the vibrometer laser, the vibrometer interface, and the computer are observed

In Fig. 5a, we observe the experimental velocity time series. Figure 5b shows the power spectral density (PSD) of the velocity time series; it is evident that the optical MC vibrates mainly at 45 Hz, although the appearance of higher harmonics is also clear. In the same way, Fig. 6a and b shows the experimental results for a driving frequency of 75 Hz, where higher harmonics appear as well. The light power was 45 mW with an angle of incidence of 5°. The size of the laser spot was the same in both cases, approximately 3 mm². We found the experimental values $z_P$ = 4.49 μm, $V_P$ = 1.27 mm/s and $z_P$ = 1.97 μm, $V_P$ = 0.93 mm/s for 45 Hz and 75 Hz, respectively. However, the auto-oscillation condition (15) was not met. We simulated Eq. (13) using Mathematica, and the best fit we found uses $\omega_0/h \approx 11$; still, the parameter $h$ needs to be multiplied by 16.4 instead of 2 in the descending part, and we used the experimental value $n_{\mathrm{light}}$ = 0.88. We added zero-mean random noise to the simulated velocity time series to account for uncontrollable vibration effects.


Fig. 5 Forced oscillation experimental and theoretical results for the input frequency of 45 Hz. a Experimental velocity time series. b Power spectral density for the velocity. c Theoretical velocity time series. d Theoretical power spectral density

Fig. 6 Forced oscillation experimental and theoretical results for the input frequency of 75 Hz. a Experimental velocity time series. b Power spectral density for the velocity. c Theoretical velocity time series. d Theoretical power spectral density


5 Conclusions

In this work, we used light to induce electromagnetic forces in a photonic crystal structure using an extended photonic mode. In the past, the electromagnetic force could be increased by working at a resonance state (localized photonic mode); in that case, the force density was 1000 times higher than in the present case. However, when the function generator regulated the light pumping, the structure remained stable for several hours, as in our previous experiments. We did not observe any natural self-oscillations, and a particular configuration needs to be provided, as before. These findings expand on the results of our previous work and prove that other photonic modes can be used to generate electromagnetic forces in these structures. In the future, we will explore the use of evanescent photonic modes and new materials to create this kind of force.

Acknowledgements This work was supported by CONACYT (Cátedras Project No. 3208) and Grant A1-S-38743.

References 1. Xiao YF, Zou CL, Gong Q, Yang L (eds) (2020) Ultra-high-Q optical microcavities. World Scientific, Singapore 2. Lin G, Coillet A, Chembo YK (2017) Adv Opt Photon 9:828 3. Lu YK et al (2018) Spontaneous T-symmetry breaking and exceptional points in cavity quantum electrodynamics systems. Sci Bull 63:1096 4. Aspelmeyer M, Kippenberg TJ, Marquardt F (2014) Rev Mod Phys 86:1391 5. He L, Özdemir SK, Yang L (2013) Laser Photon Rev 7:60 6. Zhu J, Özdemir SK, Xiao YF, Li L, He L, Chen D-R, Yang L (2010) Nat Photon 4:46 7. Zhu J, Özdemir SK, He L, Chen DR, Yang L (2011) Opt Express 19:16195 8. Wang Y, Shen P, Liu J, Xue Y, Wang Y, Yao M, Shen L (2019) Solar RRL 3(8):1900181 9. Li JM, Dong TL, Shan GJ (2009) Prog Electromagn Res 10:59 10. Ashkin A (1970) Phys Rev Lett 24:156 11. Ashkin A, Dziedzic JM (1987) Science 235:1517 12. Boales JA, Mateen F, Mohanty P (2017) Sci Rep 7(1):1 13. Metzger CH, Karrai K (2004) Cavity cooling of a microlever. Nature 432(7020):1002–1005 14. Wilson-Rae I, Nooshi N, Zwerger W, Kippenberg TJ (2007) Phys Rev Lett 99:093901 15. Lugo JE, Doti R, Sánchez N, de la Mora MB, del Río JA, Faubert J (2014) Sci Rep 4:3705 16. Lugo JE, Doti R, Sanchez N, Faubert J (2014) J Nanophoton 8(1):083071 17. Sánchez-Castro N, Palomino-Ovando MA, Estrada-Wiese D, Valladares NX, del Río JA, De la Mora MB, Lugo JE (2018) Materials 11(5):854 18. See, e.g., Yen P (1988) Optical waves in layered media. Wiley, New York 19. Mizrahi A, Schachter L (2006) Phys Rev E 74:036504 20. Vitt AA, Khakin SE, Andronov AA (2013) Elsevier, Ed. New York 21. Jenkins A (2013) Phys Rep 525:2 22. Toledo-Solano M, Rubo YG, Arenas M, del Río JA (2005) Physica Status Solidi (c) 2:3544 (2005) 23. Rubo YG, Toledo-Solano M, del Río JA (2005) Physica Status Solidi (a) 202:2626

Synthesis and Characterization of Fe3 O4 @SiO2 Core/shell Nanocomposite Films A. J. Carmona Carmona , H. Avalos-Sánchez, E. Y. Hernández-Méndez, M. A. Palomino-Ovando, K. Misaghian, J. E. Lugo, J. J. Gervacio-Arciniega, and M. Toledo-Solano

Abstract Fe3O4@SiO2 core/shell nanocomposite was prepared by the sol–gel method using TEOS as the silica precursor. The colloidal spheres were concentrated on the surface of a glass substrate during evaporation of water in air at 80 °C; they formed hexagonal compact layers and built up an opaline film. The core/shell nanocomposite was investigated by different characterization methods (XRD, AFM, SEM, and FTIR). X-ray diffraction (XRD) identified the crystal structure of Fe3O4 and showed an amorphous silica layer. A reduction in magnetization was observed when comparing the Fe3O4 nanoparticles with the Fe3O4@SiO2 core/shell nanocomposite; this reduction could be due to the coating process with an antimagnetic amorphous inorganic silica layer. The core size was found to have an average value of 11 nm, identified by atomic force microscopy (AFM). The near-spherical shape and particle size of Fe3O4@SiO2 were revealed by scanning electron microscopy (SEM). The Fe3O4@SiO2 core/shell nanocomposite showed excellent dispersion and magnetic behavior close to that of superparamagnetic materials. Their intriguing characteristics and how they react in the presence of

A. J. Carmona Carmona (B) · H. Avalos-Sánchez · E. Y. Hernández-Méndez · M. A. Palomino-Ovando · J. E. Lugo Benemérita Universidad Autónoma de Puebla, Facultad de Ciencias Físico-Matemáticas, Puebla, México e-mail: [email protected] J. E. Lugo e-mail: [email protected] K. Misaghian · J. E. Lugo Faubert Lab, Université de Montréal, École d'optométrie, 3744 Jean Brillant, Montréal, Québec H3T 1P1, Canada Sage-Sentinel Smart Solutions, 1919-1 Tancha, Onna-Son Kunigami-Gun, Okinawa 904-0495, Japan J. J. Gervacio-Arciniega · M. Toledo-Solano CONACYT-Benemérita, Universidad Autónoma de Puebla. Facultad de Ciencias Físico-Matemáticas, Puebla, México e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_23


external magnetic fields make them potential contenders for various applications, including data storage, sensing, and biomedicine.

Keywords Composites · Fe3O4@SiO2 · Core–shell

1 Introduction

Among the various nanostructured materials, hybrid nanomaterials based on magnetic NPs have attracted growing interest due to their unique magnetic properties. The incorporation of magnetic Fe3O4 NPs in a matrix of SiO2 spheres can block the aggregation of the NPs during the synthesis process and increase the catalysts' durability [1, 2]. Furthermore, using these structures as catalysts has excellent advantages, such as a high surface area and a well-defined pore size, which increase their photocatalytic activity [3]. The construction of these materials by the vertical co-deposition technique allows the SiO2 sphere matrix to adopt a face-centered cubic (FCC) packing with voids of ~26% of its volume [4, 5], through which the magnetic NPs are easily infiltrated [5]. Iron oxide NPs have been extensively studied for decades due to their multifunctional properties, such as small size, high magnetism, and low toxicity [6]. Amorphous silicas play an essential role in many fields, since siliceous materials are used as adsorbents, catalysts, nanomaterial supports, stationary chromatographic phases, in the synthesis of ultrafiltration membranes, and in other large-area applications related to porosity [7]. This paper reports the fabrication process of ordered magnetic composites obtained by encapsulating Fe3O4 NPs inside SiO2 opal microspheres; these structures were produced synchronously with the deposition of the colloids. The fabricated hybrid colloidal crystals were studied for their photocatalytic properties at two wavelengths ranging from the UV to the visible range, and the NPs were studied in a low-oxygen-concentration environment to avoid oxidation due to their high reactivity. This study aims to fabricate Fe3O4@SiO2 nanoparticles with a core/shell structure using tetraethyl orthosilicate (TEOS) as a precursor, which is often used in industry to produce SiO2 with higher adaptability. A novel magnetic core/shell nanocomposite was synthesized using TEOS via the sol–gel method and characterized by Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray diffraction (XRD), and atomic force microscopy (AFM). Their interesting properties and behavior under an applied external field make them potential candidates for various applications, such as data storage, sensing, and biomedicine. Focusing on the biomedical aspect, some new approaches worth mentioning are cell manipulation and separation, contrast-enhancing agents for magnetic resonance imaging, and magnetomechanically induced cell death.


2 Experimental

2.1 Materials

Iron (III) chloride [FeCl3 (97%), Aldrich©], iron (II) chloride [FeCl2 (99%), Aldrich©], deoxygenated deionized water, sodium hydroxide [NaOH (99%), Aldrich©], tetramethylammonium hydroxide [TMAOH (98%), Aldrich©], ammonium hydroxide [NH4OH (28%), J.T. Baker©], ethanol [C2H5OH (99.9%), J.T. Baker©], tetraethyl orthosilicate [TEOS (98%), Aldrich©]. All of the chemicals were analytical grade and used without further purification.

2.2 Synthesis of Fe3O4 NPs

The magnetite NPs were synthesized according to the co-precipitation method reported by Hariani et al. [8]. Starting from ferric and ferrous salts, 16.25 g of FeCl3 (97%, Aldrich©) and 6.35 g of FeCl2 (99%, Aldrich©) were dissolved in 200 ml of deoxygenated deionized water under an N2 atmosphere with stirring at 2000 rpm. After 60 min, 16.00 g of NaOH (99%, Aldrich©) were added, and the solution was heated to 30 °C with vigorous stirring. Subsequently, the reaction system was kept at 70 °C for 5 h until precipitation was complete. Finally, the system was allowed to cool to room temperature. The precipitates were separated with a permanent magnet, washed three times with deoxygenated deionized water, given a final wash in acetone, and dried at 70 °C in a drying oven. The chemical reaction can be expressed as in Eq. (1):

Fe²⁺ + 2Fe³⁺ + 8OH⁻ → Fe3O4 + 4H2O

(1)

2.3 Core/Shell Fe3O4@SiO2 Preparation

2 g of the previously prepared iron oxide magnetic nanoparticles were added to 20 ml of deoxygenated deionized water, following the Stöber–Fink–Bohn method [9]. Two solutions were prepared: the first comprised 20 ml of NH4OH (28%, J.T. Baker©), 20 ml of ethanol (99.9%, J.T. Baker©), and 20 ml of deoxygenated deionized water; the second consisted of 5 ml of tetraethyl orthosilicate (TEOS, 98%, Aldrich©) and 20 ml of ethanol (J.T. Baker, 99%). Both solutions were kept under stirring for 5 min and subsequently mixed, remaining under vigorous stirring for 1 h at room


temperature. The core/shell Fe3O4@SiO2 particles were separated from the suspension by centrifugation, washed three times with deoxygenated deionized water, and then re-dispersed in deoxygenated deionized water at a concentration of 0.002 M. The chemical reaction was as follows (2):

Si(OCH2CH3)4 + 2H2O → SiO2 + 4CH3CH2OH

(2)

2.4 Synthesis of the Films of the Fe3O4@SiO2 Composites

The films of the Fe3O4@SiO2 composites were synthesized following the methodology reported by Carmona-Carmona et al. [10]. The sample was prepared as follows: a glass substrate (26 × 9 mm), treated with (1:3 v/v) H2O2:H2SO4 for 30 min to create a clean, hydrophilic surface, was vertically immersed in a 30-ml container holding 14 ml of the colloidal suspension of Fe3O4@SiO2 composites (0.2323 g of spheres) at a temperature of 80 °C for 8 h in a drying oven. After evaporation of the water, the Fe3O4@SiO2 colloids were neatly packed into a colloidal crystal under the induction of capillary force (Fig. 1).

Fig. 1 Schematic illustration of the fabrication process of Fe3O4@SiO2 composites, a figure inspired by Scheme 1 of ref [11]


2.5 Characterization Techniques

Structural and crystalline characterization was performed with X-ray diffraction (XRD) data obtained on a Bruker© D8 Discover 20 kV diffractometer using Cu Kα radiation (λ = 1.54056 Å). A JEOL© JSM-6610LV scanning electron microscope (SEM) and a Park Systems XE-7 atomic force microscope (AFM) were used for morphological characterization. The functional groups were identified by room-temperature micro-Raman scattering measured with a Horiba Jobin Yvon LabRAM© HR micro-Raman spectrometer based on an Olympus© BX41 microscope, using the 632.8 nm line of a He–Ne laser.

3 Results and Discussions

Figure 2 shows the SEM micrographs of the Fe3O4 NPs and the Fe3O4@SiO2 composites. The Fe3O4@SiO2 composites (Fig. 2) have a hemispherical morphology. One can see that the structure of the basal plane of the film resembles a compact hexagonal arrangement (FCC), in which the rows of Fe3O4@SiO2 spheres are parallel to the growth direction of the film, as in the (111) plane. These have an average diameter of 198 nm (standard deviation of 11 nm and interstice size of approximately 45 nm). According to reference [11], the methodology assumes that the Fe3O4 NPs are inside and the SiO2 outside, and we have been careful to apply this methodology correctly. However, in future work, we will complete this analysis using TEM.

Fig. 2 SEM micrographs of Fe3O4@SiO2 composites. The size distribution of the Fe3O4@SiO2 composites is also shown


Fig. 3 AFM micrographs of Fe3 O4 Nanoparticles. The size distribution of the Fe3 O4 NPs is also shown

During the formation of the Fe3O4@SiO2 composites, hydrolysis and condensation reactions occur due to the presence of water, which reacts with the silicon precursor (TEOS), resulting in OH⁻ groups on the surface, also called silanols (Si–OH). On the other hand, the substrate surface (which was treated with a (1:3 v/v) H2O2:H2SO4 solution before the film deposition process) acquired a hydrophilic character when surface OH groups were formed from the hydrogen peroxide. Therefore, the hydrophilic substrate and the suspension of Fe3O4@SiO2 composite spheres interact through a nucleophilic attack of the oxygen of the Fe3O4@SiO2 particles on the silicon atoms of the substrate, becoming fixed on the surface during the co-assembly process. The relevant reaction is illustrated in Eq. (3):

Si–O–H + H–O–Si → Si–O–Si + H2O

(3)

From the XRD diffractograms (Fig. 4), the SiO2 shell presents a broad peak between 20° and 30°, indicating that the SiO2 shell is amorphous; the products obtained by alkoxide hydrolysis are usually amorphous or poorly crystallized [12]. This result agrees with what has already been reported in the literature [13]. In the same way, the XRD diffractogram corresponding to the Fe3O4 NPs confirms their crystallinity, in addition to the Fe3O4 phase (Fig. 4). The characteristic peaks of the Fe3O4 NPs correspond very well to the standard magnetite card (JCPDS 19-0629) [14]. The diffraction patterns of the composites are inherent characteristics of the patterns corresponding to the SiO2 microspheres and the Fe3O4 NPs. Moreover, the interaction between molecules does not affect the positions and shapes of the peaks compared with the SiO2 and pure Fe3O4, as in Ranfang Zuo et al. [15]. The size of the crystals within the Fe3O4 NPs was determined from the XRD diffraction pattern of the Fe3O4 sample and the Debye–Scherrer formula [16], obtaining an average size of 11 nm (Eq. 4):

$$D = \frac{K\lambda}{B_{hkl}\cos\theta} \qquad (4)$$
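Equation (4) reduces to a one-line calculation; the shape factor K ≈ 0.9 and the example peak width are illustrative (a hypothetical 0.75°-wide reflection near the magnetite (311) position would give roughly the 11 nm quoted above), while the wavelength is the Cu Kα value from Sect. 2.5.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154056, K=0.9):
    """Eq. (4): crystallite size D = K * lambda / (B_hkl * cos(theta)),
    with the peak width B_hkl (FWHM) converted from degrees to radians."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical example: a 0.75-degree-wide reflection near 2theta = 35.5 degrees.
print(f"D = {scherrer_size(0.75, 35.5):.1f} nm")   # ~11 nm
```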


Fig. 4 XRD diffractograms of the SiO2 shell, Fe3O4 NPs, and Fe3O4@SiO2 core/shell in the range from 20° to 80°

The micro-Raman spectra corresponding to Fe3 O4 @SiO2 composites (Fig. 5) present peaks similar to those corresponding to the SiO2 sample, where the peaks centered at 432 and 713 cm−1 are the most defined. The peak centered at 713 cm−1 shows slight shifting toward lower wavenumbers as the concentration of the NPs increases in the matrix of the SiO2 shell, varying from 713 to 701 cm−1 . A band appears centered at 663 cm−1 , which is attributed to the interaction between the Fe3 O4 NPs with the SiO2 microspheres. The surface HO− of the SiO2 shell interacts with the oxygens of N(CH3 )4 OH through a nucleophilic attack of the water, breaking the Fe–O–N bond. This protonation forms a hydrogen bond with the oxygen linked to the Fe of the Fe3 O4 molecule, which causes it to separate in the form of H2 O, and therefore the Fe will form a bond with the oxygen of the SiO2 molecule generating a Fe–O–Si bond [17] (Fig. 5). The FTIR spectrum of Fe3 O4 @SiO2 film (Fig. 6) shows the stretch vibration mode of Si–O–Si bonds in the region between 1070 and 1080 cm-1 and an absorption band of 3650 to 3200 cm−1 , which corresponds to the stretching mode of the O–H bond (Fig. 6). In the case of Fe3 O4 NPs, their spectrum shows a faint absorption band


Fig. 5 Micro-Raman spectra of Fe3O4 NPs, SiO2, and Fe3O4@SiO2 core/shell (Raman shift 300–800 cm⁻¹, intensity in a.u.)

close to 500 cm⁻¹; this band is related to the stretching mode of the Fe–O bonds [18]. It is also possible to observe an absorption band around 1490 cm⁻¹, which is assigned to the Fe–O stretch mode, corresponding to the Fe3O4@SiO2 composites. As with the micro-Raman spectra, these bands are inherent characteristics of both the Fe3O4@SiO2 composites and the Fe3O4 NPs. The 1070–1080 cm⁻¹ band predominates in the spectra; this can be explained by the volume difference between the SiO2 and the Fe3O4 NPs. The interaction between SiO2 and Fe3O4 NPs can be observed in the overlap of the stretching vibration bands of the Si–O–Si bond in the region 1070 to 1080 cm⁻¹ and the Fe–O–Si stretching vibration in the region 1050 to 1250 cm⁻¹ [19]. The peaks at 2921 cm⁻¹ and 2848 cm⁻¹ can be attributed to –CH3 bonds (stretching vibration), corresponding to the N(CH3)4OH used as a surfactant for the Fe3O4 NPs. Besides, the appearance of the band at ∼3436 cm⁻¹ is attributed to the stretching vibration of N–H [20]. The optical absorption spectra are obtained by plotting [F(R)hν]² as a function of the photon energy (hν), where F(R) is the Kubelka–Munk function [21], F(R) = (1 − R)²/2R, with R the diffuse reflectance. In Fig. 7, we can see how

Fig. 6 FTIR spectra of the Fe3O4 NPs, SiO2, and the Fe3O4@SiO2 composites, in the interval of 500 to 4000 cm⁻¹ (transmittance in %)

the energy of the forbidden band (Eg) was estimated by extrapolating the absorption edge to the photon-energy axis through a linear fit [22]; the value of Eg for the direct transition of SiO2 turned out to be 3.8 eV. This agrees with the Eg of 3.8 eV reported for SiO2 nanostructures [10]. The mechanism by which this value of Eg is reached is the conversion of silanol groups ODC(I) into ODC(II), which are related to optical absorption bands of ~5 eV [23]. Likewise, it was possible to reproduce the Eg value of the Fe3O4 NPs, which has been reported at 2.1 eV. Figure 8 shows how the Eg value varies from 3.8 eV (SiO2 sample) to 2.1 eV (Fe3O4 NPs sample) as the concentration of NPs in SC1 to SC5 increases; the Eg value decreases until it reaches a value close to that of Fe3O4. The results obtained are closely associated with the size quantization effect rather than with physical properties such as the surface area, which significantly modifies the energy level of the localized states, similar to what happens in [24]. The increase in the fraction of smaller particles shifts the position of the forbidden band, resulting in a blueshift of Eg compared with bulk particles.
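A sketch of the Kubelka–Munk/Tauc procedure just described: compute F(R) from the diffuse reflectance, form [F(R)hν]², fit the linear absorption edge, and read Eg from the intercept with the energy axis. The edge-fitting window and the data layout are illustrative choices.

```python
import numpy as np

H_EV = 4.135667696e-15  # Planck constant, eV*s
C = 2.99792458e8        # speed of light, m/s

def tauc_direct_gap(wavelength_nm, reflectance, edge_window):
    """Estimate a direct band gap (eV) from diffuse reflectance data.

    wavelength_nm, reflectance : measured spectrum (R as a fraction, 0 < R < 1)
    edge_window : (E_min, E_max) in eV over which the absorption edge is linear
    """
    E = H_EV * C / (np.asarray(wavelength_nm) * 1e-9)       # photon energy h*nu in eV
    F = (1.0 - reflectance) ** 2 / (2.0 * reflectance)      # Kubelka-Munk function F(R)
    y = (F * E) ** 2                                        # [F(R) h nu]^2 for a direct transition
    mask = (E >= edge_window[0]) & (E <= edge_window[1])
    slope, intercept = np.polyfit(E[mask], y[mask], 1)      # linear fit of the edge
    return -intercept / slope                               # extrapolated intercept with the energy axis

# Usage (hypothetical data arrays): Eg = tauc_direct_gap(wl_nm, R, edge_window=(2.2, 2.8))
```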


Fig. 7 Graphs of F(R) versus energy for the SiO2, Fe3O4, and composite samples; the gap value varies from 3.8 eV for SiO2 to 2.1 eV for Fe3O4

4 Conclusions

Fe3O4@SiO2 microsphere composite films with an opaline structure have been synthesized in this work using the lateral co-assembly technique. The relationship between the Fe3O4 NPs and the SiO2 opal microspheres affects the quality of the hybrid colloidal crystal and its optical properties. It was observed that the use of TEOS as a precursor for the silica shells leads to a decrease in the magnetization of the core/shell nanocomposite: the magnetization of Fe3O4@SiO2 (20 emu/g) was lower than that of pure Fe3O4 (70 emu/g). These results were obtained by vibrating sample magnetometry (VSM). The reason for this decrease is that the coating is an antimagnetic amorphous silica layer. SEM images revealed a nearly spherical shape of the synthesized core/shell nanocomposite. AFM analysis showed that the Fe3O4 nanoparticles were 23.5 nm in size, while FTIR spectra were used to define the type of compound created and to show the functional groups of the Fe3O4@SiO2 core/shell nanocomposite and of pure Fe3O4. The patterns of each synthesized material were identified by XRD: Fe3O4 showed the cubic spinel structure, and the SiO2 of Fe3O4@SiO2 was amorphous. Finally, these magnetic nanostructures hold great promise for biomedical applications. These nanoarchitectures demonstrated considerable advantages compared with typical superparamagnetic nanoparticles. However, much work remains to be done in this area, such as improving the magnetic nanostructures and understanding the interaction between biological systems and nanoarchitectures. Therefore, it is expected that this theme will continue to be addressed by several authors over time, possibly leading to the development of new diagnostic and therapeutic techniques that can improve the quality of life of patients [25].


Acknowledgements This work was supported by CONACYT (Cátedras Project No. 3208) and Grant A1-S-38743.

References 1. Cheng W, Tang K, Qi Y, Sheng J, Liu Z (2010) One-step synthesis of superparamagnetic monodisperse porous Fe3 O4 hollow and core-shell spheres. J Mater Chem 20(9):1799–1805. https://doi.org/10.1039/b919164j 2. Liu Z, Bai H, Sun DD (2011) Facile fabrication of porous chitosan/TiO2/Fe3 O4 microspheres with multifunction for water purifications. New J Chem 35(1):137–140. https://doi.org/10. 1039/c0nj00593b 3. Shokouhimehr M, Piao Y, Kim J, Jang Y, Hyeon T (2007) A magnetically recyclable nanocomposite catalyst for olefin epoxidation. Angew Chemie 119(37):7169–7173. https://doi.org/10. 1002/ange.200702386 4. Li X, Tao F, Jiang Y, Xu Z (2007) 3-D ordered macroporous cuprous oxide: Fabrication, optical, and photoelectrochemical properties. J Colloid Interface Sci 308(2):460–465. https://doi.org/ 10.1016/j.jcis.2006.12.044 5. Cong H, Cao W (2004) Macroporous Au materials prepared from colloidal crystals as templates. J Colloid Interface Sci 278(2):423–427. https://doi.org/10.1016/j.jcis.2004.06.011 6. Chang PR, Yu J, Ma X, Anderson DP (2011) Polysaccharides as stabilizers for the synthesis of magnetic nanoparticles. Carbohydr Polym 83(2):640–644. https://doi.org/10.1016/j.carbpol. 2010.08.027 7. Roque-Malherbe RMA (2018) Adsorption and diffusion in nanoporous materials. CRC Press 8. Hariani PL, Faizal M, Ridwan R, Marsi M, Setiabudidaya D (2013) Synthesis and properties of Fe3 O4 nanoparticles by Co-precipitation method to removal procion dye. Int J Environ Sci Dev 4(3):336–340. https://doi.org/10.7763/IJESD.2013.V4.366 9. Stöber W, Fink A, Bohn E (1968) Controlled growth of monodisperse silica spheres in the micron size range. J Colloid Interface Sci 26(1):62–69. https://doi.org/10.1016/0021-979 7(68)90272-5 10. Carmona-Carmona AJ, Palomino-Ovando MA, Hernández-Cristobal O, Sánchez-Mora E, Toledo-Solano M (2016) Synthesis and characterization of magnetic opal/Fe3 O4 colloidal crystal. J Cryst Growth. https://doi.org/10.1016/j.jcrysgro.2016.12.105 11. Du GH, Liu ZL, Xia X, Chu Q, Zhang SM (2006) Characterization and application of Fe3 O4 / SiO2 nanocomposites. J Sol-Gel Sci Technol 39(3):285–291. https://doi.org/10.1007/s10971006-7780-5 12. Cong H, Yu B (2011) Fabrication of superparamagnetic macroporous Fe3 O4 and its derivates using colloidal crystals as templates. J Colloid Interface Sci 353(1):131–136. https://doi.org/ 10.1016/j.jcis.2010.09.040 13. Ruso JM, Gravina AN, D’Elía NL, Messina PV (2013) Highly efficient photoluminescence of SiO2 and Ce-SiO2 microfibres and microspheres. Dalton Trans 42:7991–8000. https://doi.org/ 10.1039/c3dt32936d 14. Bachan N, Asha A, Jeyarani WJ, Kumar DA, Shyla JM (2015) A comparative investigation on the structural, optical and electrical properties of SIO2 –FE3 O4 core-shell nanostructures with their single components. Acta Metall Sin (English Lett) 28(11):1317–1325. https://doi.org/10. 1007/s40195-015-0328-3 15. Yang S et al (2009) Decolorization of methylene blue by heterogeneous Fenton reaction using Fe3 –xTixO4 (0 ≤ x ≤ 0.78) at neutral pH values. Appl Catal B Environ 89(3–4):527–535. https://doi.org/10.1016/j.apcatb.2009.01.012 16. Zuo R et al (2014) Photocatalytic degradation of methylene blue using TiO2 Impregnated Diatomite. Adv Mater Sci Eng 2014:1–7. https://doi.org/10.1155/2014/170148


17. Awwad AM, Salem NM (2013) A green and facile approach for synthesis of magnetite nanoparticles. Nanosci Nanotechnol 2(6):208–213. https://doi.org/10.5923/j.nn.20120206.09 18. Chourpa I et al (2005) Molecular composition of iron oxide nanoparticles, precursors for magnetic drug targeting, as characterized by confocal Raman microspectroscopy. Analyst 130(10):1395–1403. https://doi.org/10.1039/b419004a 19. Luong NH, Phu ND, Hai NH, Dieu Thuy NT (2011) Surface modification of SiO2 -coated FePt nanoparticles with Amino Groups. e-J Surf Sci Nanotechnol 9:536–538. https://doi.org/ 10.1380/ejssnt.2011.536 20. Van Quy D et al (2013) Synthesis of silica-coated magnetic nanoparticles and application in the detection of pathogenic viruses. J Nanomater 2013. https://doi.org/10.1155/2013/603940 21. Yimin D et al (2018) Preparation of Congo red functionalized Fe3 O4 @SiO2 nanoparticle and its application for the removal of methylene blue. Colloids Surfaces A Physicochem Eng Asp 550(March):90–98. https://doi.org/10.1016/j.colsurfa.2018.04.033 22. Escobedo Morales UPA, Sanchez Mora E, Morales A, Mora E, Pal U (2007) Use of diffuse reflectance spectroscopy for optical characterization of un-supported nanostructures. Rev Mex Fis S 53(5):18–22. http://www.researchgate.net/publication/229050010_Use_of_diffuse_refl ectance_spectroscopy_for_optical_characterization_of_un-supported_nanostructures/file/79e 41507eead49bb27.pdf 23. Fritzsche H, Tauc J (1974) Amorphous and liquid semiconductors, vol 254. Plenum Press. New York 24. Salh R (2011) Defect related luminescence in silicon dioxide network: a review. Cryst Silicon - Prop Uses 135–172. https://doi.org/10.5772/22607 25. Anpo M, Shima T, Kodama S, Kubokawa Y (1987) Photocatalytic hydrogenation of CH3 CCH with H2 O on small-particle TiO2 : aize quantization effects and reaction intermediates. J Phys Chem 91(16):4305–4310. https://doi.org/10.1021/j100300a021 26. Peixoto L, Magalhães R, Navas D, Moraes S, Redondo C, Morales R, Araújo JP, Sousa CT (2020) Magnetic nanostructures for emerging biomedical applications. Appl Phys Rev 7(1). https://doi.org/10.1063/1.5121702

Optical and Structural Study of a Fibonacci Structure Manufactured by Porous Silicon and Porous SiO2 María R. Jiménez Vivanco, Raúl Herrera Becerra, Miller Toledo Solano, Khashayar Misaghian, and J. E. Lugo

Abstract In this work, we report a Fibonacci structure based on porous silicon and porous SiO2 consisting of two Bragg reflectors with two defect modes between them. The Fibonacci structure was manufactured by electrochemical etching of a p+-type silicon wafer and was subjected to two stages of dry oxidation. In this way, we obtained an oxidized Fibonacci structure that exhibits two defect modes in its reflectance spectrum. Furthermore, we calculated the reflection spectrum of the Fibonacci structure in the UV–VIS range before and after dry oxidation using the transfer-matrix method. By cross-sectional SEM measurements, we obtained the individual thickness of each layer that makes up the Fibonacci structure as well as its total thickness. Additionally, we obtained the pore diameter before and after dry oxidation by superficial-section SEM. Finally, optical characterization was carried out at 30° by UV–VIS spectroscopy.

Keywords Fibonacci structure · Pore diameter · Porous SiO2

M. R. J. Vivanco (B) · R. H. Becerra
Institute of Physics, UNAM, Circuito de La Investigación Científica, Ciudad Universitaria, 04510 Mexico City, Mexico
e-mail: [email protected]

M. T. Solano
CONACYT-Facultad de Ciencias Fisico-Matematicas, Benemerita Universidad Autonoma de Puebla, Col. San Manuel Ciudad Universitaria, Av. San Claudio Y Av. 18 Sur, 72570 Puebla Pue., Mexico

K. Misaghian · J. E. Lugo
Faubert Lab, School of Optometry, University of Montreal, Montreal, QC H3T1P1, Canada

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_24

1 Introduction

A Fibonacci structure is a periodic/quasiperiodic structure that can control light propagation. It is made up of a quasiperiodic structure between two Bragg reflectors (BR) and has two defect modes, where the propagation of electromagnetic


waves is forbidden at certain frequencies and energies. Quasicrystals are nonperiodic structures generated deterministically [1, 2]; they can be considered a class of complex dielectric structures between ordered crystals and fully random structures [3]. They offer the possibility of creating photonic crystals that present one, two, three, or more localized modes and of tuning their photonic band gap [4, 5]. A Fibonacci system consists of narrow resonances separated by numerous pseudo band gaps, similar to the band gaps of a microcavity, where the defect modes are localized [6–8]. Such structures have been used for communication applications [9–11], for the detection of blood plasma and cancer cells [12], and possibly as ultrasensitive optical sensors [6]. The confinement of electromagnetic fields in a Fibonacci structure is strongly related to the position of the defect modes, which depends on the refractive index and thickness of each layer [2]. Any change in these two parameters can be observed in the reflection, transmission, or absorption spectrum [13]. Escorcia-García et al. provided a theoretical analysis of wave propagation in hybrid periodic/Fibonacci dielectric multilayers, showing that a resonant microcavity with a strong defect mode can be manufactured in hybrid systems with a small number of layers [4]. A study of the optical properties of polytype Fibonacci porous silicon multilayers reported the appearance of multiple photonic band gaps, which cannot be obtained in a periodic structure [1]. Furthermore, a free-standing Fibonacci structure based on porous silicon was designed in the infrared range by Ghulinyan et al. to study the propagation of optical pulses through Fibonacci quasicrystals, with a special focus on the wavelength regime around the band edge [8]. Nava et al. proposed asymmetric Fibonacci structures that presented perfect transmission of light at different wavelengths, demonstrating that a symmetric array in a Fibonacci structure is a sufficient but not a necessary condition to generate perfect transmission peaks [14]. On the other hand, to reduce the optical losses and stabilize the optical parameters of porous silicon structures, several authors have proposed different oxidation methods, among them thermal oxidation [3, 7, 15] and electrochemical oxidation [16]; thermal oxidation is based on varying the temperature, oxidation time, and oxygen flow during the oxidation process [17, 18]. In this way, it has been possible to create photonic crystals with decreased optical losses, refractive index, and refractive index contrast, which narrows the photonic band gap of a periodic structure [18]. Besides, it is possible to achieve a wavelength shift of the defect mode toward higher or lower frequency when the porous structure is thermally oxidized [17], electrochemically oxidized [16], or when alcohol or other chemical substances are added into the pores [7, 16, 19]. The wavelength shift is due to the refractive index change in each layer of the porous structure. Most reported works have studied porous silicon quasiperiodic structures in the orange range [14], in the infrared range [3], or at hypersonic frequencies [20]. Until now, no porous silicon quasiperiodic structure operating in the UV–VIS range has been reported.
Focusing on the study of the localization of light waves within Fibonacci quasiperiodic structures, we propose a Fibonacci structure based on porous silicon, where two defect modes are localized at 550 and 594 nm, in the yellow-orange region of the electromagnetic spectrum. However, to reach a


wavelength shift of the localized modes, the Fibonacci structure was oxidized, so that the two localized modes move to shorter wavelengths, 393 and 432 nm. In this work, the pore diameter of the porous matrix of a Fibonacci structure before and after dry oxidation is reported; the pore diameter of porous SiO2 is smaller than that of porous silicon.

2 Materials and Methods

2.1 Fabrication of Porous Silicon and Si-SiO2 Porous Periodic/Quasiperiodic Structures

The Fibonacci structure was obtained by electrochemical anodization of a p+-type Si wafer with 0.001–0.005 Ω cm resistivity and (100) orientation. The Si wafer was immersed in a stirred 20% HF solution for 5 min, rinsed with deionized water, and dried at room temperature. A PC running a control program was used to drive a current source that delivered two different current densities to a Teflon cell with an etching area of 2.011 cm2; an aqueous electrolyte of 40% HF and 95.5% ethanol in a 1:1 volume ratio was then placed in the Teflon cell to carry out the electrochemical anodization by means of a ring-shaped platinum electrode immersed in the electrolyte. The Fibonacci structure is made up of a quasiperiodic structure between two BR with six periods, the latter obtained by alternating two different current pulses (80 mA/cm2 for 1.658 s and 3 mA/cm2 for 17.486 s), whereas the quasiperiodic structure consisted of alternating current pulses with the following sequence: 80 mA/cm2, 3 mA/cm2, 80 mA/cm2, 80 mA/cm2, 3 mA/cm2, 80 mA/cm2, 3 mA/cm2, 80 mA/cm2 (called 4th order), using the same times mentioned above. When the process finished, a Fibonacci structure was obtained. Finally, the electrolyte was removed completely by adding ethanol three times to the solution; the sample was then removed, rinsed with ethanol, and dried at room temperature. The Fibonacci structure was subjected to two stages of dry oxidation in a muffle furnace, at 350 °C for 30 min and then at 900 °C for 15 min. The optical characterization of the Fibonacci structure was carried out before and after dry oxidation with a Thermo Scientific Evolution 201 UV–VIS spectrometer at an incidence angle of 30°, from 300 to 900 nm. Additionally, the morphology of the PS and porous SiO2 structures was obtained with a JEOL FE-SEM JSM-7800, using cross-sectional SEM to obtain the number of layers and superficial-section SEM to obtain the pore diameter before and after dry oxidation.
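As a concrete illustration of how such a quasiperiodic anodization recipe can be generated, the short sketch below (not the authors' control software) builds the Fibonacci word by the standard concatenation rule and maps each letter to the current pulses quoted above; the eight-letter word reproduces the reported 80/3 mA/cm2 sequence, although the order index depends on the convention used.

```python
def fibonacci_word(order, a="A", b="B"):
    """Fibonacci word by concatenation: S1 = a, S2 = a + b, Sn = S(n-1) + S(n-2)."""
    words = [a, a + b]
    while len(words) < order:
        words.append(words[-1] + words[-2])
    return words[order - 1]

# Current pulses as quoted in the text: A -> 80 mA/cm^2 for 1.658 s,
# B -> 3 mA/cm^2 for 17.486 s
PULSES = {"A": (80.0, 1.658), "B": (3.0, 17.486)}

word = fibonacci_word(5)  # "ABAABABA": matches 80, 3, 80, 80, 3, 80, 3, 80
for current_mA_cm2, duration_s in (PULSES[letter] for letter in word):
    print(f"apply {current_mA_cm2:5.1f} mA/cm^2 for {duration_s:6.3f} s")
```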


3 Results and Discussions

3.1 Reflectance Spectrum of a Periodic/Quasiperiodic Structure

Figure 1 shows the theoretical (black solid line) and experimental (orange solid line) spectra of a Fibonacci structure manufactured on a p+-type Si wafer. The values of high porosity (PH) of 76% and low porosity (PL) of 53% were obtained by the gravimetric method; however, we used high and low porosities of 77% and 53% to fit the theoretical reflectance spectrum of the porous silicon structure. This slight difference in porosity corresponds to a small change in the refractive index. The theoretical reflectance spectrum was obtained employing the transfer-matrix method, whereas the reflectance measurement was taken using a Thermo Scientific Evolution 201 at 30° in the wavelength range from 300 to 900 nm. The unoxidized structure shows two defect modes with resonant wavelengths at 550 and 594 nm. The theoretical result was fitted to the experimental one taking into account the thicknesses (dH and dL) provided by SEM measurements and the complex refractive index of PS obtained by the Maxwell–Garnett equation [21, 22], achieving a good match between experimental and theoretical results (see Fig. 1). Optical and physical parameters for the unoxidized Fibonacci structure are shown in Table 1, where the silicon fraction and oxide fraction are represented by Fsi and Fox.
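For illustration, a minimal transfer-matrix sketch for normal incidence is given below, together with a small Maxwell–Garnett helper; this is not the authors' code, the incidence angle of 30° used in the measurements is omitted, and the stack, the substrate index of 3.9 (a rough silicon value), and the wavelengths are placeholders built from the Table 1 values.

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for volume fraction f of inclusions."""
    return eps_host * (eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)) / \
           (eps_incl + 2 * eps_host - f * (eps_incl - eps_host))

def layer_matrix(n, d_nm, wavelength_nm):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d_nm / wavelength_nm  # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(stack, wavelength_nm, n_in=1.0, n_sub=3.9):
    """Reflectance of a stack [(n, d_nm), ...] between air and a substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in stack:
        M = M @ layer_matrix(n, d, wavelength_nm)
    (m11, m12), (m21, m22) = M
    r = (n_in * m11 + n_in * n_sub * m12 - m21 - n_sub * m22) / \
        (n_in * m11 + n_in * n_sub * m12 + m21 + n_sub * m22)
    return abs(r) ** 2

# Complex indices from Table 1 (absorption entered with +k for the sign
# convention used in this matrix formulation) and an illustrative stack
nH, dH = 2.421 + 0.018j, 45.0   # low-porosity (high-index) layer
nL, dL = 1.755 + 0.010j, 98.5   # high-porosity (low-index) layer
bragg = [(nH, dH), (nL, dL)] * 6
for wl in (450, 550, 594, 700, 800):
    print(f"{wl} nm  R = {reflectance(bragg, wl):.3f}")

# Example: effective index of a 53%-porosity silicon layer (air inclusions in
# a silicon host of n ~ 3.9), to be compared with the fitted nH in Table 1
n_eff = np.sqrt(maxwell_garnett(3.9 ** 2, 1.0 ** 2, 0.53))
print(f"Maxwell-Garnett n_eff ~ {n_eff.real:.2f}")
```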

Fig. 1 Theoretical and experimental reflectance spectra of an unoxidized Fibonacci structure; the solid orange line represents the experimental result, and the solid black line shows the theoretical calculation for the unoxidized Fibonacci design


Table 1 Parameters of the Fibonacci structure

Unoxidized: PL = 53%, PH = 77%; Fsi = 47 / 23 and Fox = 0 / 0 (low-/high-porosity layer, %); nH = 2.421 - 0.018i, nL = 1.755 - 0.01i; dH = 45 nm, dL = 98.5 nm

Oxidized: PL = 16%, PH = 53%; Fsi = 0 / 0 and Fox = 84 / 47 (low-/high-porosity layer, %); nH = 1.407 - 0.003i, nL = 1.212 - 0.001i; dH = 49 nm, dL = 120 nm
Figure 2 shows the theoretical (black solid line) and experimental (blue solid line) reflectance spectra of an oxidized Fibonacci structure, which presents two defect modes with resonant wavelengths at 393 and 432 nm, corresponding to the UV and blue regions of the electromagnetic spectrum. The theoretical reflectance spectrum was obtained by applying the transfer-matrix method, and the experimental reflectance spectrum was measured using a Thermo Scientific Evolution 201 at 30° in the wavelength range from 300 to 900 nm. As can be observed in Fig. 2, the defect modes in the Fibonacci structure were shifted to shorter wavelengths after dry oxidation: the first mode, positioned at 550 nm, moved to 393 nm, and the second mode, localized at 594 nm, shifted to 432 nm, i.e., wavelength shifts of 157 nm and 162 nm, respectively. This is due to a change of the optical thickness, the physical thickness, and the refractive index. In this way, the porous silicon structure was converted into a porous SiO2 structure, so the refractive index decreased and approached the refractive index of SiO2. Apart from the reduction of the refractive index, the experimental reflectance decreased from 100 to 70% due to optical losses by absorption. In spite of the low refractive index contrast between the dH and dL layers, it is possible to obtain a Fibonacci structure based on porous SiO2. The optical and physical parameters of the porous SiO2 structure are shown in Table 1. In addition, the theoretical spectrum of the porous SiO2 structure was obtained considering high/low porosities of 53% and 16% and layer thicknesses dH and dL (see Table 1), the latter obtained by SEM measurements. As can be seen in Fig. 2, the theoretical and experimental spectra of the porous SiO2 structure match very well. The complex refractive index of the individual layers of porous SiO2 was obtained by employing the equation of Lugo et al. [18]. After dry oxidation, the porosity and refractive index decreased, whereas the physical thickness increased, as has been reported in previous work for a periodic structure [18]. Besides, the new refractive index does not have an imaginary part because SiO2 is considered a transparent material at short wavelengths. Many authors have reported polytype Fibonacci [1], asymmetric Fibonacci [14], and hypersonic Fibonacci [20] structures based on porous silicon. However, until now, a Fibonacci filter manufactured from porous silicon for short wavelengths has not been possible to obtain due to the high absorption of silicon. Some authors have used dry oxidation to shift the reflectance spectrum of Fibonacci structures made up of porous silicon, moving the two localized modes to higher energy [3]. On the other hand, the propagation of optical pulses through porous silicon Fibonacci structures has


Fig. 2 Theoretical and experimental reflectance spectra of an oxidized Fibonacci structure; the blue solid line displays the experimental reflectance spectrum of a Fibonacci structure based on porous SiO2, and the black solid line corresponds to the theoretical reflectance spectrum

Fig. 3 Cross-sectional SEM micrographs of the unoxidized and oxidized Fibonacci structures with 30 layers on a p+-type Si wafer. a Cross-sectional SEM image of the unoxidized Fibonacci structure; the thickness of the high-porosity (76%) layers is 98.5 nm and that of the low-porosity (53%) layers is 45 nm. b Cross-sectional SEM image of the oxidized Fibonacci structure of 30 layers with individual thicknesses of 120 nm and 49 nm; the values of 53% and 16% correspond to high/low porosity. The alternating dark/light layers correspond to high/low porosity (low/high refractive index)


been reported in the infrared region [8]. Compared to other works, this is the first time that a Fibonacci structure based on porous Si-SiO2 is obtained in the UV-blue range.

3.2 Morphologic Characterization of a Periodic/Quasiperiodic Structure

Figure 3a depicts the high-resolution scanning electron microscopy (SEM) image of an unoxidized Fibonacci structure on a p+ substrate. It was manufactured with a 4th-order quasicrystal between two BR of 6 periods; the latter were obtained by alternating quarter-wave layers with low refractive index (nL) and high refractive index (nH), whereas the 4th-order quasicrystal was created by alternating quarter-wave layers with refractive indexes following the sequence (nL nH)(nL nL)(nH nL)(nH nL), where nH > nL. The layers with high porosity appear in dark gray, and the layers with low porosity in light gray (Fig. 3a). Two defect modes can be observed in the Fibonacci structure, each with twice the thickness of a high-porosity layer. From the SEM image we estimated a total thickness of the porous silicon Fibonacci structure of 2.349 µm, where the individual layers have thicknesses of dL = 98.5 nm and dH = 45 nm, whereas the defect mode has a thickness of 197 nm. Figure 3b shows the cross-sectional SEM measurement of the oxidized Fibonacci structure. It was subjected to two stages of dry oxidation, at 350 °C for 30 min and then at 900 °C for 15 min. The thicknesses of the oxidized structure were estimated by SEM measurements, showing a total thickness of 2.535 µm with individual layer thicknesses of dL = 120 nm and dH = 49 nm, where the defect thickness is 240 nm. The morphology of the Fibonacci structure before and after dry oxidation is shown in the top-view SEM images in Fig. 4a, b. Figure 4a shows the top-view SEM image of the unoxidized Fibonacci structure, where pores with an average diameter of 18 nm are observed. Figure 4b displays the average pore diameter (13 nm) of the oxidized Fibonacci structure. It can be observed that the pores are uniformly distributed and have a spherical shape. After the oxidation process, the pore size decreased to 13 nm and no alteration in the pore structure was noticed. Furthermore, in both top-view SEM images of Fig. 4, the pores are interconnected. It is well known that the pore size, distribution, and morphology are important parameters that limit the type of object that can be infiltrated into the porous matrix for analysis [23]. It has been reported that, to optimize the PS sensitivity, the pore size should be as small as possible while still allowing the easy penetration of chemical materials [24]. Also, oxidizing the porous matrix is an easy way to prevent nanoparticles present in the environment from penetrating porous photonic crystals such as microcavities, Bragg reflectors, and rugate filters, stabilizing the pores and decreasing the native hydrophobicity of porous silicon [25]. As can be seen in this work, after dry oxidation the porous SiO2 structure still remains porous.


Fig. 4 Top-view SEM images depicting the morphology of the unoxidized and oxidized Fibonacci structures. a Top-view SEM image of the unoxidized Fibonacci structure, where the pore diameter is 18 nm. b Top-view SEM image of the oxidized Fibonacci structure, with a pore diameter of about 13 nm

4 Conclusions

We have presented a simple method to create a porous SiO2 Fibonacci structure with an operating range in the UV and blue, where a porous silicon structure cannot operate due to its high absorption, which depends on the extinction coefficient. Two media (Si and air) and three media (silicon, air, and SiO2) were taken into account to obtain the theoretical reflectance spectra, and the theoretical and experimental results matched very well. This is the first time that a Fibonacci structure working in the UV-blue range is presented; thanks to the dry oxidation, the two defect modes were shifted to shorter wavelengths, at 393 nm and 432 nm, with wavelength shifts of 157 nm for the first mode and 162 nm for the second. Also, we reported a pore diameter after dry oxidation of 13 nm, showing that the Fibonacci structure still remains porous. Additionally, this type of structure could be used as an optical sensor by adding different photoluminescent nanoparticles, such as ZnO and TiO2, inside the pores to change its optical response. It could also be used to sense chemical substances such as ethanol, acetone, formaldehyde, and wines, among others. As is well known, the Fibonacci structure has two defect modes that can be used to sense two types of chemical substances simultaneously, shifting the position of both localized modes to shorter or longer wavelengths.
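As a rough illustration of the sensing principle mentioned above, the sketch below assumes, to first order, that a defect-mode wavelength scales with the optical thickness (effective index times physical thickness) of the defect layer; the effective-index change from 1.21 to 1.26 used here is hypothetical, chosen only to mimic partial pore filling by a liquid, and is not a value measured in this work.

```python
def shifted_mode(mode_nm, n_eff_before, n_eff_after):
    """First-order estimate: the defect mode tracks the optical thickness,
    so lambda_new / lambda_old ~ n_eff_after / n_eff_before."""
    return mode_nm * n_eff_after / n_eff_before

# Hypothetical effective-index change for a porous SiO2 layer with partly
# filled pores (illustrative values only)
for mode_nm in (393.0, 432.0):
    new_nm = shifted_mode(mode_nm, 1.21, 1.26)
    print(f"defect mode {mode_nm:.0f} nm -> ~{new_nm:.0f} nm "
          f"(red-shift of {new_nm - mode_nm:.0f} nm)")
```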

5 Competing Interests The authors declare that they have no competing interests.


Acknowledgements This work was supported by a CONACYT postdoctoral fellowship. M. R. Jiménez-Vivanco would like to thank Cristina Zorrilla, Roberto Hernández, and Diego Quiterio for their technical assistance.
Authors' Contributions MRJV performed the experiments, analyzed the results, and wrote the manuscript. RHB provided the materials and laboratory. MTS participated in the Fibonacci design. KM read and edited the manuscript. JEL analyzed the results, and wrote and edited the manuscript.

References 1. Agarwal V, Mora-Ramos ME (2007) Optical characterization of polytype Fibonacci and ThueMorse quasiregular dielectric structures made of porous silicon multilayers. J Phys D Appl Phys 40(10):3203 2. Escorcia-García J, Mora-Ramos ME (2013) Propagation and confinement of electric field waves along one-dimensional porous silicon hybrid periodic/quasiperiodic structure 3. Pérez KS, Estevez JO, Méndez-Blas A, Arriaga J, Palestino G, Mora-Ramos ME (2012) Tunable resonance transmission modes in hybrid heterostructures based on porous silicon. Nanoscale Res Lett 7(1):1–8 4. Escorcia-García J, Mora-Ramos ME (2009) Study of optical propagation in hybrid periodic/ quasiregular structures based on porous silicon. PIERS Online 5(2):167–170 5. Palavicini A, Wang C (2013) Infrared transmission in porous silicon multilayers 6. Tavakoli M, Jalili YS (2014) One-dimensional Fibonacci fractal photonic crystals and their optical characteristics. J Theor Appl Phys 8(1):1–12 7. Huang J, Li S, Chen Q, Cai L (2011) Optical characteristics and environmental pollutants detection of porous silicon microcavities. Science China Chem 54(8):1348–1356 8. Ghulinyan M, Oton CJ, Dal Negro L, Pavesi L, Sapienza R, Colocci M et al (2005) Light-pulse propagation in Fibonacci quasicrystals. Phys Rev B 71(9):094204 9. Ali NB, Alshammari S, Trabelsi Y, Alsaif H, Kahouli O, Elleuch Z (2022) Tunable multiband-stop filters using generalized fibonacci photonic crystals for optical communication applications. Mathematics 10(8):1240 10. Golmohammadi S, Moravvej-Farshi MK, Rostami A, Zarifkar A (2007) Narrowband DWDM filters based on Fibonacci-class quasi-periodic structures. Opt Express 15(17):10520–10532 11. Mehaney A, Hassan AAS (2019) Evolution of low-frequency phononic band gaps using quasiperiodic/defected phononic crystals. Mater Res Express 6(10):105801 12. Singh BK, Rajput PS, Dikshit AK, Pandey PC, Bambole V (2022) Consequence of Fibonacci quasiperiodic sequences in 1-D photonic crystal refractive index sensor for the blood plasma and cancer cells detections. Opt Quant Electron 54(11):1–19 13. Goyal AK, Massoud Y (2022) Interface edge mode confinement in dielectric-based quasiperiodic photonic crystal structure. Photonics: MDPI; 2022, p 676 14. Nava R, Tagüeña-Martínez J, Del Rio J, Naumis G (2009) Perfect light transmission in Fibonacci arrays of dielectric multilayers. J Phys Condens Matter 21(15):155901 15. Salonen J, Lehto VP, Björkqvist M, Laine E, Niinistö L (2000) Studies of thermally-carbonized porous silicon surfaces. Phys Status Solidi (a) 182(1):123–126 16. Salem M, Sailor M, Harraz F, Sakka T, Ogata Y (2006) Electrochemical stabilization of porous silicon multilayers for sensing various chemical compounds. J Appl Phys 100(8):083520 17. Jiménez-Vivanco MR, García G, Morales-Morales F, Coyopol A, Martínez L, Faubert J et al (2022) Tuning wavelength of the localized mode microcavity by applying different oxygen flows. In: Proceedings of the third international conference on trends in computational and cognitive engineering. Springer, pp 445–454


18. Jimenéz-Vivanco MR, García G, Carrillo J, Agarwal V, Díaz-Becerril T, Doti R et al (2020) Porous Si-SiO2 based UV Microcavities. Sci Rep 10(1):1–21 19. Jiménez Vivanco MdR, García G, Doti R, Faubert J, Lugo Arce JE (2018) Time-resolved spectroscopy of ethanol evaporation on free-standing porous silicon photonic microcavities. Materials 11(6):894 20. Aliev GN, Goller B (2014) Quasi-periodic Fibonacci and periodic one-dimensional hypersonic phononic crystals of porous silicon: experiment and simulation. J Appl Phys 116(9):094903 21. Sarafis P, Nassiopoulou AG (2014) Dielectric properties of porous silicon for use as a substrate for the on-chip integration of millimeter-wave devices in the frequency range 140 to 210 GHz. Nanoscale Res Lett 9(1):1–8 22. Ruppin R (2000) Evaluation of extended Maxwell-Garnett theories. Opt Commun 182(4– 6):273–279 23. Zhao Y, Gaur G, Retterer ST, Laibinis PE, Weiss SM (2016) Flow-through porous silicon membranes for real-time label-free biosensing. Anal Chem 88(22):10940–10948 24. Robbiano V, Paternò GM, La Mattina AA, Motti SG, Lanzani G, Scotognella F et al (2018) Room-temperature low-threshold lasing from monolithically integrated nanostructured porous silicon hybrid microcavities. ACS Nano 12(5):4536–4544 25. Azuelos P, Girault P, Lorrain N, Dumeige Y, Bodiou L, Poffo L et al (2018) Optimization of porous silicon waveguide design for micro-ring resonator sensing applications. J Opt 20(8):085301

Electronics and Communication

Amyloid-β Can Form Fractal Antenna-Like Networks Responsive to Electromagnetic Beating and Wireless Signaling Komal Saxena, Pushpendra Singh, Parama Dey, Marielle Aulikki Wälti, Pathik Sahoo, Subrata Ghosh, Soami Daya Krishnanda, Roland Riek, and Anirban Bandyopadhyay

Abstract The Clathrin protein coats deposits into vesicles, sorts and carries the cargo to its destination, and thus cleans up waste. The Aβ degradation product of the amyloid precursor protein and its aggregated fibrils are associated with Alzheimer's disease. Why Clathrin fails to arrest the rapid aggregation is debated. Here, we demonstrate that in contact with a hexagonal-close-packed organic substrate, the Aβ(1-42) fibrils form an electromagnetically responsive fractal superstructure. Using two independent experimental techniques with microwave and laser spectroscopy, we have discovered the electric pulse generating ability (i.e., beating/interference) of the Aβ-fractal networks that fine-tunes the Clathrin-mediated disassembly by inducing step-by-step morphogenesis. A fractal antenna network has multiple communication


modes; beating is essential to disassemble it. Rapid multi-scale synthesis of an antenna network only in the presence of a typical geometric shape is unprecedented. Our finding sheds light on how brain deposits could suddenly outsmart the natural brain cleansing and how we could bypass the rapid propagation of Alzheimer's.

Keywords Alzheimer's · Electromagnetic treatment · Amyloid Beta · Clathrin light chain · CLC · Clathrin heavy chain · CHC · Fractal · Microwave · Fabry Perot interferometry

K. Saxena · P. Singh · P. Dey · P. Sahoo · A. Bandyopadhyay (B)
International Center for Materials Nanoarchitectronics, MANA, Center for Advanced Measurement and Characterization, RCAMC, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba 3050047, Ibaraki, Japan
e-mail: [email protected]

K. Saxena · S. D. Krishnanda
Microwave Physics Laboratory, Department of Physics and Computer Science, Dayalbagh Educational Institute, Dayalbagh, Agra, Uttar Pradesh 282005, India

P. Dey
Cancer Biology Laboratory and DBT-AIST International Centre for Translational and Environmental Research (DAICENTER), Department of Biosciences and Bioengineering, Indian Institute of Technology Guwahati, Guwahati, Assam 781039, India

M. A. Wälti · R. Riek
Department of Chemistry and Applied Biosciences, ETH Zurich, Vladimir-Prelog-Weg 2, 8093 Zurich, Switzerland

S. Ghosh
Chemical Science & Technology Division, CSIR North East Institute of Science & Technology, 785006, Jorhat, Assam, India
Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, UP 201002, India

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_25

1 Introduction

The amyloid beta peptide Aβ, a protease degradation product of the amyloid precursor protein (APP), forms the amyloid fibrils that drive the pathological, insoluble, and sticky amyloid-beta plaques causing Alzheimer's disease (AD) [1]. Aβ fibrils arrange as tubular and paired helical filament-type structures [2]. Helical symmetry is known as a key to a robust self-assembler [3–5] that can organize complex superstructures [6] from the nanoscale to the visible centimeter scale. A 10^6-order growth is often associated with polymorphism, whose molecular origin is C3 symmetry implemented in a π network [7]. The fibril's nucleation-dependent polymerization [8] follows scale-free geometries globally, but local variations [4, 8] are profound even in the fibrillar protofilaments [9, 10]. Consequently, high-resolution cryo-TEM and solid-state NMR [11–13] showed multiple environment-dependent polymorphisms, reminiscent of a chameleon-like character [3, 5]. The geometry-environment relation of the entangled Aβ fibrils inside the plaques in vivo in Alzheimer's demands a multi-mode communication [8, 14]. The origin of the multi-route pathogenesis that has made Alzheimer's fatally infectious within the brain [15] may lie in its polymorphism [2–14, 16], by which the fibrils can trigger a rapid [16, 17] replication [5] in wide brain regions. The nucleation thereof appears to be an extremely slow process [18, 19], but once a "seed" is generated [16], the "crystallization" [20] process appears fatally rapid via multi-channel communication. Based on the mechanistic nucleation-based polymerization, researchers tried developing a holistic method of disintegrating Aβ plaques or stopping their replication. Different approaches, like removal of Aβ fibrils or reducing the number of soluble Aβ [21, 22], are not promising yet. Potential drugs turn ineffective on humans even after successful clinical trials on animals [23, 24]. Immunotherapy thereby provides specific targeting against plaques, but it is only able to mitigate the progressive plaque burden and prevent further aggregation without improvement in cognition [25, 26]. Non-invasive electromagnetic (EM) therapy [27–29] is a new hope, but why it works is extremely controversial. Our group designed and synthesized nano-engineered intelligent drugs for wireless EM destruction of plaques [30, 31], and explored scale-free electromagnetic resonance properties of tubulin, microtubule, and its bundle [32–34], a key constituent of the plaques. Exposure to a low-frequency EM field (< 300 Hz) increases the risk of Aβ accumulation [35]. In particular, low-level and


short-term (7–10 days) exposure to microwaves at the 900 MHz cell phone frequency does not affect cognitive performance [36]. However, the same 900 MHz EM exposure applied to genetically modified Alzheimer's mice for a longer period (7–9 months) reduced the Aβ deposits and reversed some of the symptoms of AD [27, 28]. Long-term exposure to 1950 MHz for eight months prevented AD in transgenic mice [29]. Clinical trials of electromagnetic treatment [37] show positive results. Despite much experimental evidence of electromagnetically controlled manipulation of Aβ structures [27–37], there is probably no study of the electromagnetic behavior of Aβ fractal-like structures [2–14, 16]. Our experiments reveal here that dilution of Aβ (1-42) and contact with a hexagonal close-packed organic substrate naturally trigger the formation of a fractal-antenna-like network. This makes Aβ sensitive to electromagnetic probes, and interference causes pulsed oscillations (beating), which may shed light on the origin of the proposed electromagnetic response and the morphogenesis of Aβ plaques in wide brain regions in AD and in AD mice. Clathrin transfers nutrients within and outside the cells, identifying and deforming the targeted protein and eliminating cell debris by endocytosis and exocytosis [38]. Here, we present two experimental techniques that independently show the time-dependent disintegration of Aβ fibrils by the Clathrin light chain (CLC) and the Clathrin heavy chain (CHC). Their combinatorial role is revisited here using He-ion imaging of the time-dependent disintegration of Aβ fibrils. The study reveals that CLC disintegrates Aβ aggregates and CHC alters the structural functionality of Aβ aggregates; the CLC and CHC mixture accelerates the disintegration. During the disassembly of the Aβ fractals, the transmitted power through the Aβ and CLC interacting regions revealed spontaneous pulsed oscillations (beating). A self-beating-type Fabry–Perot interferometer was customized to observe how the fractal feature triggers frequency shifts and increases the frequency bandwidth. Consequently, the entire Aβ fractal network charges and scatters photons like a conducting surface when illuminated with a 630 nm laser. The addition of both CLC and CHC to the Aβ-fractal network makes it less attenuating, as the 2D antenna sheet folds into a vesicle. The laser beam reads the beating frequencies when Aβ aggregates interact with the two distinct chains, CLC and CHC. If Aβ aggregates cause toxic neuropathological conditions, microwave-induced beating could assist in fine-tuning the CHC-CLC roles in building Clathrin-coated pits and in the disassembly of plaques.

2 Results and Discussions

In a healthy brain, the Aβ peptide aggregates found in a soluble form (oligomers) are continuously cleaned and removed. However, based on the amyloid hypothesis [39], the persistent, sticky, and insoluble aggregation of Aβ peptides into plaques materializes directly or indirectly via tau aggregation. For example, synaptic dysfunction, resulting in the disruption of signaling between neurons, leads to cell death and the onset of brain shrinkage (compare Fig. 1a and b). Finally, it leads to progressive dementia and death. Multiple reports on plaques suggest that


at the molecular scale, the amyloids form a hierarchical helix [2], i.e., a helical molecular assembly adopts an additional twist, just like microtubules, DNA, and other bio-filaments. Helical structures are widely used to build antennas. By serendipity, we discovered that on an HOPG-like hexagonal close-packed surface, a drop of Aβ (1-42) solution forms a fractal structure (see Fig. 2). However, if the same Aβ solution is dropped on a metallic (Au) or a semiconducting (Si) surface, then the well-known random mess of plaque fibers appears [30]. The polymeric surface plays a significant role in lowering the nucleation induction time and growing the crystal faster [40]. The HOPG surface prefers orienting the nanostructure along certain crystal faces [41, 42]; templating is fundamental to HOPG. The schematic of Fig. 1c shows the fractal structures grown on graphene flakes, and the corresponding He-ion microscopic image is shown below it. When CLC and CHC are added to the solution containing the flakes, Clathrin-coated pits are formed by morphogenesis of the fractal network. This is zoomed in the central image of the schematic of Fig. 1c and enlarged in Fig. 2. Finally, these pits fold into vesicles that float and disappear; thus, the area of the pit decreases significantly. A part of the decreased pit area is shown in Fig. 1c. The CLC folds the membrane, and CHC joins it to build a lattice made of hexagons and pentagons, before covering the membrane

Fig. 1 Schematic for the disintegration of Aβ and experimental setups: a and b The difference in the neural networks of the brain under non-amyloidogenic and amyloidogenic conditions (recreated images [43]) is reflected in the shrinking brain. c Schematic view of the Clathrin-mediated in vitro disintegration protocols of Aβ (extracellular conditions), along with images of the antenna-like network captured using a He-ion microscope (enlarged in Fig. 2). d Schematic of the experimental setup for the microwave study. The measuring cavity is the dotted box in the middle, where the time-lapse spectral response of Aβ+CLC is measured. The spectrum analyzer (green, right) operates from 100 Hz to 6 GHz while the Aβ structure is pumped at 500 MHz (pink, left; a 10^3 order gap between pump and probe). The inset shows transmitted power spectra (S21) as a function of frequency. e The experimental setup (modified Fabry–Perot interferometer) for measuring the spectral responses (the current flash in the diode is the spectral response shown in the inset) of Aβ, Clathrin, and their combinations using a monochromatic 633 nm optical laser (ThorLabs commercial setup).


Fig. 2 He-ion microscopic images of the fractal structure of Aβ fibrils on HOPG at multiple length scales, from ~5 nm to ~300 μm. Three different domains are zoomed in to show that the actual fibrils forming the elementary structures are ~10 nm

deposit into a Clathrin-coated vesicle. In this work, we investigate the pit-to-vesicle transformation intricately, making an effort to distinguish the roles of CLC and CHC. In the living environment, the two chains work together; in vitro, unlike in vivo, we could emulate hypothetical situations and monitor the pure systemic response. We pumped the fractal network at around 500 MHz and then observed a beating response in a roughly 1000 times higher frequency range using the characterization setup shown in Fig. 1d. Normally, beating happens at a frequency much lower than the participating signals (the ~kHz range here), so why do we place a detector at much higher frequencies? If an antenna-like fractal structure resonates at frequency ν1 and, after a change in its structural symmetry, resonates at frequency ν2, the two signals interfere. If the values of the two frequencies are close, an additional pulsed signal (with frequency ν1 − ν2) propagates; this is termed a beating signal. Figure 1e shows the Fabry–Perot interferometer, which amplifies the subtle difference in the pHz laser signal by 10^5 times. If the illuminated material shifts the laser for microseconds, an output beating signal in the MHz range is inevitable. The measured beating frequencies were recorded repeatedly as a function of time for all samples. Thus, we optically pumped the sample during disintegration and monitored the pit-to-vesicle conversion with high sensitivity. The two setups described in Fig. 1d and e together act as a dual-mode characterization. In one, we monitor molecular-scale beating, which reflects the conformational change of the fibrils/networks in the presence of CLC and/or CHC. In the other (Fig. 1e), we monitor orbital-level energy exchanges between CHC, CLC, and the helical amyloid fiber. These two methods probe the same event while bringing out two fundamentally different physical aspects.
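A minimal numerical sketch of this beating picture (illustrative only, with arbitrary hypothetical resonance values rather than measured ones) is shown below: two close tones at ν1 and ν2 are summed, and the slow envelope at |ν1 − ν2| is recovered from the low-frequency part of the spectrum.

```python
import numpy as np

fs = 200e3                         # sampling rate (Hz), illustrative
t = np.arange(0, 0.05, 1 / fs)     # 50 ms record
nu1, nu2 = 40_000.0, 41_000.0      # two hypothetical resonance frequencies (Hz)
signal = np.sin(2 * np.pi * nu1 * t) + np.sin(2 * np.pi * nu2 * t)

# Rectify to expose the slowly varying envelope, remove the DC offset, and
# search only the low-frequency bins for the beat component |nu1 - nu2|.
envelope = np.abs(signal)
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
low = freqs < 10_000
beat = freqs[low][np.argmax(spectrum[low])]
print(f"dominant envelope frequency = {beat:.0f} Hz (expected |nu1 - nu2| = 1000 Hz)")
```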


2.1 Live Visualization of the Temporal Disintegration of Aβ

The size of Aβ fibrils has been observed to range from a few nm to hundreds of μm using various techniques [7–10, 16]. A star-like structure of fibril aggregates in the 700-nm range was reported by Han et al. using TEM [8]. Further, DaSilva et al. [26] and Gelinas et al. [44] captured images of the Aβ distribution at the μm scale using a CoolSNAP digital camera mounted on a Zeiss Axioscope 2 Plus microscope. On the HOPG surface under the He-ion microscope, we found fractal-antenna-like structures of Aβ fibrils with lengths ranging from ~5 nm to ~300 μm (see Fig. 2). The structures are strikingly similar to the antennas we find in our mobile phones. High-resolution images and the XPS spectrum show a composition of C, H, and O, which confirmed tubular and paired helical filaments of Aβ fibrils (and no buffer or salt products). The cross-shaped (×) assembly of the paired helical filaments of fibrils of different lengths exhibits self-affinity (unequal size variation in different directions) and self-similarity (an object similar to a part of itself). In the living body, graphene-like hexagonal close-packed (HCP) materials are abundant, e.g., cell membranes and microtubule surfaces. By varying the substrate, the Aβ fibril density, and the batch of materials, our data (included in the error bar) indicate that the previously reported random mess of fibrils is due to the surface condition at which the peptide aggregates [45]. Figure 3 demonstrates the time-dependent dissociation of the Aβ fibrils that make up a cluster of antenna networks. The first column of Fig. 3 shows the starting pre-treatment configuration of Aβ (1-42) in the He-ion microscope. The second column shows the additive material, and the third to sixth columns describe a time-lapse visualization of Aβ post-treatment. The elapsed time is noted at the top right of each panel. The disassembly of the Aβ fractal networks was studied in the presence of three elements: (i) CLC (Fig. 3a, first row), (ii) sequential treatment of the Aβ networks with CHC and then CLC (Fig. 3b and c, second and third rows), and (iii) addition of a pre-mixture of CLC and CHC to the Aβ fractals (Fig. 3d, fourth row). During these transitions, we made sure that the Aβ fractals were not exposed to the He-ion beam for longer than 15 s. Observations were made at regular time intervals starting 300 s after Clathrin addition, which was decided based on the strength of the interaction of Clathrin with the Aβ fractals. For CLC, observations were made every 20 s (Fig. 3a), and for CHC, the time interval was 30 s (Fig. 3b). As reported in the literature describing the "pit-to-vesicle" conversion, CLC converts the Aβ fibrils by folding, or geometric morphogenesis, of the entire fractal network into a vesicle, as shown in Fig. 3a. The rightmost structure of the first row in Fig. 3a is the vesicle, which, when formed in vivo under normal physiological conditions, can be easily transported or further disintegrated into smaller vesicles (100–200 nm). The gradual folding of the 2D fractal composed of Aβ (1-42) into a hollow spherical shell suggests that during rounding up, the fractal structure dissolves into the perfectly curved upper surface of the sphere, to pinch it off the deposition site. When mixed with CHC, the fractal Aβ network is converted into a flake- or leaf-like structure (Fig. 3b). With time, this flake remained a 2D sheet but changed its organization into different forms. The regularity of the fractal-type structure is then broken.
The diameter of branches increased from 50 to 100 nm, i.e., by 100%, possibly via the


addition of CHC, while, unlike CLC, the CHC is incapable of building a vesicle or pit. These results resemble the findings of a similar study [46], which mentioned the obligatory role of CLC in maintaining the triskelion shape (C3 symmetry [7]) and the formation of a globular assembly in the absence of CHC. The CHC widened the Aβ branches by 300% after 390 s and turned the entire antenna network into a curved flake. When we add CLC to the Aβ flake, the disintegration (the third row, Fig. 3c) converts the flake into a nearly hexagonal structure. The time-lapse events for the morphogenesis of the flakes show that CLC neutralizes the folding of flakes initiated by CHC. This should not happen, because CLC is extremely efficient in converting Aβ fractals into a vesicle. When we premixed CHC and CLC together in a solution and 150–180 nm-sized small spheres of Clathrin (CLC+CHC) came into contact with a nearly 100 times larger network of Aβ, the hexagonal disk formed again after 300 s of mixing and quickly disintegrated within 35 s. The conversion of

Fig. 3 He-ion microscopic visualization of the influence of the Clathrin light chain (CLC) and Clathrin heavy chain (CHC) on the Aβ state. For all images, a 30 kV acceleration voltage and a 0.2 pA ion beam current were kept fixed in the He-ion microscope. a The Aβ plaque was mixed with CLC, and the disintegration process was studied time-resolved from 300 s after adding CLC to the Aβ sample, at regular intervals of 20 s, with the dwell time varying from 10 μs at 300 s to 0.5 μs at 360 s. b Disintegration of Aβ with CHC from 300 s after mixing, at regular time intervals of 30 s, with a dwell time of 30 μs. c The Aβ disintegrated with CHC in step (b) was mixed with CLC and imaged from 300 s onward at intervals of 20 s. d Images of the dissociation of Aβ when mixed with CLC and CHC, at regular intervals of 15 s after mixing. The spherical shape of the CLC and CHC combination is shown in the second image from the left in panel (d). Scale bars are noted in the respective images


Aβ networks into a hexagonal disk is a unique transformation by CHC+CLC that folds the flat 2D geometry into a solid 3D sphere, as seen in the last images of Fig. 3c and d. Once the Aβ network forms, adding the mixture of CHC+CLC in any combination or sequence fails to convert the pit into a vesicle. Then, Aβ was further added to the CLC+CHC mixture, and the transformations followed the steps of Fig. 3d. Figure 3 indicates that if the physical environment is not conducive to CHC+CLC, the only possible treatment route via CHC+CLC would be to neutralize the CHC and channel the action of CLC alone, as observed in Fig. 3a. It is now imperative to learn how one could artificially communicate with the CHC+CLC combination so that it is instructed meticulously to follow the path of Fig. 3a and not those of Fig. 3b, c, and d. This is a challenging proposition because, in vivo, both proteins would be co-produced and co-active.

2.2 Spontaneous Emergent Communication in Two Distinct Time Domains

Since the 1930s, microwave and nanosecond resonant oscillations of proteins have been studied regularly [47]. The literature is rich with protein oscillations manifested in the THz range, whereas the oscillations of larger biomolecules made from proteins manifest in the MHz to GHz range [34, 48, 49]. We have repeatedly observed that if we pump a protein at a particular electromagnetic frequency, e.g., in the kHz domain, ripple oscillations are triggered over a very wide range of electromagnetic frequencies. For example, Gramse et al. [50] measured the near-field dipole mobility of a protein membrane in the frequency range from 3 kHz to 10 GHz when a signal from 3 kHz to 120 MHz was applied to the sample. Similarly, to study the spectral response of the Aβ aggregates, here we apply 500 MHz excitation, and Aβ responded in the 100 Hz to 6 GHz range. The spectral power density of the Aβ aggregates decreased to ~14 pW on mixing with CLC; due to beating, the transmitted power was further reduced to ~5 pW with time. Using a high-resolution frequency window, transmission through the material was captured simultaneously over the entire frequency range for 30 min to monitor the interactions with the Clathrin chains. To avoid unwanted external EM exposure, we kept the solutions inside a multi-layered EM shield placed inside a specially built Faraday cage covered with a military-grade, EM-absorbing, 7-layer composite sheet (see Fig. 4). The schematic and the actual experimental setup are shown in Fig. 5a. The 3D plot of the average transmitted power spectra of the CLC+Aβ network is shown in Fig. 5b; it depicts a gradual decline in the transmitted power through the CLC+Aβ network over 30 min. There were two distinct signal bursts from the sample, the first ten minutes and the second eighteen minutes after adding CLC to the Aβ network, where we find strong transmission of the electromagnetic signal through the sample in two frequency domains (the kHz and the 4-GHz range). Since the Aβ network consists of multiple domains of the Aβ fibrillar network, each local segment has upper and lower limits of filament lengths


Fig. 4 Our initial shielding setup was made of two layers repeated multiple times: one layer absorbs electromagnetic bursts, and the other absorbs magnetic bursts. Tiny circular metallic rings absorb the magnetic signal bursts. Stepwise construction of the Faraday cage, which was built from multiple layers of conducting sheets separated by insulating sheets; it prevents unwanted EM radiation from entering the Faraday cage. Finally, we attached a composite-material sheet made of seven layers of distinct metals; together, the sheet absorbs the major frequency domains of the electromagnetic signal in which we carry out our measurements

and similarly a distinct band of frequencies. A fibril's resonance frequency is a function of the spiral's length, pitch, and diameter, similar to a widely used helical antenna. To investigate the nature of the beating or pulsed oscillations, we plot the raw power response of the buffer, Aβ fibrils, CLC, and Aβ fibrils+CLC (see Fig. 6a). We then normalized the response with respect to the buffer response (see Fig. 6b). Finally, we zoomed into the temporal response at two different frequency ranges (0–1 GHz, supplementary Fig. 5c; and 4.2–4.9 GHz, supplementary Fig. 5d). The linear slope of the diminishing resonance peaks of the fractal antenna-like network suggests a methodical structural change. The 500 MHz input excitation led to GHz oscillations of the antenna-like network; such amplification of frequencies is not common in biomaterial studies. The 500 MHz pump frequency was primarily chosen based on the study of transcranial EM treatment against Alzheimer's disease (AD), where anti-Aβ aggregation, mitochondrial enhancement, and enhancement of neuronal activity in AD transgenic mice were observed at a 900 MHz frequency [28]. The frequency shift was carefully monitored (Table 1). No considerable change in the excitation frequency suggests that the material did not undergo a major change in its dielectric property. The structural symmetry changes but not the dielectric


Fig. 5 Monitoring Aβ fractal degradation in situ at microwave frequencies and theoretical simulation of the fractal structure's antenna properties: a Schematic and actual experimental setup. A cylindrical waveguide is loaded with a cylindrical sample holder of 0.5 mm radius and 1 mm length. The waveguide was electromagnetically shielded and placed inside a Faraday cage. A plot shows that the pump and probe frequencies are separated by a 10^3 order gap. b Time-lapse of the average peak power in the frequency range from 0 to 6 GHz when the Aβ fractal structure on HOPG is mixed with CLC. c The transmission spectrum of the mixture as a function of time; six plots (top left) show a small section of the entire spectrum. For a particular resonance peak at 534.91 MHz, we plotted two independent experiments: the pure Aβ sample is plotted in blue and green, while the transmission power loss of the CLC+Aβ mixture across the cavity is plotted in red and black. The bottom-most panel shows the slopes of the temporal degradation as a function of time for different resonance frequencies; a red line depicts a threshold limit, and the resonance frequencies below that line are considered for checking the theoretical simulations in panel d. See Fig. 6 for details. d The top row of the panel has three sub-panels: the molecular-scale spiral structure of Aβp built in the CST simulator, connected to three ports (leftmost); the electric field distribution at resonance (4000 GHz) in the central panel; and, to the extreme right, the magnetic resonance energy distribution at the same frequency (4000 GHz). The bottom row has one panel: the black plot shows the ratio of two neighboring resonance frequencies in the MHz ( 99.5%@633-594 nm, mirror 2: anti-reflective AR@633 nm) of 7.5 mm diameter each, placed 45 mm apart (RoC). The rear mirror was connected to a piezo disk driven by a 10 V, 100 Hz ramp signal. The optical signal transmitted from the rear mirror of the optical cavity was transduced into electrical signals using a highly sensitive PIN diode (BPW34) in reverse bias. The signal from the diode was fed to a digital oscilloscope. A 60 μl sample was taken in a quartz cuvette, which was kept next to the laser head. Measurements were performed at room temperature (25 °C). The change in electrical potential when Aβ interacts with CLC, CHC, and both is shown in Fig. 7b. Preparation of Amyloid: Disease-relevant Alzheimer's beta (1-42) amyloid was prepared under the conformation-specific monoclonal antibody library of the CGG laboratory. For protein expression, the plasmid construct containing an N-terminal His-tag followed


by a solubility tag (NANP) [22] was inserted into Escherichia coli. Purification was performed based on the method described in [51]. The peptide was produced according to the labeling scheme: uniformly 13C, 15N-labeled peptide for the resonance assignment and the collection of molecular restraints for the structure calculation, prepared by adding 13C- and 15N-labeled peptide in a ratio of 1:1. Further, the 13C- and 15N-labeled monomers were diluted in unlabeled peptide in a ratio of 1:3. For fibrillization, the lyophilized material was added to 10 mM NaOH using a sonication bath (three times 30-s sonication at 50–60% power, interrupted by 1 min of cooling on ice). It was then ultracentrifuged for 1 h at 12,600g to remove the large aggregates. To obtain polymorphism, 100 mM phosphate buffer (pH 7.4) was added to the 150 μM recombinant Aβ (1-42) sample at 37 °C, and gently shaking this solution resulted in the fibrillization process [12].

References

1. Canter RG, Penney J, Tsai LH (2016) The road to restoring neural circuits for the treatment of Alzheimer's disease. Nature 539:187–196
2. Ferrari A, Hoerndli F, Baechi T, Nitsch RM, Götz J (2003) β-Amyloid induces paired helical filament-like tau filaments in tissue culture. J Biol Chem 278:40162–40168
3. Kollmer M et al (2019) Cryo-EM structure and polymorphism of Aβ amyloid fibrils purified from Alzheimer's brain tissue. Nat Commun 10:1–8
4. Yagi H, Ban T, Morigaki K, Naiki H, Goto Y (2007) Visualization and classification of amyloid β supramolecular assemblies. Biochemistry 46:15009–15017
5. Greenwald J, Riek R (2010) Biology of amyloid: structure, function, and regulation. Structure 18:1244–1260
6. Ghosh S, Dutta M, Ray K, Fujita D, Bandyopadhyay A (2016) A simultaneous one pot synthesis of two fractal structures via swapping two fractal reaction kinetic states. Phys Chem Chem Phys 18:14772–14775
7. Dorca Y, Matern J, Fernández G, Sánchez L (2019) C3-symmetrical π-scaffolds: useful building blocks to construct helical supramolecular polymers. Isr J Chem 59:869–880
8. Han S et al (2017) Amyloid plaque structure and cell surface interactions of β-amyloid fibrils revealed by electron tomography. Sci Rep 7:43577
9. Serpell LC, Sunde M, Benson MD, Tennent GA, Pepys MB, Fraser PE (2000) The protofilament substructure of amyloid fibrils. J Mol Biol 300:1033–1039
10. Harper J, Wong S, Lieber C, Lansbury P (1997) Observation of metastable Aβ amyloid protofibrils by atomic force microscopy. Chem Biol 6:119–125
11. Riek R, Eisenberg DS (2016) The activities of amyloids from a structural perspective. Nature 539:227–235
12. Walti MA, Ravotti F, Arai H, Glabe CG, Wall JS, Böckmann A, Güntert P, Meier BH, Riek R (2016) Atomic-resolution structure of a disease-relevant Aβ (1-42) amyloid fibril. Proc Natl Acad Sci 113:E4976–E4984
13. Petkova AT, Ishii Y, Balbach JJ, Antzutkin ON, Leapman RD, Delaglio F, Tycko R (2002) A structural model for Alzheimer's β-amyloid fibrils based on experimental constraints from solid state NMR. Proc Natl Acad Sci 99:16742–16747
14. Schmit JD, Ghosh K, Dill K (2011) What drives amyloid molecules to assemble into oligomers and fibrils? Biophys J 100:450–458
15. Braak H, Braak E (1991) Neuropathological stageing of Alzheimer-related changes. Acta Neuropathol 82:239–259


16. Harper JD, Lieber CM, Lansbury Jr PT (1997) Atomic force microscopic imaging of seeded fibril formation and fibril branching by the Alzheimer's disease amyloid-β protein. Chem Biol 4:951–959
17. Eimer WA et al (2018) Alzheimer's disease-associated β-amyloid is rapidly seeded by herpesviridae to protect against brain infection. Neuron 99:56–63
18. Frieden C (2007) Protein aggregation processes: in search of the mechanism. Protein Sci 16:2334–2344
19. Lomakin A, Chung DS, Benedek GB, Kirschner DA, Teplow DB (1996) On the nucleation and growth of amyloid beta-protein fibrils: detection of nuclei and quantitation of rate constants. Proc Natl Acad Sci 93:1125–1129
20. Yu Q, Zhang Q, Liu J, Li C, Cui Q (2013) Inductive effect of various seeds on the organic template-free synthesis of zeolite ZSM-5. CrystEngComm 15:7680–7687
21. Doody RS et al (2014) Phase 3 trials of solanezumab for mild-to-moderate Alzheimer's disease. New Engl J Med 370:311–321
22. Sevigny J et al (2016) The antibody aducanumab reduces Aβ plaques in Alzheimer's disease. Nature 537:50–56
23. Karran E, Hardy J (2014) A critique of the drug discovery and phase 3 clinical programs targeting the amyloid hypothesis for Alzheimer's disease. Ann Neurol 76:185–205
24. Servick K (2019) Another major drug candidate targeting the brain plaques of Alzheimer's disease has failed. What's left. Science. https://doi.org/10.1126/science.aax4236
25. Schenk D et al (1999) Immunization with amyloid-β attenuates Alzheimer-disease-like pathology in the PDAPP mouse. Nature 400:173–177
26. DaSilva KA, Brown ME, Westaway D, McLaurin J (2006) Immunization with amyloid-β using GM-CSF and IL-4 reduces amyloid burden and alters plaque morphology. Neurobiol Dis 23:433–444
27. Dragicevic N et al (2011) Long-term electromagnetic field treatment enhances brain mitochondrial function of both Alzheimer's transgenic mice and normal mice: a mechanism for electromagnetic field-induced cognitive benefit? Neurosci 185:135–149
28. Arendash GW et al (2010) Electromagnetic field treatment protects against and reverses cognitive impairment in Alzheimer's disease mice. J Alzheimer's Dis 19:191–210
29. Ji Jeong Y et al (2015) 1950 MHz electromagnetic fields ameliorate Aβ pathology in Alzheimer's disease mice. Curr Alzheimer Res 12:481–492
30. Ghosh S et al (2015) Resonant oscillation language of a futuristic nano-machine-module: eliminating cancer cells & Alzheimer Aβ plaques. Curr Top Med Chem 15:534–541
31. Ghosh S et al (2018) In-vivo & in-vitro toxicity test of molecularly engineered PCMS: a potential drug for wireless remote controlled treatment. Toxicol Rep 5:1044–1052
32. Sahu S, Ghosh S, Fujita D, Bandyopadhyay A (2014) Live visualizations of single isolated tubulin protein self-assembly via tunneling current: effect of electromagnetic pumping during spontaneous growth of microtubule. Sci Rep 4:7303
33. Sahu S et al (2013) Atomic water channel controlling remarkable properties of a single brain microtubule: correlating single protein to its supramolecular assembly. Biosens Bioelectron 47:141–148
34. Saxena K et al (2020) Fractal, scale free electromagnetic resonance of a single brain extracted microtubule nanowire, a single tubulin protein and a single neuron. Fractal Fract 4:11
35. Sobel E et al (1995) Occupations with exposure to electromagnetic fields: a possible risk factor for Alzheimer's disease. Am J Epidemiol 142:515–524
36. Sienkiewicz ZJ, Blackwell RP, Haylock RG, Saunders RD, Cobb BL (2000) Low-level exposure to pulsed 900 MHz microwave radiation does not cause deficits in the performance of a spatial learning task in mice. Bioelectromagnetics 21:151–158
37. Arendash G et al (2019) A clinical trial of transcranial electromagnetic treatment in Alzheimer's disease: cognitive enhancement and associated changes in cerebrospinal fluid, blood, and brain imaging. J Alzheimer's Dis 71:57–82
38. Schmid SL (1997) Clathrin-coated vesicle formation and protein sorting: an integrated process. Annu Rev Biochem 66:511–548


39. Selkoe DJ, Hardy J (2016) The amyloid hypothesis of Alzheimer's disease at 25 years. EMBO Mol Med 8:595–608
40. Diao Y, Myerson AS, Hatton TA, Trout BL (2011) Surface design for controlled crystallization: the role of surface chemistry and nanoscale pores in heterogeneous nucleation. Langmuir 27:5324–5334
41. Takenaka Y, Miyaji H, Hoshino A, Tracz A, Jeszka JK, Kucinska I (2004) Interface structure of epitaxial polyethylene crystal grown on HOPG and MoS2 substrates. Macromolecules 37:9667–9669
42. Kim DW, Kim SJ, Choi HO, Jung H-T (2016) Epitaxial crystallization behaviors of various metals on a graphene surface. Adv Mater Interfaces 3:1500741
43. https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-researchers-find-way-to-preventaccumulation-of-amyloid-plaque-a-hallmark-of-alzheimers-disease/
44. Gelinas DS, DaSilva K, Fenili D, George-Hyslop PS, McLaurin J (2004) Immunotherapy for Alzheimer's disease. Proc Natl Acad Sci 101:14657–14662
45. Campioni S et al (2014) The presence of an air-water interface affects formation and elongation of α-synuclein fibrils. J Am Chem Soc 136:2866–2875
46. Chu DS, Pishvaee B, Payne GS (1996) The light chain subunit is required for clathrin function in Saccharomyces cerevisiae. J Biol Chem 271:33123–33130
47. Zein I, Wyman J (1931) Studies on the dielectric constant of protein solutions. J Biol Chem 76:443–476
48. Fröhlich H (1968) Long-range coherence and energy storage in biological systems. Int J Quant Chem 2:641–649
49. Saxena K, Kumar M, Daya KS, Bandyopadhyay A (2019) Detection of millimeter wave properties of beta amyloid using dielectric filled truncated cylindrical waveguide. In: 2019 URSI Asia-Pacific radio science conference (AP-RASC), IEEE, pp 1–4
50. Gramse G, Schönhals A, Kienberger F (2019) Nanoscale dipole dynamics of protein membranes studied by broadband dielectric microscopy. Nanoscale 11:4303–4309
51. Harris FE, Alder BJ (1953) Dielectric polarization in polar substances. J Chem Phys 21:1031–1038
52. Krzysztofik WJ (2017) Fractals in antennas and metamaterials applications. In: Brambila F (ed) Fractal analysis-applications in physics, engineering and technology. Published by INTECH Open Science, Rijeka, Croatia, Print ISBN 978–953
53. Walti MA, Orts J, Vögeli B, Campioni S, Riek R (2015) Solution NMR studies of recombinant Aβ (1-42): from the presence of a micellar entity to residual β-sheet structure in the soluble species. ChemBioChem 16:659–669
54. Braginsky V (1980) Quantum nondemolition measurement. Science 209(4456):547–557

How Does Microtubular Network Assists in Determining the Location of Daughter Nucleus: Electromagnetic Resonance as Key to 3D Geometric Engineering Pushpendra Singh, Komal Saxena, Parama Dey, Pathik Sahoo, Kanad Ray, and Anirban Bandyopadhyay

Abstract A dividing cell finds precisely the future 3D location to put its daughter cell by sensing the environment far outside its cell boundary. Making such a decision begins at a sub-molecular level of a pair of centrioles, eventually regulating the intricate geometries of a large life form. Thus far, optical imaging and molecular expression delivered little information. Here using theory and experiment, we propose that a scanning dielectric microscope (SDM) may predict the direction where parents would put their daughter with 65% (SD ± 5%) accuracy. The positioning mechanism of a microtubule organization center was monitored live using SDM of the 3D matrices of the hippocampal neuron and a HeLa cell network. We theoretically analyzed electric and magnetic field distributions at resonance for the relative 3D orientations of a pair of centrioles within a cell and also centrioles of the neighboring cells, beyond the optical range. Then microwave imaging revealed that all neighboring cell-centrosomes form a network of coupled vibrations that decides the left–right symmetry, symmetric, and asymmetric cell division. Together with Maxwell’s equation solver, SDM delivers deep insight into the multi-channel signal transmission in biomaterials beyond the optical microscope. For the first time, we combined two P. Singh · K. Saxena · P. Dey · P. Sahoo · A. Bandyopadhyay (B) International Center for Materials Nanoarchitectronics, Center for Advanced Measurement and Characterization, National Institute for Materials Science, MANA, RCAMC; 1-2-1 Sengen, Tsukuba Ibaraki-3050047, Japan e-mail: [email protected] K. Saxena Microwave Physics Laboratory, Department of Physics and Computer Science, Dayalbagh Educational Institute, Uttar Pradesh, Dayalbagh Agra-282005, India P. Dey Cancer Biology Laboratory and DBT-AIST International Centre for Translational and Environmental Research (DAICENTER), Department of Biosciences and Bioengineering, Indian Institute of Technology Guwahati, Assam 781039, India P. Singh · K. Ray Amity School of Applied Science, Amity University Rajasthan, Kant KalwarJaipur, Delhi Highway, Jaipur, Rajasthan NH-11C, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_26


widely varied physical characterization tools to understand intelligence in biological systems. Keywords Centriole · Centrosome · Cell division · Neuron · Sperm axoneme · Systems biology · Transmittance and reflectance spectroscopy · Electromagnetic resonance

1 Introduction

How a cell determines the coordinates of its daughter cells, once cell division is complete, has always been at the forefront of debates. Moreover, no tool exists to see live how a cell makes such a decision. A zygote's program for building the final human life form [1] is written in the genes, but the frontier element that instructs a daughter cell is difficult to pinpoint [2]. In the last phase of a cell division, the centrosome was thought to act as the key microtubule-organizing center (MTOC), since an ordered structure of pericentriolar material (PCM) expands around the centrioles and nucleates the microtubules to form a spindle [3–7]. In a centrosome, a pair of centrioles arrange perpendicularly. The arrangement defines the axis of cell division and determines the plane of cytokinesis. Maybe it also decides the daughter's coordinates? During a cell division, a centriole should divide once, and only once [8]; if not for the spindle, why reorient them? At that point, could we speculate that a centrosome without PCM holds the key to deciding the coordinates of its daughter cell? The speculation gets further support since the external environment internally transforms the cells by editing the centriole's position and orientation [9]. Here we artificially create these structures, solve Maxwell's equations to find whether there is any electromagnetic energy exchange beyond the optical frequency domain, and finally experimentally verify the predictions to deliver insight. A program to pre-decide the 3D orientation of cells before a cell division is not required if all daughter cells arrange as an ordered lattice; i.e., varying decisions are essential for building a complex design [10]. It has been theoretically proposed that the geometry of a cellular cavity is changed [11] to shift a centrosome to the central location of the modified cavity. Often, in reality, this is found not to be true. Then the question arises: how are such shifts fine-tuned in a cell? The entire plane of a tissue builds up an intrinsic polarity, namely a planar cell polarity, PCP [12]. The centrosome is moved to a precise location with a proper orientation by microtubule and actin filaments [13]. The challenge is to unveil a sensor that precisely determines the shift needed for a global event when all the neighboring cells coordinate beyond their boundaries, and then to find a mechanism that relocates the centrioles and/or their adjunct components. However, the process varies so much from cell to cell that finding a single common mechanism invariant across cellular systems would be a critical issue. Instead of a biological point of view, here we concentrate only on the biophysical aspect of the event.


In the centriole positioning system, a large-scale construction program has to be written as a minor shift in the 3D angular orientation, but no proposal has been made thus far as to how. The centriole's triplets are tilted by 40° with respect to the longitudinal axis of the cylinder (Fig. 1a). Therefore, in a centriole, a longitudinal force would naturally trigger a helical flow of an input field. Some researchers argue that it is a purely mechanical interaction of various components that orients the centrioles perpendicularly [14, 15]. To find a generic protocol, we study various MTOCs. We meticulously built the detailed structures part by part in computer simulation software and artificially pumped a noise-like input electromagnetic signal to find whether such structures convert it into a biologically meaningful frequency spectrum.

Fig. 1 a A schematic of "PCM around the centriole" and of the "centriole" is shown in the top left and top right of the panel, respectively. Out of its 27 microtubules, one is zoomed below. A cross section of a microtubule (PDB file, which was replicated in CST) is shown to the right along with the tubulin protein (PDB structure). b The splitting and periodic oscillation of the electromagnetic field within the biomaterial appears when the biomaterial is triggered by an EM signal of suitable frequency. c Global positioning system when three cells undergo first- and second-generation division


1.1 The Natural Magnetic Field of Biomaterials

Biomagnetism started in 1969 [16]; still, the magnetic study of biomaterials is mostly limited to applying a high magnetic field, forcing the spins to align and changing their response. However, even without a high magnetic field [17], heartbeat and breathing generate waves of low magnetic fields (10⁻¹⁰ T). A wavelike flow of magnetic field is mostly generated in two ways: biomaterials may treat the electric and magnetic parts of an electromagnetic field asymmetrically, or a periodic flow of ions may produce it. The topology of the structures could regulate the beating of charges and generate a wavelike flow of electric and magnetic fields [18–20]. In optical vortex studies, the electric and magnetic parts are made to interfere separately by manipulating the surface topology [21]. Similar structural asymmetries in biomaterials split the electric and magnetic parts of an electromagnetic wave and activate distinct regions of the structure, initiating a typical dynamic flow of fields. Then, one may discover a universal language to interact with a trillion cells [22]. A magnetic vortex could form around a solid-state defect site [23]; here too, since the studied biomaterials are non-magnetic, topological asymmetry generates an irregular field distribution.

1.2 When There Is No Centriole: PCM Dynamics Is Similar to the Centriole; It Reflects and Transforms, but Does Not Destroy, Centriole Dynamics

In a living cell, centrioles are not alone; they are with PCM. The important concern is that while centrioles are ~500 nm, the PCM surrounding them is an ordered architecture (in sharp contrast to earlier assumptions of a disordered mass) that extends up to 2 μm. We carried out the study below to argue that the PCM dynamics is integrated within itself. There is no manipulation of the electric and magnetic field distributions by PCM beyond its own structure, unlike what we see in the centriole. Later, in a separate work, we will report in detail how PCM and a primitive centriole dummy resonate. Here is a brief discussion on how PCM and centriole-like materials resonate. It is not only the electric field clocking but also the magnetic field clocking that selects the location of a centriole's counterpart. Every solid-state structure that is a dielectric material resonates electromagnetically. All biomaterials found in living systems are either organic or organo-metallic, i.e., dielectric resonance would always be there. Both centriole dummies, i.e., PCL and PCM, are ordered structures. This new finding suggests that they engage in organized information processing. For this reason, it became a difficult task for us to confirm the exact role of the centriole pair, the centriole-dummy pair, the dummy-dummy pair, or the PCM. We confirmed that PCM reflects and transforms the angular coordinate mechanism of centrioles; it does not hinder or destroy centriole operation. What if there is no centriole, or only one centriole, e.g., in mitosis? Plants with no centriole use an eightfold-symmetry protein complex, HAUS or augmin. These are


centriole adjuncts or dummies for a unique MTOC. Proximal centriole-like (PCL) material acts as a dummy centriole, or the material beneath the nucleus may form a centriole adjunct at a site outside the centriole, yet even orients it in a strict 3D angular geometry. Similarly, PCL organizes the electron-dense PCM, forming a centrosome equivalent and keeping the relative angular positions strictly defined. Since a spindle decides the position and the orientation of the cell cleavage during cytokinesis, a centrosome reorients with the filaments synchronously to fine-tune cellular growth. Finally, spermiogenesis could change the centriole structure (basal body), making a flagellar axoneme, while γ-tubulin among the PCM proteins forms a nucleus and places the microtubules in alignment with the direction of the electric field generated by the PCM. We confirmed theoretically the formation of energy traps in the microtubules of the sperm axoneme. While constructing a centriole, we have included a dielectric resonator version of multiple proteins other than tubulin. A dimer is made of a pair of SAS-6 proteins, and an oligomer is composed of nine such dimers. Using the Bld12p protein, a spoke of the wheel is formed by nine such oligomers. Around this scaffold, nine triplets of microtubules reside to form the centriole. The dictation of SAS-6 to form a centriole is a recent observation. Note that while creating an artificial analog of the centriole, we have included all these components. A centriole transforms into basal bodies that, singularly (not as a pair), assemble cilia and flagella, as in protists like Paramecium, sperm cells (flagella), and cilia on the epithelia of the respiratory and reproductive systems. Even the outer segments of rod cells in the eye are modified cilia. The flagellar axoneme is the key to the rhythmic beating-like movement as ATP supplies energy. The structure of PCM: PCM has recently received attention because high-resolution imaging revealed ordered architectures, in contrast to the popular belief that this annular cylindrical structure, made of around 100 proteins around the centriole, is amorphous. PCM1, pericentriolar material 1 protein; BBS4, Bardet-Biedl syndrome 4 protein; TNKS, Tankyrase-1; TUBE1, tubulin epsilon chain; Lck, proto-oncogene tyrosine-protein kinase LCK; and TNKS2, Tankyrase-2 are some of these proteins. A PCM has five primary structures. Closely adjacent to the centriole, Cep135 and Cp110 type proteins reside: Cp110 prefers to create an annular ring at the distal part of centrioles, while Cep135 forms an annular cylinder at the proximal part, as a cartwheel that spirals radially outward. They wrap around the centriole with a diameter of 200 nm (layer 1). Note that the proximal part goes for centriole division; the daughter forms around this part. In the next layer (a concentric cylindrical shell) resides Sas4, spiraling out from the vicinity of the microtubule triplets of the centriole, following the nine-fold symmetry. We also find Spd2 and Polo proteins there (layer 2, 380 nm in diameter). Dplp or Plp proteins make 12–15 nm wide fibrils that spiral out of the vicinity of the microtubule triplets of the centriole, following the nine-fold symmetry but ending in the next layer (layer 3, 540 nm in diameter). The third layer contains Asl, Plk4, Cnn, and γ-tub proteins. A high concentration of Plk4 on the mother centriole, and in between the mother and daughter centrioles, suggests that during centriole division it is shared between the two: the mother, with a different distal protrusion, takes Plk4 first and then


leaves it to the other pair. The empty space left by the Plp coil of coils is filled by the Cnn protein, which makes 15–20 nm wide fibrils in the form of a coil of coils (layer 4, 820 nm in diameter). In the third and fourth layers, γ-tubulin forms ring centers, or γ-TuRC, which connect to the negative end of a microtubule, which again spirals out radially. We built artificial structures replicating the ordered layers and complex gel-like scaffolds in the PCM, which are connected to the centriole but extend up to 1 μm. A large coiled-coil structure is built using the proteins and the electron-dense nature of most PCM components. Here we do not describe the results but mention the study only to give the reader a glimpse that the PCM, which massively surrounds the centrioles, remains primarily isolated from this global positioning mechanism; the PCM part accounts for a secondary effect. A perspective on stem cell asymmetry: During growth or regeneration, the stem cells divide symmetrically, producing two similar copies of the original cell. However, in an asymmetric division, a stem cell makes one daughter as a replica, while the other daughter is programmed to differentiate into a non-stem cell; the two daughters are distinct. A part of a tissue called a stem cell niche can determine whether a stem cell division is asymmetric or symmetric by regulating the orientation of the mitotic spindle. We have tried to capture the essence of asymmetry in our study. The current study does not include a stem cell study but maps the asymmetric feature so well that we believe it might help in future stem cell studies. Centrosome positioning: The literature is rich in debates on centrosome positioning. The centrosome moves near the neurite that grows into an axon. Defects in the spindle orientation lead to small brain size. The skin's stratified cell layers are generated by asymmetric cell division. The spindles are aligned by the centrosome positioning. Several organs such as blood vessels, lung, and kidney develop epithelial tubes that branch into a complex network, and oriented cell division regulates such morphogenesis of epithelial tissues. Regardless of the direction of blood flow, the centrioles of the epithelial cells of blood vessels are oriented toward the heart, as if the heart were a satellite and all the blood vessels were signal receivers. The breaking of left–right symmetry in mammalian embryos is driven by the direction of fluid flow through mechanosensory cilia. Similarly, the flow of urine is sensed in the kidney, as the orientation of centrioles allows sensing the direction of a strain. The angular separation (90°) between mother and daughter centrioles changes, directing the tail of sperm. Embryos undergo a dramatic change in shape and symmetry during their early growth; in all instances, the polarization of the centrioles shifts significantly. In stem cell research, reorientation of the mitotic spindle plays a crucial role in centrosome positioning and orientation.


1.3 Basic Mathematics to Support the Splitting of the Electromagnetic Field at Resonance

Both magnetic and electric parts are symmetric in an electromagnetic wave; both are components of a single rank-2 tensor, so they should be symmetrically distributed all over a material. For an electric field, the concept of "charge" exists as an invariant of a gauge field that can be deduced exactly in the Yang–Mills formulation through gauge symmetry and Noether's theorem; no such invariant exists for the magnetic field. Although the debate was settled in the 1820s by Kelvin, radical proposals have continued to grant the magnetic field a status equal to that of the electric field. Why does the current flow that is fundamental to an electric field generate a magnetic flux, and why not the phase flow alone that is an integral part of a magnetic field? Regulating a magnetic field by an electric field is achieved, but no law or phenomenon exists to do the opposite, or to regulate a magnetic field without flowing a current. The reason is embedded in Maxwell's equations, wherein a magnetic monopole does not exist. It means that if we can cleverly design the inside parts of a circular path making a cylinder or sphere, it is possible to let the electric and magnetic moment-induced waves interfere and redistribute the electric and magnetic parts to dominate two distinct regions. Below, we explain the concept using equations.

The energy density of an electromagnetic wave is given by $U = \frac{1}{2}\left(\varepsilon_0|E|^2 + \frac{|B|^2}{\mu_0}\right)$; in an assembly of microtubules, integrating over all dipoles, $U = \frac{\varepsilon_0}{2}\int \left|\sum_i E_{dipole,i}\right|^2 d^3r$, and a similar expression would hold for the magnetic dipoles. Now, the expression for the electric dipole field $E_{dipole}$ is

$$
E_{dipole} =
\begin{cases}
\dfrac{1}{4\pi\varepsilon_0 r^{3}}\left[\,3(\mathbf{p}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}}-\mathbf{p}\,\right], & r > R\\[2mm]
-\dfrac{1}{3\varepsilon_0}\,\dfrac{\mathbf{p}}{(4/3)\pi R^{3}} = -\dfrac{\mathbf{P}}{3\varepsilon_0} \equiv P_i, & r < R
\end{cases}
$$

and for the magnetic dipole,

$$
M_{dipole} =
\begin{cases}
\dfrac{\mu_0}{4\pi r^{3}}\left[\,3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}}-\mathbf{m}\,\right], & r > R\\[2mm]
\dfrac{2\mu_0}{3}\,\dfrac{\mathbf{m}}{(4/3)\pi R^{3}} = \dfrac{2\mu_0\mathbf{M}}{3} \equiv M_i, & r < R
\end{cases}
$$

Outside the sphere, the values of electric and magnetic fields remain the same. However, inside, there is an inverse sign for the moments, so the nature of electric and magnetic fields behaves differently even within the domain of Maxwell’s law


($P_i = M_i$, when $r < R$). Now, instead of a spherical shape, if we tune the topology such that the rates of change of the moments, $dP_i/dt$ and $dM_i/dt$, are asymmetric as the electromagnetic energy flows inside the structure, we can use the differential nature of the electric and magnetic moments for various applications. Sound waves resonate in an acoustic cavity; similarly, an electromagnetic wave needs an EM cavity of length $L$ ($E_x = E_0 \sin(n\pi x/L)$), where the wavelength ends at the boundary.
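A small numerical check (Python; the polarization and magnetization values are arbitrary placeholders) of the sphere-interior expressions above, showing the opposite signs and the 1/3 versus 2/3 prefactors of the electric and magnetic contributions that the argument relies on:

```python
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 4e-7 * np.pi        # vacuum permeability, H/m

# Illustrative uniform polarization P and magnetization M inside a sphere (r < R).
P = 1e-6   # C/m^2 (placeholder)
M = 1e3    # A/m   (placeholder)

# Interior fields of a uniformly polarized / magnetized sphere:
E_in = -P / (3 * eps0)     # electric field opposes the polarization (factor -1/3)
B_in = 2 * mu0 * M / 3     # magnetic field is parallel to the magnetization (factor +2/3)

print(f"E_in = {E_in:.3e} V/m")
print(f"B_in = {B_in:.3e} T")
# The sign reversal and the 1/3 vs 2/3 factors are the asymmetry exploited when the
# topology makes dP_i/dt and dM_i/dt evolve differently inside the structure.
```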

1.4 Centriole Has the Closed Loop and Spiral (in/out) Pathways of the Electric and Magnetic Fields, Respectively

Since PCM does not affect the splitting of electromagnetic fields in the centriole (see text 1.2), all the theoretical models ignore PCM and we study the pure centriolar part. For an imaginary hollow sphere, both parts of the electromagnetic field are identical, and the resultant dipolar field is generated there. Mathematically, the electric and magnetic moments depend on each other; the electric moment is changed by the magnetic moment. However, both moments reverse their sign inside the dielectric resonator (centriole) (see text 1.3). In a dielectric resonator antenna [24] ($\lambda/\sqrt{\varepsilon}$), the piecewise linear distribution of elementary dipoles often varies the potential [25] with $\log Q$, not $Q^2/r$ ($Q$ = charge). Here, in a standalone non-conducting material, the reflecting boundary is not present. So, the static electromagnetic field's energy prefers a closed-loop path, and virtually a feedback loop is visible. Such a continuous feedback loop offers periodic oscillation with different amplitudes. From the above calculation, the electric and magnetic moments affect 1/3 and 2/3 fractions of the total energy, respectively. These fractions of energy lead to the closed-loop and spiral behavior, respectively. However, their values vary according to the topological (cavity) space. Here, we found a possibility of energy flow in a spiral manner.

1.5 Outline of the Current Study

Here we have comparatively studied the centriole and four centriole adjuncts, namely, the sperm axonemes of Sciara coprophila and of three gall midge flies, to show that all the components split the electric and magnetic parts of an electromagnetic field at resonance. Then the electric and magnetic parts periodically oscillate, the two fields locally at two different sites (Fig. 1b). Isolated electric and magnetic fields on the membrane have already been studied [26]. We removed PCM (Fig. 1a) from all studies since it does not affect the splitting of the centriole fields (see text 1.2), and built five MTOC periodic oscillation systems without PCM. All five MTOCs fail to run the periodic magnetic oscillations if the centrioles or adjuncts are not paired perpendicularly.


The spontaneous formation of infrared periodic oscillations was imaged using dielectric resonance microscopy [27–29] in the cultured 3D cellular matrices, for neurons and HeLa cells. Finally, we verified it in three cases where there is no centriole. We imaged live that, outside the physical boundary of a centriole adjunct, an energy trap is built for tuning the pairing part. A significant change was observed in the centriole orientation before a cell division and before axonal branching. Two primary events, sensing the distant neighbor's centriole-PCM or MTOC, and editing the neighbor's coordinates and orientation therefrom, stem from the resonant coupling of distant MTOCs. Theoretically, we observed synchronous periodic oscillations of the magnetic fields between three distant MTOCs acting as a single unit, and then in the dielectric image we found its evidence. The finding suggests that the centriole adjuncts, if paired, may act as a sensor to couple with similar pairs in distant cells and change the relative orientations of all centriole adjuncts acting as one unit (Fig. 1c).

2 Methodology

2.1 Theoretical Study Design

Asymmetric treatment of electric and magnetic vectors at electromagnetic resonance: If we consider any spiral or fractal path of the centriole and centriole adjunct to be a sum of a repeated local symmetry, e.g., an array of disks, the lensing of an electromagnetic wave (just as a lens focuses a light beam) [30] is possible. Unidirectional magnetic flux results from the lensing behavior of an electromagnetic field. The dielectric resonator array acts as the stack of disks. Then, the charge is constant as $\log Q$; correspondingly, we find $e(z) = e_+ e^{ikz} + e_- e^{-ikz}$, $k = \omega\sqrt{\mu\varepsilon}$. So far, extensive studies have been performed on the charge's $\log Q$ distribution, but the studies regarding the spiral charge distribution are not sufficient. It is useful because when an electromagnetic wave passes through such a medium, several waveforms pack within a very small angular width, and the phenomenon is triggered by an array of resonators (here, for all systems, our elementary resonators are microtubules), which could even lead to a phase discontinuity or singularity [31]. Then, a fraction of the transverse component adds to the longitudinal component. Thus, $B_z = \frac{i}{k_c}\left(\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y}\right) = \frac{i}{k_c}\, e(z)\, \nabla_t \times \mathbf{E}_t$ [32]; when $r < R$ (see text 1.3), this topological factor contributes to $P_i = M_i$. So, the topological factor not only distributes the charge but also controls the electric field flow, and the magnetic field is treated very differently by it. The electric field follows the transverse closed loop while the magnetic field exhibits the longitudinal spatial distribution pathways (radially in/out). These are the key to effective EM lensing. Thus, the magnetic and electric fields are very different in nature if the geometry of the material is spiral. At the resonance frequency, the oscillating nature of the field changes into a static nature $H_0$ (as we know, $H_{eff} = k(H_0 - \omega/\gamma)$; then at resonance $H_0 = \omega/\gamma$ and the angle of rotation or phase $\theta = \gamma H_1 t$).
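A minimal finite-difference sketch of the relation quoted above (Python; the grid and the toy mode shape are assumptions made only to illustrate how the transverse curl of E yields the longitudinal B_z up to the i/k_c prefactor):

```python
import numpy as np

# Synthetic transverse electric field of a TE-like mode on a unit square (assumed).
nx, ny = 128, 128
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
Ex = np.cos(np.pi * X) * np.sin(np.pi * Y)
Ey = -np.sin(np.pi * X) * np.cos(np.pi * Y)

# z-component of the transverse curl: dEy/dx - dEx/dy, via finite differences.
dEy_dx = np.gradient(Ey, x, axis=0)
dEx_dy = np.gradient(Ex, y, axis=1)
curl_z = dEy_dx - dEx_dy

k_c = np.pi                  # assumed cutoff wavenumber of the toy mode
Bz_profile = curl_z / k_c    # proportional to B_z (the factor i is a 90-degree phase)
print("max |B_z| (arbitrary units):", np.abs(Bz_profile).max())
```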


Rebuilding the MTOCs for the electromagnetic resonance study, the difference from conventional studies: The literature on conventional dielectric resonators treats a single geometric shape as a giant biomaterial; in that case, the results can deliver only a few fundamental frequency peaks. Here, sub-nanometer alpha-helices are built first to make a single tubulin protein, then several such proteins are used to build the microtubule (Fig. 1a), and nine such triplets of microtubules are arranged to build a centriole [33]. A dimer is made of a pair of SAS-6 proteins, an oligomer is formed by nine dimers, and a spoke of the wheel is formed by nine oligomers using the Bld12p protein. Computer simulation technology (CST) software is used to build the SAS-6 and Bld12p proteins from scratch, like the tubulin protein. Here, we considered the elementary curved geometries (sphere, cylinder, line, etc.) as cavities and dielectric resonators to mimic the biological structure at the nanoscale. To build the secondary structures of the proteins, the α-helices and β-sheets, we took helical tapes and strings, assigned their dielectric properties in CST, and attached a port to pump energy in the simulator. Solving Maxwell's equations delivers a band of resonance frequencies and a composition of dielectric constants as a function of the resonance frequency. While encoding the simulation parameters, we kept the polarization properties of microtubules intact as observed in the experiments, since adding polarity to the cells is a fundamental feature of biological systems [34]. Therefore, we kept the microtubules tilted at 40° in a centriole and incorporated this twist into our model. Finally, two centrioles orient such that the vector direction is additive; we also preserve this property. Note that spindle checkpoints are independent of the centriole or its dummy [35] and depend on gamma-TuRC. Hence, we eliminate PCM and study only the centriole or its adjunct for finding a global positioning system (Fig. 1c). Theoretical simulation protocol, centriole assemblies as a wireless coordinate system: The theoretically built microtubules are assembled in CST to create the following components [35–39]: the axonemes of three gall midge flies, the axoneme of Sciara coprophila, the centriole, and cilia and flagella. Inside a flagellum and a cilium, the microtubules are arranged in a characteristic pattern known as the 9 × 2 + 2, called the axoneme (Fig. 2). When a pair of centrioles make a centrosome, a single centriole triggers a doublet of microtubules (not a triplet) growing a cilium, often recognized as the cell's antenna [40]. However, how long it would grow is a mystery, though we know the mechanism [41]. We simulated the reflectance and transmittance coefficients S11, S12, S22, and S21, and the electric and magnetic field distributions at resonance frequencies, for all five assemblies by shifting the location of the source (called a port) that pumps energy and of the sink at various parts [42] of the microtubule assemblies. We detected the proper locations of the electromagnetic waveguide ports along the sperm axonemes where we get the maximum EM resonance response. It confirms the periodic direction of the resonating fields, i.e., the field intensity direction varies relative to the phase change (0° to 360°). Figures 2 and 3 only report interesting port compositions where we get significant results. The detailed study is described in Figs. 4, 5, and 6.
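The resonance frequencies reported in the following figures are read off as dips of the exported reflection coefficient; a rough post-processing sketch (Python with NumPy/SciPy) is shown below. The file name `s11.txt`, its two-column layout (frequency in THz, S11 in dB), and the 3 dB prominence threshold are assumptions for illustration, not the authors' actual workflow:

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical CST export: column 0 = frequency (THz), column 1 = S11 (dB).
data = np.loadtxt("s11.txt")
freq_thz, s11_db = data[:, 0], data[:, 1]

# Resonances appear as dips in S11, so search for peaks of the negated curve.
dips, _ = find_peaks(-s11_db, prominence=3.0)   # keep dips deeper than ~3 dB (assumed)
resonances = freq_thz[dips]

print("resonance peaks (THz):", np.round(resonances, 1))
print("gaps between neighboring peaks (THz):", np.round(np.diff(resonances), 1))
```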


Fig. 2 Five columns for five microtubule assemblies. The biological structures of the centriole, the Sciara coprophila axoneme, and the Midge fly (I), (II), and (III) axonemes are shown by (1), (2), (3), (4), and (5), respectively, in the first row. The transverse mode is set up by putting the energy supply source at the side of each structure. The second row shows the electric (lower panel) and magnetic (upper panel) field distribution within the structure at the resonance frequencies 153.7 THz, 77.5 THz, 80.0 THz, 83.0 THz, and 77.5 THz. The realistic structures of the sperm axonemes (Centriole—1, Sciara coprophila—2, Midge fly (I)—3, and Midge fly (III)—4) are shown in the third row. The black and red lines depict the electric and magnetic field lines, respectively. Electric and magnetic field lines prefer closed loop and periodic oscillation paths, respectively. The lower half of the bottom panel depicts a detailed field line pattern with the structures


Fig. 3 The left of panels a and b shows the resonance spectrum (S11 (dB) versus frequency (THz)) for all five types of sperm axoneme structures, where the waveguide ports are positioned at the bottom and at the side of the structures, respectively. The color code for curves and text, as noted for the structure names, is the same in panels a and b. On the right side of panels a and b, the spatial energy distribution for the centriole is shown as a function of phase at the resonance peak 153.1 THz (noted in black). The spatial energy distributions of the Sciara coprophila axoneme and the midge fly I, II, and III axonemes are depicted at their resonance peaks (for more details, see Figs. 4, 5). Here, M and E denote the magnetic and electric fields, respectively, and high energy density is shown by arrows. Technical details for simulating the structure are provided in the captions of Figs. 4 and 5. From the magnetic field distribution of panel a (right), the percentage of the area covered by a high-intensity field from the center of the structure is plotted (red). c From the electric field distribution of panel a (right), the angular position of maximum intensity is plotted (blue). d From the electric field distribution of panel a (right), the linear domain along the centriole diameter where the intensity is maximum is plotted, and how it flips between a limiting angular range. e Magnetic field distribution (normalized to one) on the circular surface (200 nm diameter) of a centriole at the junction between a pair of centrioles. The plane of measurement is located 40 nm above the centriole top surface

The key periodic oscillations of the four types of microtubule assemblies are (i) Sciara coprophila axoneme—a clockwise spiral rotation of a pair of microtubule sheets; (ii) Midge fly I axoneme—an anticlockwise rotation of a microtubule sheet and a clockwise rotation of a small part of it, i.e., two oppositely directed rotations in one system but in two different parts of it; (iii) Midge fly II axoneme—a fusion of oppositely rotating spirals; (iv) Midge fly III axoneme—a clockwise spiral. A spiral is also embedded in a centriole. Hence the microtubule assemblies report the spiral


Fig. 4 (Transverse mode): a The resonance spectrum for all reported structures when the port is placed in the transverse mode shown in Fig. 2 (source = port). The color codes are noted in the plot. The energy distributions at the resonance peaks for the centriole (153.1 THz, blue), Sciara coprophila axoneme (77.5 THz, red), and midge fly I, II, and III axonemes (80 THz (I, green), 83 THz (II, purple), 77.5 THz (III, sky)) are depicted in the remaining panels b, c, d, e, f. M = magnetic field, E = electric field; the corresponding phase is noted below each plot, and the arrow denotes the highest energy density point. Simulation details: a Selected frequency region = 50–150 THz, waveguide port dimension = 7.9 × 30 nm², resonance frequency = 153.1 THz; b waveguide port dimension = 0.14 × 0.042 nm², resonance frequency = 77.5 THz; c waveguide port dimension = 0.08 × 0.042 nm², resonance frequency = 80 THz; d waveguide port dimension = 0.08 × 0.042 nm², resonance frequency = 83 THz; e waveguide port dimension = 0.16 × 0.042 nm², resonance frequency = 77.5 THz

features along a clockwise and an anticlockwise direction (see Figs. 4, 5, and 6). Both the centriole and the axoneme transmit signals in the THz frequency range, near the infrared region; this is consistent with previous NIR resonance studies [42, 43]. Here, we have detected the resonance in the 5–6 THz frequency range because living cells have thermal noise at room temperature, ~300 K. The 3D EM field distribution and phase modulation are noted at those key peaks in the THz frequency domain that split the energy (see Fig. 1b) and change its phase within the structure. Picosecond periodic oscillations of a THz noise source are integrated by the modulation of microsecond periodic oscillations due to the geometric features of the biomaterials (Fig. 3c). Key features for simulating a centriole assembly, a microtubule-organizing organelle: The γ-tubulin distributes like a cloud around a centriole such that it begins


Fig. 5 (Longitudinal mode): a The simulated device for the centriole, the axoneme of Sciara coprophila, and the Midge fly, with the energy source applied at the bottom or cross section (the black box is the port; we see the nanowire from the top). b The resonance spectrum for all reported structures; S11 as a function of the frequency applied to the port in the longitudinal mode. The resonance peak for the centriole is 185.49 THz (blue), for the axoneme of Sciara coprophila 142.98 THz (red), and for the axoneme of the Midge fly 162.88 THz (green). c The directionality of the single-valued function along every reported structure is shown in the panel. The observed directivity values at the resonance frequencies are 4.34 dBi, 5.41 dBi, and 3.35 dBi. The remaining panels d, e, f show the operation of the electric and magnetic fields at the resonance frequency. M and E are symbols of the magnetic and electric fields, respectively. An arrow indicates the high energy density region. Simulation details: selected frequency region = 0–500 THz; a waveguide port dimension = 600 × 600 nm², resonance frequency = 185.49 THz; b waveguide port dimension = 0.80 × 0.9 nm², resonance frequency where we plotted the magnetic and electric field distribution = 142.98 THz; c energy source dimension = 0.85 × 0.85 nm², resonance frequency where we plotted the magnetic and electric field distribution = 162.88 THz

nucleation of the microtubule; thus, a "daughter" centriole forms adjacent to, and at right angles to, the parent centriole (distal). Higher plants do make spindles without centrioles. Therefore, we look into the radiation pattern 360° around the artificial analog of the centriole. Different compositions of the source and the sink: We describe two kinds of source and sink compositions for all five structures studied here: a transverse mode, where energy is applied perpendicular to the length of the structure (Fig. 4; port at one side, on the cylindrical surface, just like Fig. 3a), and a longitudinal mode, where the energy


Fig. 6 Smith chart for all five microtubule assemblies

is applied along the length of the structure (Fig. 5; energy supplied from the bottom or cross-sectional area, just like Fig. 3b). Electromagnetic energy applied from the side of the centriole and sperm axoneme for studying the transverse mode (electric field E, magnetic field M, and directivity): Figure 4a shows the combined resonance plot for all the structures. We can see that a centriole has three sharp resonance peaks, at 58 THz, 103 THz, and 153 THz, exhibiting a triangular topology. It means that if we change the conformation of the centriole artificially by creating defects, the intensities of the three peaks change; however, the relative frequency gaps between the resonance peaks do not change. This suggests that the phase gap between the three peaks of a centriole remains constant. While the centriole has a ~50 THz gap in the frequency band, the Sciara coprophila axoneme has a ~20 THz gap between its three peaks; thus, if we change the symmetry of the structure manually, the distinction of the gaps disappears. For the three axonemes of the Midge flies, there is a pair of peaks, primarily separated by a 25–30 THz band. Thus, the centriole stands apart with a complex information processing capability; the Sciara coprophila axoneme has a pair of spiraling microtubule sheets and, with three peaks, it is more complex than the Midge fly axonemes but less complex than the centriole. Figure 4b–f shows the electric and magnetic field distributions at a particular resonance frequency with a 20° phase gap over a complete phase cycle. The distributions are plotted at a 20° phase difference from 0° to 360°.
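The gap analysis for the three centriole peaks quoted above reduces to simple arithmetic; a short check (Python, using only the 58, 103, and 153 THz values stated in the text):

```python
import numpy as np

# Centriole resonance peaks reported for the transverse-mode spectrum (THz).
peaks = np.array([58.0, 103.0, 153.0])

gaps = np.diff(peaks)             # frequency gaps between neighboring peaks
ratios = peaks[1:] / peaks[:-1]   # ratios of neighboring resonance frequencies

print("gaps (THz):", gaps)                     # ~45 and ~50 THz spacing
print("neighbor ratios:", np.round(ratios, 2))
```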


In Fig. 1b (centriole), one could observe that a line of high magnetic field rotates all around; however, we have taken a snapshot to reveal another observation. While the high-intensity magnetic field line rotates, the intensity of the line blinks too. Two rhythms run in parallel in a centriole. In Fig. 4c, the Sciara coprophila axoneme, the magnetic field flips between the two parts of the structure as the phase changes from 0° to 360°. The linear oscillatory motion along the diameter of the axoneme's cross section is visible. The electric field oscillates in a pair of loops; it attempts to form a closed loop. Along the ring, we find an additional oscillation of the field intensity. Dual oscillations, the variation of intensity, and the rotation of the highest-density center are characteristic of both the centriole and the Sciara coprophila. However, for Midge fly I, II, and III, described in Fig. 4d–f, we see that the global rotation that was characteristic of the centriole and Sciara coprophila is absent; we see the periodic blinking of the intensity of both the electric field and the magnetic field, and while the magnetic field oscillates in an open path, the electric field follows a closed loop. The electric field loop closes and opens as the phase changes from 0° to 360° for all three cases, and a radially outward oscillation of the magnetic field distribution is also noticeable. Therefore, the explicit global clocking by a clear rotation of the highest energy density point is absent; here in the Midge fly we see a much simpler clocking, and we expect that such a system cannot regulate complex dynamics around its environment. We have observed the deep resonance peaks for the centriole (153.1 THz), the axoneme of Sciara coprophila (77.5 THz), and the midge fly I, II, and III axonemes (80 THz, 83 THz, and 77.5 THz). We noted that the magnetic field is stronger around the point of convergence of the centriole at the initial phase. Then, the magnetic field monotonically decreases up to the 80° phase. It starts increasing again from the 100° to the 180° phase. Further, around the 200° to 260° phase, the magnetic field again shows a decreasing order. The electric field shows a similar nature, but in the opposite phase to the magnetic field. Similar phase quantization has been reported in the basic constituents, like the tubulin protein, the microtubule, and further larger complexes of microfilaments like axons, in our previous study. So, the clock is a fundamental topological feature that appears scale-free. The second interesting finding is the incrementing and decrementing order of the energy density depending on the phase angle, with the non-appearance of the magnetic field at certain phases. The magnetic field concentrates around the center of the centriole structure; the electric field is absent there over the whole phase cycle. The dominance of the magnetic field is a unique finding. The isolation of fields is continuously held within the structure of the Sciara coprophila too. In a complete phase cycle, the magnetic field is limited to various zones over the structure. The third important finding is that, for all structures, the magnetic and the electric energy spread from the point of the energy source over the whole structure during one half-phase cycle. This is a spin-half-like behavior as it repeats a comparable nature for the second half-phase cycle. Electromagnetic energy applied from the center of the centriole and sperm axoneme for studying the longitudinal mode: resonance band, directivity, and clocking


behavioral characteristics: In the second scenario, in Fig. 5, we repeat the same study described above but with a difference. All structures under the current study appear like long cylinders; previously we attached the port for pumping electromagnetic energy on the cylindrical surface. Now, we put the probe around the cross-sectional area, at one end. Thus, the energy is pumped along the length of the tube, horizontally. The applied energy source is kept at the center point of the centriole, the axoneme of Sciara coprophila, and Midge fly I, as shown in Fig. 5a. Here we have presented only one case of the Midge fly because none of the three axonemes respond at resonance. We did not find good resonance behavior in them, as shown in Fig. 5b. Still, we find a faint clocking behavior in the Sciara coprophila, but the Midge fly is silent. In Fig. 5b, the centriole exhibits a singular sharp phase transition at around 200 THz. We do not call it an explicit resonance, although every resonance is associated with a phase transition; here the resonance behavior shows sharp pulses of radiation from a centriole. To understand the subtle change in the behavior of the centriole under longitudinal pumping, we measure the field directivity. Directivity measures the degree to which the transmitted radiation is concentrated in a single direction. The directivity maps at various resonance frequencies are inspected here in Fig. 5c. We estimated that the directivities at the resonance frequencies 185.49 THz, 142.98 THz, and 162.88 THz are 4.34 dBi, 5.41 dBi, and 3.35 dBi, respectively, which suggests that the triangular topological response has not disappeared for the centriole (Fig. 5d), and the global resonance of Sciara coprophila is evident (Fig. 5e), while the Midge fly remains a faint interactor (Fig. 5f). Smith chart for the transverse mode, when we see the clocking effect: In support of Fig. 4, we present Fig. 6, where we demonstrate that the centriole could generate dual-phase operations simultaneously. The rest of the microtubule assemblies do not show such a complex control of the electromagnetic resonance behavior of the materials. The plots suggest the clocking of electric and magnetic fields in all assemblies.
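Directivity in dBi converts to a linear (dimensionless) factor relative to an isotropic radiator as D = 10^(dBi/10); a one-line check (Python) for the three values quoted above, which are taken directly from the simulation results in the text:

```python
# Convert the simulated directivities from dBi to linear factors over an isotropic radiator.
for name, dbi in [("centriole", 4.34),
                  ("Sciara coprophila axoneme", 5.41),
                  ("Midge fly axoneme", 3.35)]:
    linear = 10 ** (dbi / 10)
    print(f"{name}: {dbi} dBi -> {linear:.2f}x isotropic")
```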

2.2 Experimental Methods for the Centriole's Global Positioning System

What is the dielectric image? How does it help in matching theory and experiment? Dielectric resonance imaging [27–29] provides a 2D surface profile of the dielectric constant, but by changing its mode, we can simply set it to collect the reflection and transmission coefficients S11, S22, S12, S21, or simply the strongest resonance frequency distribution on a surface or material [17]. Now, the advantage of setting the alternate mode is that, from theory, we can generate how a material would look if we consider the S profiles. Therefore, a one-to-one comparison would be simple and direct, without any analysis. 3D cell culture of HeLa and neuron cells: The gap between the in vivo study of animal models and a classical 2D cell culture could be filled by the synthesis of advanced


3D cell systems. We use a 3D cellular matrix to confirm the proposed 3D global positioning system of a centriole pair. Using cultured HeLa cells and embryogenic hippocampal neuron cells in a 3D matrix, we have experimentally verified that the centrioles of all participating cells reorient collectively in the way shown in Fig. 3. In a 1 mm × 1 mm × 1 mm culture dish, we pump the electromagnetic signals at the resonance frequencies and observe the cellular growth of HeLa cells and the assembly formation of neurons in a separate 3D matrix. Extracellular matrix (ECM) gel from Engelbreth-Holm-Swarm murine sarcoma is used to build a 3D cellular matrix. The protein concentration in the ECM gel is 8–12 mg/mL, with major and minor components such as collagen type IV, entactin, and heparan sulfate proteoglycan. The polymerization could be increased by adding collagen type IV to the ECM gel. The ECM gel is kept overnight at 2–8 °C and then dispensed into a multiwell plate with the help of a pre-cooled (2–8 °C) pipette and plate. The gel was diluted up to twofold with 2–8 °C Dulbecco's Modified Eagle's Medium, along with the embryogenic neuron cells (Lonza Inc.) or the HeLa cells, before the gel mixture was added to the culture plate. The mixture converts into a gel within 5 min at 20 °C, but the culture takes 6–7 days. Hence it was a controlled day-by-day addition of gel to create a solution, and acidified culture media were removed through a microporous film by capillary action. For prolonged manipulations, the work was conducted below 10 °C. Since we culture the cells inside a matrix, the cells and gel are mixed at a density of 3–4 × 10⁴ cells/mL before plating. For dielectric imaging, HOPG substrates were used for plating the HeLa and neuron cells. Dielectric resonance imaging of HeLa and neuron cells: We observe the 3D matrix of HeLa and embryogenic neuron cells using the well-known scanning dielectric microscopy, SDM [27–29], which captures a frozen 3D structure carrying the signature of how centrioles decide where to place the daughter post cell division, or of neighboring neurons. SDM differs from fluorescence imaging [44]: one does not use a fluorescent molecule; the material resonates and emits an electromagnetic signal visible to the scanner. Thus, any subtle change in the orientation of a centriole, by as little as 0.01° in a 3D matrix, is detected by a change of the resonance frequency in the dielectric image. Since a change in the angle or the position of a centriole is estimated from a change in frequency, not intensity, the position is measured with a negligible detection error. Any change in the centriole orientation is revealed in the dielectric image of the 3D matrix made of HeLa cells or neurons. We monitor the coordinates of the daughter centriole of the HeLa cell or neuron cell by monitoring the dielectric resonance image. A time series of images reveals a one-to-one correspondence between the resonance frequency and the directivity of a system of centrioles. In 15 sets of 3D matrices (18 cases for HeLa and 23 geometric branches of the neuron cells were studied in total), we monitored the centrioles of three pairs of cells, or six cells, and observed the origin of the clockwise and anticlockwise twist of the cells, as shown in Figs. 7 and 8.


Fig. 7 a Normalized electric (red) and magnetic (green) field distribution on a cross-sectional area (diameter 200 nm) for the six consecutive relative orientations of a pair of centrioles shown in panel b (starting second from left). A white line is drawn along the diameter with a white ball marker that follows the darkest black region, i.e., the weakest field domain. A similar red ball movement in panel b shows the one-to-one correspondence. b Theoretical models of a pair of centrioles built in CST, which were simulated to acquire the simulated electric and magnetic field distributions of panel a. c The mechanism of changing the two angular parameters θ and φ is shown in a pair of spherical coordinates formed between two distinct cells. Each cell is represented by a pair of centrioles; their cross section is now a new r, represented by a new sphere. There is a blue cone indicator on which the centrioles are placed at the two angular parameters θ and φ; both the cone and the sphere have the same r. d A generalized spherical coordinate system

3 Results and Discussions

3.1 Resonance Characteristics of Five MTOCs

There are five columns in Fig. 2; a different microtubule assembly is represented by each column. In this figure, the raw data of the electromagnetic field distributions at the resonant frequencies are presented. The field distribution in each of the built structures has its own 3D raw data. The specific behavior of the EM field distribution within the structures proves the hypothesis that the geometry of biomaterials is the key to


Fig. 8 a Switching between left–right symmetry is shown for a cell using two schemes. Six pairs of centrioles from three cells P, Q, and R; the centers of the centriole pairs are connected using a line. The directions of the daughters follow an extrapolation of this line in Fig. 8d. b The THz periodic oscillation of the electromagnetic field for all six pairs of centrioles (orange) is run by much slower GHz periodic oscillations (green), which integrate into an MHz periodic oscillation (pink). The nested periodic oscillations are shown as a schematic. c The switching of the directivity of the system as we physically move HeLa cells in Fig. 8d; this is experimental verification of the schematic demonstrated in Fig. 8a. Cases 1–2–3 are clockwise and 4–5 are anticlockwise. d Dielectric resonance images of HeLa cells with scale bar 80 μm. e Microscopic images of embryogenic hippocampal neuron cells. For image 1, the corresponding dielectric resonance image is given below. The scale bar for the first three images of the column (1, 1, and 2) is 80 μm, and for the last image (3) the scale bar is 60 μm. Panels d and e have a common color code for dielectric imaging; the discrete coloring is noted between the two panels

the splitting of the electric and magnetic parts of the electromagnetic wave. The schematic plotted at the bottom of Fig. 2 shows the nature of the energy splitting by a specific pattern of microtubule assemblies. The magnetic field distribution forms a pair of teardrops oriented face to face, whereas the electric field follows a closed loop in the sperm axoneme of midge fly III. In the case of the Sciara coprophila axoneme, the electric field pattern looks like a closed loop while another incomplete loop tries to close. The magnetic field traces spiral pathways that look like an S. In the case of the midge fly I axoneme, we found that the electric field forms teardrop loops without generating the oscillating spirals. For the centriole, a closed circular loop is formed by the electric field, while the magnetic field forms a spiral in and out through the center and the outer boundary of the centriole, respectively. The left panels of Fig. 3a, b summarize the resonance bands for the five microtubule assemblies when the energy supply port [45–47] is on the top and at the sides, respectively. Maximum polar radiance appeared on the sperm axonemes of midge


fly II and III (Figs. 2, 7, and 8); that means those geometries select a path to pump the resonance energy in a particular direction, while minimum energy is radiated through the axonemes of midge fly I and Sciara coprophila. We studied the other four assemblies as controls, because it is evident now that only a pure geometry enables radiating the resonant energy in all three directions, as for the centriole. Moreover, it is the unique radial wheel of the SAS-6/Bld12p proteins that distinguishes centrioles from the other assemblies. The variation of the electromagnetic field distribution, i.e., the E and H parts at the resonance frequency, is plotted in Fig. 3a (side port) and Fig. 3b (bottom port), respectively. The electric and magnetic field distributions of a centriole are detected by keeping a 20° phase gap over a complete phase cycle (0° to 360°). If we place the waveguide port along the centriole in the longitudinal direction, then the magnetic field's periodic oscillation is absent (Fig. 3b). If the direction of the waveguide port is perpendicular, then the periodic oscillation returns (Figs. 3a, 2). Thus, the input energy direction is the prime factor in sensing the 3D neighbors. Smith charts [48] for all five structures show a closed loop, depicting periodic oscillation behavior (see Fig. 6), and the centriole has two loops. The dual-phase control loop ensures auto-correction. We have summarized the distinct flows of electric and magnetic fields in Fig. 3c. The percentage of area change shows that the magnetic field blinks radially from the center while the electric field moves along the perimeter of the centriole. The period of rotation is around microseconds. Thus, the periodic oscillation of the fields is much slower than the THz resonance. During the periodic oscillation, the linear part of the centriole made of the SAS-6 and Bld12p proteins absorbs energy and releases it; along the diameter, the magnetic field periodically oscillates as shown in Fig. 3d. We have created an active surface on the centriole, as shown in the plot of Fig. 3e, and estimated the combined potential for the three kinds of centriole periodic oscillations observed in Fig. 3c, d. It provides a spatial 3D field distribution at the junction between the two centrioles. Three domains on a 3D prolate shape would get an equal angular distribution (120°) of radiation all over, an essential requirement for forming the spindle. In a centriole, the electric field oscillates periodically around its cylindrical surface. However, the magnetic field distributes as if it radiates from the center. At the same time, the axonemes build pure clockwise or anticlockwise rhythms of electric and magnetic fields at resonance. All these MTOCs serve their purpose of generating unique microtubule bundles for their specific functional purposes. Hence, the profiles in Fig. 2 act as a seed grammar from which very distinct global positioning systems, very different from each other, could unfold. Here, we look into centrioles only.


3.2 Study of a Pair of Centriole Assemblies: The Spherical Coordinate System

Our model is inspired by the astral model of spindles [49]. The electric and magnetic field distributions on the common circular area between a pair of centrioles are plotted in Fig. 7a. Two centrioles facing each other at various angles, as shown in Fig. 7b, edit the 3D distribution of the electric and magnetic fields at their junction. In Fig. 7a, the circular area at the junction between two centrioles, acting as a 3D cavity, develops a dark region with null fields that moves linearly along the diameter of the circle. The linear movement of the silent domain follows the linear movement of the centrioles relative to each other, as outlined in Fig. 7b. Thus, we get the linear parameter r for setting a spherical coordinate system [50], (r, θ, φ). The other two angular parameters θ and φ are also read from the 3D plot. The peak of the magnetic field (green) surrounding the silent domain in Fig. 7a changes its intensity in proportion to the relative angular change between the pair of centrioles; hence, we get θ. A pair of peaks around a silent point undergoes a relative rotation if we rotate one centriole while keeping the other static. A 360° planar rotation of the pair of peaks represents φ. We have explained the three parameters using a schematic in Fig. 7c. To understand θ and φ, imagine that the white dot at the junction between the pair of centrioles shown in panel a is held at a fixed location while the angular position of one centriole relative to the other is changed. This is understood if one keeps the lower centriole of panel b fixed and rotates the upper centriole along the surface of the cone shown in panel c of Fig. 7. Then the angular width of the cone is φ, and the angular rotation of the cone along its central axis is θ. The three parameters, namely the motion of the silent domain, the intensity of the magnetic field, and the rotation of the pair of peaks, constitute the spherical coordinate system (r, θ, φ), respectively (Fig. 7d).
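A minimal sketch of how the three observables described above could be turned into (r, θ, φ) follows. The linear calibration factors are purely illustrative placeholders (the chapter does not give numerical calibrations); the sketch only shows the bookkeeping, with r from the silent-domain shift, θ from the peak intensity, and φ from the rotation of the peak pair.

```python
import numpy as np

def centriole_spherical_coordinates(silent_shift_nm, peak_intensity, peak_pair_rotation_deg,
                                    r_per_nm=1.0, theta_deg_per_intensity=90.0):
    """Map the three measured quantities of Fig. 7 to (r, theta, phi).

    silent_shift_nm        -- displacement of the null-field (silent) domain along the diameter
    peak_intensity         -- normalized magnetic-field peak intensity around the silent domain
    peak_pair_rotation_deg -- planar rotation of the pair of magnetic peaks
    The two calibration factors are hypothetical placeholders, not measured values.
    """
    r = r_per_nm * silent_shift_nm                                  # linear coordinate
    theta = np.deg2rad(theta_deg_per_intensity * peak_intensity)    # polar-like angle from peak intensity
    phi = np.deg2rad(peak_pair_rotation_deg) % (2 * np.pi)          # azimuth from the peak-pair rotation
    return r, theta, phi

print(centriole_spherical_coordinates(12.0, 0.4, 150.0))
```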

3.3 Triplet of Cells: How Left–Right Symmetry and Directivity of Coupled Cells Are Born

In Fig. 8a, we extend the spherical coordinate system to a triplet-of-cells scenario to detect the origin of left–right symmetry. Three cells P, Q, and R each have a pair of centrioles just before cell division, and a subtle change in their relative orientation changes the directivity of the cells [51], as plotted in Fig. 8c. The field distribution appears as if each centriole acts as a periodic signal generator in the architecture of periodic oscillations [52] demonstrated in Fig. 8b. In Fig. 3a and b, THz resonance is obtained. However, the most important part is that the spatial distribution of the THz field at resonance follows a completely different frequency in the MHz domain, as shown in Fig. 3c. The integration of periodic oscillations from picoseconds to microseconds is not that direct; there is a bridging periodic oscillation at GHz, and we have reported such protein periodic oscillations earlier [53]. The geometry of the periodic


oscillations shown in Fig. 8b undergoes a switching behavior during cellular growth, changing the directivity of the emitted energy as shown in Fig. 8c; we carried out 3D cell cultures of neuron and HeLa cells to confirm such switching.

3.4 A Review of the Spherical Coordinate System (Fig. 1c)

The cross section of Fig. 7a between a pair of centrioles is advanced to the cross section between a pair of cells in Fig. 7c. That particular cross section is plotted in Fig. 8a for three pairs of cells, namely (P1-P2), (Q1-Q2), and (R1-R2), i.e., six cells in total. The images depict how the three cells P, Q, and R would look just before cell division. The rotational direction is plotted from the experimental observation of 18 assemblies of HeLa cells, which shows that the lines connecting (P1-P2), (Q1-Q2), and (R1-R2) get momentum from the center of the circle; a vortex-like effect [21] was noted. Now we try to switch off the phenomenon or restrict the effect with a defined parameter. Figure 8c is such a proposition, and Fig. 8d, e provides experimental evidence supporting the theoretical finding. The orientation could be controlled by an external field.

3.5 3D Cell–Matrix Study: Orientation Angle Shift and Linear Shifts in the Spherical Coordinate System

We synthesized two types of 3D cellular matrices, following the methods described in the experimental section above: first with the neuron cells and second with the HeLa cells. In Fig. 8d, e we present the dielectric resonance images of the neuron cells and the HeLa cells as they grow [17, 27–29]. The red colors in the dielectric images (red = 2.5 THz) of Fig. 8d, e directly depict the orientation and the location of the centriole. The gradient of the red color = θ, the difference between the maximum and minimum intensity = φ, and the center position of the red color bar = r. We measure the shifts in angular orientation and linear position (Δθ, Δφ, Δr) [49] between the two panels, as described in Fig. 7, since these three parameters reveal the decision-making process of the centrioles. We measure that the centriole position and orientation change by 5°, 3°, and 1 nm when a HeLa cell divides (18 cases studied), and by 2°, 8°, and 1.3 nm when an embryogenic hippocampal neuron cell decides where to grow its axon (23 cases studied). A dielectric image using SDM is a 2D resonance frequency map of a living cell's components obtained without adding any chemical marker or making any physical contact. SDM identifies each component and its dynamics in the vibration map. The orientation and linear shifts of the centrioles (Fig. 8a) noted above remain constant over multiple generations of cell division and branching of neurons; thus, the determined periodic oscillation directions [52] for both types of cells (see the arrows in Fig. 8d, e) are conserved.
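A sketch of how the three image descriptors named above (color gradient, intensity span, patch center) could be read from a 2D dielectric resonance map is given below; the threshold and the mapping onto θ, φ, and r are illustrative assumptions, not the authors' actual analysis pipeline, and the shifts (Δθ, Δφ, Δr) would follow as differences between two such readings.

```python
import numpy as np

def read_centriole_descriptors(red_map):
    """Extract illustrative (theta-like, phi-like, r-like) descriptors from a 2D map
    of the red (about 2.5 THz) dielectric-resonance intensity."""
    gy, gx = np.gradient(red_map)
    theta_like = float(np.mean(np.hypot(gx, gy)))     # gradient of the red color
    phi_like = float(red_map.max() - red_map.min())   # max-min intensity difference
    ys, xs = np.nonzero(red_map > 0.5 * red_map.max())
    r_like = (float(xs.mean()), float(ys.mean()))     # center position of the red patch
    return theta_like, phi_like, r_like

# Two synthetic frames standing in for images taken before and after a division event
rng = np.random.default_rng(0)
before, after = rng.random((64, 64)), rng.random((64, 64))
t0, p0, _ = read_centriole_descriptors(before)
t1, p1, _ = read_centriole_descriptors(after)
print("shift in theta-like descriptor:", t1 - t0, "shift in phi-like descriptor:", p1 - p0)
```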


A fluxgate sensor measured the magnetic field (10⁻⁹ T) connecting the centrioles across the cell boundaries, as predicted by theory. The fluxgate sensor was built using a magnetic-particle-doped atomic resolution sensor in the dielectric scanner [17]. It confirms that the spherical coordinate system of a centriole is driven by magnetic-field-induced far-distant coupling. This is why, in Fig. 8d, e, we find that the centrioles, or red patches, of all participating cells are at their borders, as if they are directing where the daughter cell or an axon would grow. The electromagnetic field damps in a fluid, but an isolated magnetic field could wirelessly couple the distant elements without damping [54]. Consequently, cells separated by 300 μm could reorient each other's centriole. The electric vector of the incident electromagnetic signal lies in the plane of the centriole surface and interacts with the electrons available there; then we get the magnetic field $B_z = \frac{i}{k_c}\left(\frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y}\right) = \frac{i}{k_c}\,\hat{e}(z)\cdot\left(\nabla_t \times \mathbf{E}_t\right)$ [32]. The relative geometry of the centrioles minimizes the electric vector; the electromagnetic wave changes its electrical nature to a magnetic one. Isolating the magnetic part of an electromagnetic wave [21] in an integrated cellular system is the key to the global cell positioning system proposed here. The 3D cell–matrix is noisy; our ultimate confirmation requires live imaging of Fig. 8a, in which we should see three cells and six centers orienting relative to each other. We have long been hunting for the luck of catching three such cells with six centriole pairs at the right moment.
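The relation quoted above can be checked numerically. The sketch below samples a simple transverse electric field on a grid, evaluates the z-component of its curl with finite differences, and forms B_z = (i/k_c)(∂E_y/∂x − ∂E_x/∂y); the test field, grid size, and the value of k_c are illustrative assumptions.

```python
import numpy as np

n, L = 128, 1.0e-6                         # 128 x 128 samples over a 1 micron window
x = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(x, x, indexing="xy")
Ex, Ey = -Y, X                             # test field E_t = (-y, x); its curl_z is 2 everywhere

kc = 1.0e7                                 # assumed wavenumber constant (placeholder value)
dx = x[1] - x[0]

dEy_dx = np.gradient(Ey, dx, axis=1)       # central differences along x
dEx_dy = np.gradient(Ex, dx, axis=0)       # central differences along y
Bz = (1j / kc) * (dEy_dx - dEx_dy)         # B_z = (i/k_c)(dEy/dx - dEx/dy)

print(np.allclose(Bz.imag, 2.0 / kc))      # True: matches the analytic curl of the test field
```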

3.6 Protein's Electromagnetic Resonance Is Nearly a Century-Old Concept

Electromagnetic resonance of proteins and their complexes has been measured since the 1930s [25]. Microtubule resonance does survive inside the axon of a neuron [53, 55–57], and it regulates neuron firing [17]. Therefore, in the living system, the key component of a centriole and an axoneme (the microtubule) is itself an electromagnetic resonator. The 3D cell–matrix study in Fig. 8d, e suggests that the neuron or sperm cell core could be structured in the cell assembly using this electromagnetic resonance property. An MTOC is not just a microtubule organizer for the spindle (the PCM does that); the centriole pair or centriole-adjunct pair acts as a long-range electromagnetic field sensor. Possibly, that is what makes the centrosome an infrared sensor [43] and why the microtubule is called a nerve of the cell [17]. Our findings suggest that a molecular origin is key to global positioning by the centrioles, and more studies are needed in the future to learn more about it.


4 Conclusion

The isolated fields oscillating periodically at resonance by a centriole pair or centriole-adjunct pair interact with the neighbors' fields to collectively set the axis and plane of cell division of its daughter cell (Fig. 1c). The notion of a fixed perpendicular orientation of centrioles is now replaced with a variable orientation (r, θ, φ) that edits the neighbor's periodic oscillation. By shifting the relative orientations locally, a centriole pair could maximize or minimize the field elsewhere, far away. The clockwise and anticlockwise rotations of the fields we observed here hold one of the key mysteries of left–right asymmetry, essential for adding complexity to cellular growth. The selection of a particular periodic oscillation direction has a geometric origin. The fractal symmetry of the five centriole-adjunct pairs regulates the "time" not just within, but also in the assembly of its final structure. All the MTOCs modulate the phase, manage the interference-induced fields outside their physical boundary, and remain coupled. A shift in the local orientation of a centriole pair or centriole-adjunct pair governs the MTOC's global energy exchange. The centriole is primary, and the PCM is secondary. Finally, inside the centriole or its dummy, a rhythmic non-chemical oscillation pervades: ionic biorhythms do not stop in the millisecond domain but continue to operate deep inside via electromagnetic oscillations of the functional groups, thus covering wide ranges of space and time. Spatial isolation enables bio-systems to create possibly a chain of periodic oscillators, wherein they use ions for running the slower periodic oscillations and electrons for operating the faster periodic oscillations.

Acknowledgements The authors acknowledge the Asian Office of Aerospace R&D (AOARD), a part of the United States Air Force (USAF), for Grant no. FA2386-16-1-0003 (2016–2019) on the electromagnetic resonance-based communication and intelligence of biomaterials.

Conflict of Interest There is no conflict of interest among the authors.

References 1. Cabernard C, Prehoda KE, Doe CQ (2010) A spindle-independent cleavage furrow positioning pathway. Nature 467:91–94 2. Feldman JL, Geimer S, Marshall WF (2007) The mother centriole plays an instructive role in defining cell geometry. PLoS Biol 5(6):e149 3. Mennella V, Keszthelyi B, McDonald KL et al (2012) Subdiffraction-resolution fluorescence microscopy reveals a domain of the centrosome critical for pericentriolar material organization. Nat Cell Biol 14(11):1159–1168 4. Sonnen KF, Schermelleh L, Leonhardt H et al (2012) 3D-structured illumination microscopy provides novel insight into architecture of human centrosomes. Biol Open. 1(10):965–976 5. Fu J, Glover DM (2012) Structured illumination of the interface between centriole and pericentriolar material. Open Biol. 2(8):120104 6. Lawo S, Hasegan M, Gupta GD et al (2012) Subdiffraction imaging of centrosomes reveals higher-order organizational features of pericentriolar material. Nat Cell Biol 14(11):1148–1158


7. Erickson HP (2009) Size and shape of protein molecules at the nanometer level determined by sedimentation, gel filtration, and electron microscopy. Biol Proced Online 11:32–51 8. Tsou MF, Stearns T (2006) Mechanism limiting centrosome duplication to once per cell cycle. Nature 442(7105):947–951 9. Tang N, Marshall WF (2012) Centrosome positioning in vertebrate development. J Cell 125(21):4951–4961 10. Calarco-Gillam PD, Siebert MC, Hubble R, Mitchison T, Kirschner M (1983) Centrosome development in early mouse embryos as defined by an autoantibody against pericentriolar material. Cell 35:621–629 11. Laan L et al (2012) Dogterom cortical dynein controls microtubule dynamics to generate pulling forces that position microtubule asters. Cell 148:502–514 12. Bayly R, Axelrod JD (2011) Pointing in the right direction: new developments in the field of planar cell polarity. Nat Rev Genet 12:385–391 13. Piel M, Meyer P, Khodjakov A, Rieder CL, Bornens M (2000) The respective contributions of the mother and daughter centrioles to centrosome activity and behavior in vertebrate cells. J Cell Biol 149:317–330 14. Tenenbaum ME, Medema RH (2010) Mechanism of centrosome separation and bipolar spindle assembly. Dev Cell 19(6):797–806 15. Dinarina et al (2009) Chromatin shape the mitotic spindle. Cell 138(3):502–513 16. Barnothy ME (1969) Biological effects of magnetic fields, vol 2. Plenum Press, New York 17. Agrawal L et al (2016) Inventing atomic resolution scanning dielectric microscopy to see a single protein complex operation live at resonance in a neuron without touching or adulterating the cell. J Integr Neurosci 15(04):435–462 18. Welch DR et al (2012) Simulation of magnetic field generation in unmagnetized plasmas via beat-wave current drive. Phys Rev Lett 109:225002 19. Wang R et al (2015) Magnetic dipolar interaction between correlated triplets created by singlet fission in tetracene crystals. Nat Commun 6(8602):1–6 20. Henke W et al (1981) Effect of collision and magnetic field on quantum beat in biacetyl. Chem Phys Lett 77(3):448–451 21. Nye JF (2017) The life-cycle of Riemann-Silberstein electromagnetic vortices. J Opt 19(11):115002 22. Kim et al (2010) Biofunctionalized magnetic-vortex microdiscs for targeted cancer-cell destruction. Nat Mater 9(2):165–171 23. Im et al (1989) Symmetry breaking in the formation of magnetic vortex states in permalloy nanodisk. Nat Commun 3:983 24. Keyrouj S, Caratelli D (2016) Dielectric resonator antennas: basic concepts, design guidelines, and recent developments at millimeter-wave frequencies. Int J Anten Propagat. https://doi.org/ 10.1155/2016/6075680 25. Richtmeyer RD (1939) Dielectric resonator. J Appl Phys 10(6):391–398 26. Blank M (1995) Electric and magnetic field signal transduction in the membrane Na,K-ATPase. Adv Chem 250:339–348 27. Asami K (1995) Dielectric imaging of biological cells. Colloid Polym Sci 273:1095–1097 28. Lahrech et al (1996) Infrared-reflection mode near field microscopy using an aperture less probe with a resolution of λ/600. Optics Lett 2:1315–1317 29. Cho Y, Kirihara A, Saeki T (1996) Scanning non-linear dielectric microscopy. Rev Sci Instrum 67:2297 30. Choi JS, Howell JC (2015) Paraxial full-field cloaking. Opt Express 23(12):15857–15862 31. Yu N et al (2011) Light propagation with phase discontinuities: generalized law of reflection and refraction. Science 334(6054):333–337 32. Kildal PS (2017) Foundations of antenna engineering: a unified approach for line-of-sight and multipath. ISBN: 978-91-637-8515-3 33. 
Kitagawa D et al (2011) Structural basis of the 9-fold symmetry of centrioles. Cell 144(3):364–375


34. Witte H, Bradke F (2008) The role of the cytoskeleton during neuronal polarization. Curr Opin Neurobiol 18(5):479–487 35. Balanis CA (2005) Antenna theory: analysis and design. Wiley-Interscience, New York 36. Dallai R (2014) Overview on spermatogenesis and sperm structure of hexapoda. Arthropod Struct Dev 43(4):257–290 37. Lanzavecchia S, Dallai R, Bellon PL, Afzeliuss BA (1991) The sperm tail of a gail midge and its microtubular arrangement studies by two strategies of image analysis (cecidomyiidae, dipteral, insecta). J Struct Bio 107:65–75 38. Gomes LF et al (2012) Morphology of the male reproductive system and spermatozoa in centris Fabricius, 1804 (Hymenoptera: Apidae, Centridini). Micron 43:695–704 39. Zhang BB, Hua BZ (2017) Spermatogenesis and sperm structure of Neopanorpa lui and Neopanorpa lipingensis (Mecoptera: Panorpidae) with phylogenetic considerations. Arthropod Syst Phylogeny 75(3):373–386 40. Ishikawa H, Marshall WF (2011) Ciliogenesis: building the cell’s antenna. Nat Rev Mol Cell Biol 12:222–234 41. Zheng et al (2016) Molecular basis for CPAP- tubulin interaction in controlling centriolling centriolar and ciliary length. Nat Commun 16(11874):1–13 42. Albrecht-Buehler G (1994) Cellular infrared detector appears to be contained in the centrosome. Cell Motil Cytoskeleton 27(3):262–271 43. Albrecht-Buehler G (1998) Altered drug resistance of microtubules in cells exposed to infrared light pulses: are microtubules the “nerves” of cells? Cell Motil Cytoskeleton 40(2):183–192 44. Lana L (2012) STED microscopy with optimized labeling density reveals 9-fold arrangement of a centriole protein. Biophysical J 102(12):2926–2935 45. Singh P, Ray K, Fujita D, Bandyopadhyay A (2018) Complete dielectric resonator model of human brain from MRI data: a journey from connectome neural branching to single protein. Lect Notes Electr Eng 478:717–733 46. Singh P et al (2018) Fractal and periodical biological antennas: hidden topologies in DNA, wasps and retina in the eye. Soft computing applications, Springer, Singapore, pp 113–130 47. Singh P et al (2020) A self-operating time crystal model of the human brain: can we replace entire brain hardware with a 3D fractal architecture of clocks alone? Information 11(5):238 48. Muller A et al (2012) The 3D smith chart and its practical applications. Microw J 55(6):64–74 49. Hallen MA, Endow SA (2009) Anastral spindle assembly: a mathematical model. Biophys J 97(8):2191–2201 50. Newell AC (1998) Spherical coordinate systems for defining directions and polarization components in antenna measurements. AMTA. https://www.nsimi.com/images/Technical_Papers/ 1998/1998SPHERICALCOORDINATESYS.pdf 51. Veysi M, Jafargholi A (2012) Directivity and bandwidth enhancement of proximity-coupled microstrip antenna using metamaterial cover. Appl Comput Electromagn Soc J 27(11):925–930 52. Arshavsky Y, Berkinblit MB, Kovalev SA, Chailakhyan M (1964) Periodic transformation of rhythm in a nerve fiber with gradually changing properties. Biofizika 9:365–371 53. Ghosh S et al (2016) Inventing a co-axial atomic resolution patch clamp to study a single resonating protein complex and ultra-low power communication deep inside a living neuron cell. J Int Neuro 15(4):403–433 54. Sun Z, Akyildiz IF (2009) Underground wireless communication using magnetic induction. In: Proceedings of the IEEE ICC 55. Sahu S et al (2013) Atomic water channel controlling remarkable properties of a single brain microtubule: correlating single protein to its supramolecular assembly. 
Biosens Bioelectron 47:141–148 56. Sahu S et al (2013) Multi-level memory switching properties of a single brain microtubule. Appl Phys Lett 102(12):123701 (1–4) 57. Sahu S, Ghosh S, Fujita D, Bandyopadhyay A (2014) Live visualizations of single isolated tubulin protein self-assembly via tunneling current: effect of electromagnetic pumping during spontaneous growth of microtubule. Sci Rep 4:7303 (1- 9)

Computational Study of the Contribution of Nucleoside Conformations to 3D Structure of DNA

J. A. Piceno, A. Deriabina, E. González, and V. Poltev

Abstract To evaluate the contribution of the molecular structure and the conformational preferences of nucleosides to the formation of various conformational classes of local DNA 3D structure, a search for the energy minima of the four separate nucleosides has been performed. Different methods of molecular mechanics (three AMBER force fields) and quantum mechanics (ab initio at the MP2 level and DFT) were used. The calculations showed that the main results and conclusions do not depend on the method used. Minimum energy structures for each nucleoside have been found both in syn and anti base-deoxyribose orientations, and with C3'-endo and C2'-endo sugar puckering. For pyrimidine nucleosides, the minima with syn orientation correspond to an intramolecular H-bond. For purine nucleosides, minima with syn orientation both with and without such H-bonds have been found. The structures with intranucleoside H-bonds are possible only at the 5'-end of a polynucleotide chain. The most favorable structures without H-bonds correspond to nucleoside conformations in the classical B-form of the DNA duplex, as well as in some other conformational classes of DNA minimum fragments. The role of the molecular structure and conformational possibilities of the canonical nucleosides in relation to the unique properties of the DNA macromolecule is discussed.

Keywords DNA · Nucleosides · Conformations · Molecular mechanics · Quantum mechanics · Purine · Pyrimidine · Energy minimization

1 Introduction

The DNA duplex is a complex of two antiparallel complementary polynucleotide chains. The polynucleotide DNA chain consists of aromatic bases (two purines and two pyrimidines), deoxyribose sugars, and phosphate groups. Changes in the torsion


angles of the sugar-phosphate backbone lead to different conformational classes of DNA [1]. To understand the role of the numerous conformational variations in DNA functions, it is important to evaluate the contribution of its individual components to the formation of the 3D structure [2]. In previous years, it has been found using computational methods that the minimum fragment of the double helix (the deoxydinucleoside monophosphate, dDMP) can exist in several conformations, including those different from Watson–Crick's canonical B and A ones [3]. In this study, we search for energy minima of separate deoxynucleosides to evaluate the contribution of these subunits to the conformational variability of the local 3D structure of the DNA duplex. We use two groups of computational methods, namely Molecular Mechanics (MM) and Quantum Mechanics (QM) [4]. The main conclusions suggested by the results obtained by all the methods used are the same. We did not try to find all the possible energy minima; instead, we focused on the conformations related to experimental results on nucleosides, nucleotides, and DNA fragments. For each nucleoside, we found three minima which correspond to the DNA conformational classes BB00, AA00, and BB02 [5].

2 Method

Two models of molecular structure can be used to study the molecular systems that are important for molecular biophysics. One of them is the molecular-mechanics model, with a low cost of computational resources, which allows us to model geometric structures and interaction energies for various molecular systems, including oligonucleotide complexes. The other is the quantum-mechanics model, which is based on solving the Schrödinger equation using approximations and powerful computational tools. The latter can be used only for rather small molecules due to its high resource consumption.

The molecular geometry can be assigned in terms of bond lengths, valence angles, and torsion angles. Six principal torsion angles are defined to describe the sugar-phosphate backbone (SPB) configuration (δ, ε, ζ, α, β, γ), the δ angle being related to the deoxyribose ring puckering [6]. Another important angle of the SPB and of a separate nucleoside is the glycosidic angle χ, around the bond between C1' of the sugar and the N atom of the attached base. Three nucleoside angles (χ, δ, γ) are important for the classification of DNA conformational classes (Fig. 1). The initial structures used in this work were obtained from atom coordinates of nucleosides, nucleotides, and DNA fragments in crystals. These coordinates, obtained by X-ray diffraction experimental techniques, are collected in databases such as the Protein Data Bank (PDB) [7], the Nucleic Acid Database (NDB) [8], and the Cambridge Structural Database (CSD) [9].


Fig. 1 Designations of torsion angles for dDMP (dCpdA as an example, left) and for deoxynucleoside (deoxythymidine as an example, right)
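The angles in Fig. 1 are dihedral (torsion) angles defined by four consecutive atoms. A minimal sketch of the standard computation from Cartesian coordinates is given below; the sample coordinates are placeholders and are not taken from any of the crystal structures discussed here.

```python
import numpy as np

def torsion_angle_deg(p1, p2, p3, p4):
    """Dihedral angle defined by four atomic positions, reported in the 0-360 degree range."""
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)        # normals of the two planes
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    angle = np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))
    return angle % 360.0

# Placeholder coordinates for four atoms, e.g., O4'-C1'-N1-C2 defining chi in a pyrimidine nucleoside
p = [np.array(a, dtype=float) for a in ([0.0, 1.4, 0.0], [0.0, 0.0, 0.0],
                                        [1.4, 0.0, 0.0], [2.0, 1.2, 0.3])]
print(round(torsion_angle_deg(*p), 1))
```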

2.1 Molecular Mechanics

In the Molecular Mechanics (MM) method we use classical mechanics to emulate molecular systems, regardless of their size and complexity. We can consider this method as an extension of the simple atom-and-bond representation of molecules and their complexes in classical chemistry [10]. We consider atoms as soft spheres and take into consideration the Born–Oppenheimer approximation [11]. Under this approximation, the nuclei are thought to move in the fixed field generated by the average electron densities, which change rapidly with respect to the motion of the nuclei. Thus, the nuclear motions are determined in terms of a potential energy surface (PES), which depends only on the positions of the atomic nuclei and is expressed by an effective potential in which the electronic effects are included [12]. In MM, the system's potential energy is calculated as a function of the nuclear coordinates using force fields (Eq. 1); this function includes bonding and non-bonding components.

$$E(r) = \sum E_l + \sum E_\theta + \sum E_\phi + \sum E_e + \sum E_{vdW} \qquad (1)$$

Here, E_l represents the changes in energy due to the chemical bonds, E_θ those due to the valence angles, and E_ϕ those due to the torsion angles; E_e stands for the electrostatic interaction energy component, and E_vdW for the van der Waals interactions. The energy terms depend on the mutual positions of the atoms and on adjustable parameters. A variation in the atom positions leads to a change in the potential energy. In that way, when the structure corresponds to a minimum on the system's potential energy surface, we consider the structure optimized.
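A toy numerical sketch of Eq. 1 for a hypothetical three-atom fragment follows. The harmonic, Coulomb, and Lennard-Jones expressions and every parameter value are made up for illustration only (they are not AMBER parameters), and the torsion term E_ϕ is omitted because a three-atom fragment has no torsion.

```python
import numpy as np

coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.2, 1.3, 0.0]])  # Angstrom, toy fragment
charges = np.array([-0.4, 0.3, 0.1])                                    # elementary charges, made up

def e_bond(r, r0=1.5, k=300.0):          # E_l: harmonic bond stretching
    return k * (r - r0) ** 2

def e_angle(theta, theta0=1.9, k=60.0):  # E_theta: harmonic valence-angle bending (radians)
    return k * (theta - theta0) ** 2

def e_coulomb(qi, qj, r):                # E_e: point-charge electrostatics, kcal/mol for r in Angstrom
    return 332.06 * qi * qj / r

def e_vdw(r, eps=0.1, sigma=3.4):        # E_vdW: 12-6 Lennard-Jones
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r12 = np.linalg.norm(coords[1] - coords[0])
r23 = np.linalg.norm(coords[2] - coords[1])
r13 = np.linalg.norm(coords[2] - coords[0])
v1, v2 = coords[0] - coords[1], coords[2] - coords[1]
theta = np.arccos(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

energy = (e_bond(r12) + e_bond(r23) + e_angle(theta)
          + e_coulomb(charges[0], charges[2], r13) + e_vdw(r13))  # non-bonded terms for the 1-3 pair only
print(f"Toy E(r) = {energy:.2f} kcal/mol")
```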


The electric charges of the atoms in the nucleosides were determined by calculations with the appropriate restrained electrostatic potential (RESP) procedure and published by Cornell et al. in Ref. [13]. These charges were used together with the other parameters of the BSC1, OL15, and FF99 force fields to perform the geometry optimizations and to find the energy minima using the AMBER [14] software. The FF99 force field was developed starting from the second-generation force field [13], adjusting the partial charges to an electrostatic surface potential (ESP) with certain approximations to reduce the computational cost. A number of improvements in both the sugar puckering and the glycosidic torsion angle parameters resulted in the FF99 force field [15], which is then used as the base for other force fields. On this evolutionary path, new parameters were developed by two scientific groups: the first one in Spain, with the force fields identified by the acronym BSC (Barcelona Supercomputing Center), and the second group in the Czech Republic, which includes the acronym OL (from the city of Olomouc) in the names of its force fields [15]. The Barcelona group presented a reparameterization of two torsion angles, the force field bsc0. Then both groups published additional modifications of FF99-bsc0: the Barcelona group published the BSC1 force field in 2015 [16], which included improved sugar puckering and glycosidic angle parameters, while the other group also modified those and other parameters, developing the OL15 force field in 2015 [17]. The computer software and force fields referred to as "AMBER" (Assisted Model Building with Energy Refinement) are the most popular of those designed for a wide class of biomolecules, including proteins and nucleic acids [10].

2.2 Quantum Mechanics

The description of the interactions occurring in biomolecules at the electronic level is a task for Quantum Mechanics. The principal objective of QM is to solve the time-independent Schrödinger equation and determine the electronic structures of atoms and molecules. However, systems with more than two particles have no analytic solution; therefore, certain approximations are used. The first one is the Born–Oppenheimer approximation, in which the significant difference between the nuclear and electronic masses is taken into consideration. For solving the system's wave function yet another approximation is needed; in this one, each molecular orbital is considered as a linear combination of atomic orbitals (LCAO). Since no analytic form can be given for the atomic orbitals, numerical models are needed, such as the Hartree–Fock (HF) self-consistent field method [18]. The Møller–Plesset perturbation theory (MP) is an improvement of the HF method that adds electron correlation effects by means of Rayleigh–Schrödinger perturbation theory (RS-PT), usually of second (MP2), third (MP3), or fourth (MP4) order [19]. These orders of MP calculations are standard levels used for calculating small systems and are implemented in many computational chemistry codes. Higher levels of MP calculations are possible in some codes; however, they are rarely used because of their cost.
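For reference, the second-order (MP2) correction added to the HF energy can be written in terms of occupied spin orbitals i, j, virtual spin orbitals a, b, their orbital energies ε, and antisymmetrized two-electron integrals:

```latex
E^{(2)} = \sum_{i<j}^{\mathrm{occ}} \sum_{a<b}^{\mathrm{virt}}
          \frac{\left|\langle ij \,\|\, ab \rangle\right|^{2}}
               {\varepsilon_{i} + \varepsilon_{j} - \varepsilon_{a} - \varepsilon_{b}}
```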


The ab initio methods described above have computational limitations for calculations with large basis sets on molecules that consist of many atoms and electrons. For this problem, an alternative method is known, in which the problem is solved through the electron density; this method is known as Density Functional Theory (DFT). Over the past few years, a variety of more accurate functionals have been developed to account for the exchange-correlation energy between electrons; these functionals used in DFT have different names, and one of the most used is the Perdew–Burke–Ernzerhof functional, known as PBE [20]. We used the PBEPBE functional with the 6-31G* basis set for the DFT method and the 6-311G basis set for the MP2 method, using the Gaussian software [21].

3 Results

Various minimum energy conformations have been found for each nucleoside, corresponding to both syn and anti base-deoxyribose orientations and to both C3'-endo and C2'-endo sugar puckering. For pyrimidine nucleosides, the minima with syn orientation correspond to an intramolecular H-bond, while for purine nucleosides minima with syn orientation both with and without such H-bonds have been found. For all nucleosides considered, minimum energy structures have been revealed which correspond to different conformational classes of the DNA elementary fragment, as described recently [4].

3.1 Energy Minima of Deoxynucleosides Corresponding to Syn Sugar-Base Orientation

For purine nucleosides we have obtained energy minima with both syn and anti sugar-base orientations; in pyrimidine nucleosides, minima with syn orientation correspond only to structures with an H-bond between base and sugar. This H-bond appears between HO5' of the sugar and N3 of the purine or O2 of the pyrimidine; the distances between H5' and the N3/O2 atoms vary from 1.7 to 1.9 Å. The O5'-H5'…N3/O2 angle is close to 170°. In Fig. 2, we display two examples of such minimum energy conformations with C2'-endo sugar puckering, for a purine nucleoside (dG) and for a pyrimidine one (dC). The conformation obtained for deoxyadenosine is close to that of dG, and the similar conformation for deoxythymidine is close to that of dC. The interactions involved in the formation of such an H-bond are crucial for these minima. Nucleoside conformations without this intramolecular H-bond (with fixed β torsion) have substantially less favorable energies. We do not study the characteristics of these minima in more


Fig. 2 Minimum energy conformations of deoxyguanosine (left) and of deoxycytidine (right) with intramolecular H-bond

details, as their formation is impossible in a DNA chain (with the exception of the 5'-end position). In the tables below, we list the values of the relative energies (E, kcal/mol), the nucleoside torsion angles (δ, ε, β, γ, and χ), and the characteristics of the sugar ring geometry. We set the minimum energy of the BB00 conformation class for each nucleoside obtained by the same method as zero, so E values are presented for the other two conformational classes. For the columns with headers C2' and C3', we present the distance between those atoms and the plane formed by the three other atoms of the deoxyribose ring (in Angstroms, Å). Such distances are considered positive for displacement toward C5' and negative for the opposite direction. In the last columns, the sugar ring pseudo-rotational angle, P, and the region of ring puckering are presented. We found that minimum energy structures with C2'-endo sugar are similar to those listed in Table 1, but all such conformations cannot exist in duplex DNA fragments; they can exist only at the 5'-end of a polynucleotide chain. Besides, there are minima for dA and dG with the glycosidic angle in the syn region, as found in nucleoside crystals and in DNA fragments, that do not form an H-bond, but the minima with the intramolecular H-bond are more favorable. For dC and dT we found only H-bonded conformations with syn orientation; these structures exist neither in deoxynucleoside crystals nor in regular DNA regions.
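A minimal sketch of the C2'/C3' column definition just described, i.e., the signed distance of a ring atom from the plane through the other three ring atoms, counted as positive toward C5', follows; the coordinates in the example are placeholders.

```python
import numpy as np

def signed_plane_distance(atom, plane_atoms, c5_prime):
    """Signed distance of `atom` from the plane through three `plane_atoms`,
    positive when the displacement points toward the C5' atom."""
    a, b, c = (np.asarray(p, dtype=float) for p in plane_atoms)
    normal = np.cross(b - a, c - a)
    normal /= np.linalg.norm(normal)
    d = float(np.dot(np.asarray(atom, dtype=float) - a, normal))
    if np.dot(np.asarray(c5_prime, dtype=float) - a, normal) < 0:
        d = -d                      # flip so that "toward C5'" counts as positive
    return d

# Placeholder coordinates (Angstrom): a C2'-like atom, three in-plane ring atoms, and C5'
print(round(signed_plane_distance([0.3, 0.2, 0.55],
                                  [[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [1.9, 1.3, 0.0]],
                                  [0.5, 2.0, 1.0]), 2))
```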


Table 1 Energy minima of deoxynucleosides corresponding to syn sugar-base orientation and intramolecular H-bond, obtained with the AMBER force field BSC1

Nuc | E    | χ  | β  | γ  | δ   | ε   | C2'  | C3'   | Puckering
dC  | +0.5 | 59 | 58 | 49 | 147 | 195 | 0.48 | −0.11 | C2'-endo
dT  | +0.8 | 58 | 57 | 51 | 147 | 198 | 0.53 | −0.06 | C2'-endo
dG  | −6.8 | 57 | 48 | 55 | 146 | 189 | 0.56 | −0.02 | C2'-endo
dA  | −4.3 | 59 | 55 | 51 | 147 | 193 | 0.52 | −0.07 | C2'-endo

3.2 Energy Minima of Deoxynucleosides Corresponding to Three Conformational Classes of dDMPs

The values of the torsion angles, the C2' and C3' distances, and the differences between the energy value of the minimum and the energy of the BB00 conformation class for all four deoxynucleosides are shown in the following tables: BB00 in Table 2, AA00 in Table 3, and BB02 in Table 4. The line <NDB> contains the average values of the torsion angles of the corresponding conformational class of the dDMPs considered. The second column designates the computational method used; Exp stands for the corresponding values in experimental structures of nucleoside crystals.

Minimum energy conformations obtained by all the methods used for the BB00 class of dDMP have the glycosidic angle in the anti region (around 220°). These values correspond to those in nucleoside crystals but differ substantially from those of DNA fragments. The displacement of the angle χ in the DNA helix from the values in the nucleoside minima is caused by the interactions with other subunits of the duplex. The values of the γ angle correspond to the gauche+ region (around 50°), as in BB00 and some other classes of dDMPs. The values of the β and ε torsion angles correspond to the trans region, as in dDMPs of these classes; minima with these angles in other regions have less favorable energies.

The χ, δ, γ torsion angles in the minimum energy nucleoside structures corresponding to the AA00 conformational class are closer to those in DNA fragments than those for the BB00 one. The energy values obtained by the MM methods for all nucleosides are less favorable as compared to the BB00 class. The deviations from this rule obtained for dC by the QM methods should be cleared up using more rigorous methods, but preliminary estimations using a more extended basis set in the MP2 calculations confirmed the results shown in Table 3.

The comments on the results for the BB02 class are nearly the same as for the BB00 one. The most important difference in torsion angles between the two classes, the change of the γ angle from the gauche+ to the gauche− region, is reproduced well by all the methods. Not all the results obtained by the various force fields are listed, but all the regularities in the variations of the torsion angles and energies are the same as for the BSC1 force field.


Table 2 Energy minima of deoxynucleosides corresponding to BB00 conformation class

Nuc | Method | E | χ   | β   | γ   | δ   | ε   | C2'  | C3'   | P   | Puckering
    | <NDB>  |   | 258 | 180 | 44  | 138 | 183 |      |       |     | C2'-endo
dC  | Exp    |   | 222 | 272 | 62  | 148 | 209 | 0.50 | −0.12 | 168 | C2'-endo
dC  | BSC1   | 0 | 215 | 180 | 56  | 143 | 157 | 0.62 | 0.04  | 159 | C2'-endo
dC  | FF99   | 0 | 217 | 179 | 53  | 138 | 171 | 0.64 | 0.14  | 152 | C2'-endo
dC  | OL15   | 0 | 217 | 178 | 55  | 138 | 170 | 0.65 | 0.15  | 152 | C2'-endo
dC  | PBE    | 0 | 217 | 178 | 54  | 143 | 168 | 0.50 | 0.06  | 163 | C2'-endo
dC  | MP2    | 0 | 212 | 177 | 53  | 146 | 168 | 0.56 | 0.02  | 161 | C2'-endo
dT  | BSC1   | 0 | 221 | 179 | 55  | 144 | 171 | 0.60 | 0.01  | 160 | C2'-endo
dT  | FF99   | 0 | 221 | 179 | 52  | 140 | 178 | 0.59 | 0.08  | 156 | C2'-endo
dT  | OL15   | 0 | 225 | 178 | 55  | 140 | 178 | 0.60 | 0.09  | 155 | C2'-endo
dT  | PBE    | 0 | 233 | 175 | 51  | 144 | 172 | 0.47 | −0.09 | 167 | C2'-endo
dT  | MP2    | 0 | 233 | 174 | 50  | 146 | 172 | 0.56 | −0.03 | 163 | C2'-endo
dA  | Exp    |   | 181 | 202 | 186 | 158 | 283 | 0.04 | −0.52 | 196 | C3'-exo
dA  | BSC1   | 0 | 220 | 174 | 55  | 145 | 170 | 0.56 | −0.03 | 163 | C2'-endo
dA  | FF99   | 0 | 221 | 174 | 51  | 140 | 178 | 0.58 | 0.07  | 156 | C2'-endo
dA  | OL15   | 0 | 229 | 173 | 54  | 142 | 178 | 0.51 | 0.00  | 161 | C2'-endo
dA  | PBE    | 0 | 235 | 174 | 50  | 146 | 172 | 0.37 | −0.20 | 174 | C2'-endo
dA  | MP2    | 0 | 232 | 173 | 51  | 148 | 172 | 0.46 | −0.13 | 169 | C2'-endo
dG  | BSC1   | 0 | 224 | 181 | 54  | 145 | 175 | 0.56 | −0.03 | 162 | C2'-endo
dG  | FF99   | 0 | 226 | 172 | 50  | 141 | 181 | 0.52 | 0.01  | 160 | C2'-endo
dG  | OL15   | 0 | 234 | 170 | 53  | 142 | 181 | 0.48 | −0.03 | 163 | C2'-endo
dG  | PBE    | 0 | 238 | 171 | 49  | 146 | 173 | 0.37 | −0.20 | 174 | C2'-endo
dG  | MP2    | 0 | 235 | 171 | 50  | 148 | 173 | 0.45 | −0.14 | 170 | C2'-endo

4 Discussion

The calculation results obtained by rather different computational methods demonstrate the existence of minimum energy structures corresponding to the best known and most populated BI and AI conformations of the DNA duplex (the BB00 and AA00 conformational classes). This conclusion once more demonstrates how surprisingly well the molecular and 3D structures of DNA fit its biological functions. The mutual positions of the subunits of the DNA double helix correspond to local minima of the interaction energy of its fragments, e.g., of the bases in complementary pairs, of the sugar-phosphate backbone, of the elementary fragment of a single chain, and, as demonstrated here by various methods, of all the separate nucleoside molecules. The most favorable local minimum of the nucleosides compatible with a polynucleotide structure corresponds to the classical B-form duplex. The exception found in the QM results for one of the nucleosides does not change this conclusion. If this exception is confirmed by


Table 3 Energy minima of deoxynucleosides corresponding to AA00 conformation class

Nuc | Method | E     | χ   | β   | γ   | δ  | ε   | C2'   | C3'  | P  | Puckering
    | <NDB>  |       | 200 | 173 | 55  | 82 | 206 |       |      |    | C3'-endo
dC  | Exp    |       | 201 | 294 | 57  | 82 | 202 | −0.09 | 0.49 | 13 | C3'-endo
dC  | BSC1   | +1.79 | 205 | 181 | 56  | 83 | 175 | −0.06 | 0.55 | 15 | C3'-endo
dC  | FF99   | +0.67 | 209 | 181 | 54  | 85 | 180 | −0.02 | 0.51 | 18 | C3'-endo
dC  | OL15   | +0.34 | 197 | 184 | 57  | 85 | 178 | −0.13 | 0.45 | 2  | C3'-endo
dC  | PBE    | −0.53 | 198 | 176 | 53  | 83 | 187 | −0.13 | 0.45 | 11 | C3'-endo
dC  | MP2    | −0.49 | 195 | 175 | 53  | 80 | 186 | −0.12 | 0.51 | 12 | C3'-endo
dT  | BSC1   | +1.49 | 211 | 179 | 56  | 86 | 182 | −0.04 | 0.55 | 17 | C3'-endo
dT  | FF99   | +1.06 | 211 | 180 | 54  | 86 | 185 | −0.01 | 0.51 | 18 | C3'-endo
dT  | PBE    | +0.54 | 201 | 174 | 55  | 83 | 191 | −0.11 | 0.46 | 12 | C3'-endo
dT  | MP2    | +0.87 | 199 | 172 | 52  | 80 | 189 | −0.10 | 0.52 | 13 | C3'-endo
dA  | Exp    |       | 195 | 141 | 176 | 82 | 61  | −0.09 | 0.50 | 13 | C3'-endo
dA  | BSC1   | +1.43 | 205 | 178 | 56  | 84 | 183 | −0.16 | 0.46 | 9  | C3'-endo
dA  | OL15   | +0.54 | 202 | 178 | 56  | 85 | 185 | −0.15 | 0.42 | 9  | C3'-endo
dA  | PBE    | +0.43 | 211 | 169 | 50  | 85 | 193 | −0.17 | 0.40 | 8  | C3'-endo
dA  | MP2    | +0.29 | 193 | 175 | 53  | 81 | 191 | −0.20 | 0.45 | 8  | C3'-endo
dG  | BSC1   | +1.52 | 205 | 177 | 55  | 85 | 189 | −0.17 | 0.55 | 9  | C3'-endo
dG  | FF99   | +0.59 | 211 | 176 | 53  | 87 | 189 | −0.15 | 0.40 | 9  | C3'-endo
dG  | OL15   | +0.76 | 213 | 173 | 55  | 87 | 189 | −0.11 | 0.43 | 11 | C3'-endo
dG  | PBE    | +0.75 | 213 | 166 | 49  | 85 | 196 | −0.15 | 0.41 | 9  | C3'-endo
dG  | MP2    | +0.62 | 199 | 169 | 51  | 81 | 194 | −0.15 | 0.48 | 10 | C3'-endo

more rigorous methods, it can be considered a reason for the "A-philic" properties of GC-rich sequences. Earlier we demonstrated that the local energy minima of the elementary fragment of the DNA sugar-phosphate backbone and of dDMPs correspond to the BI and AI conformations of DNA [4]. Our new findings presented here can be considered an addition to the characteristics of the amazing and inexhaustible DNA, the principal molecule of life.


Table 4 Energy minima of deoxynucleosides corresponding to BB02 conformation class

Nuc | Method | E     | χ   | β   | γ   | δ   | ε   | C2'  | C3'   | P   | Puckering
    | <NDB>  |       | 253 | 195 | 277 | 150 | 194 |      |       |     | C2'-endo
dC  | BSC1   | +2.11 | 202 | 187 | 300 | 150 | 168 | 0.69 | 0.04  | 159 | C2'-endo
dC  | FF99   | +3.73 | 203 | 188 | 299 | 143 | 178 | 0.78 | 0.22  | 149 | C2'-endo
dC  | OL15   | +1.71 | 190 | 186 | 300 | 144 | 176 | 0.80 | 0.24  | 149 | C2'-endo
dC  | PBE    | +1.72 | 192 | 180 | 291 | 150 | 171 | 0.48 | −0.13 | 169 | C2'-endo
dC  | MP2    | +2.13 | 190 | 183 | 293 | 154 | 170 | 0.43 | −0.20 | 173 | C2'-endo
dT  | BSC1   | +2.55 | 210 | 189 | 301 | 150 | 185 | 0.68 | 0.03  | 159 | C2'-endo
dT  | PBE    | +3.02 | 196 | 182 | 292 | 148 | 175 | 0.52 | −0.09 | 166 | C2'-endo
dT  | MP2    | +3.65 | 197 | 184 | 293 | 153 | 174 | 0.52 | −0.11 | 168 | C2'-endo
dA  | BSC1   | +1.36 | 200 | 187 | 301 | 151 | 183 | 0.66 | 0.01  | 161 | C2'-endo
dA  | PBE    | +2.62 | 207 | 182 | 292 | 148 | 177 | 0.56 | −0.05 | 164 | C2'-endo
dA  | MP2    | +2.68 | 181 | 184 | 294 | 156 | 175 | 0.34 | −0.30 | 179 | C2'-endo
dG  | BSC1   | +1.01 | 206 | 187 | 301 | 150 | 192 | 0.66 | 0.01  | 160 | C2'-endo
dG  | PBE    | +2.54 | 232 | 183 | 292 | 147 | 180 | 0.60 | 0.00  | 161 | C2'-endo
dG  | MP2    | +2.89 | 193 | 185 | 294 | 154 | 177 | 0.45 | 0.18  | 172 | C2'-endo

Acknowledgements The authors gratefully acknowledge the Laboratorio Nacional de Supercomputo del Sureste de Mexico (LNS), a member of the CONACYT national laboratories, for computer resources, technical advice, and support.

References 1. Sinden RR (1994) DNA structure and function, 1st edn. Academic Press 2. Poltev V, Anisimov VM, Dominguez V, Ruiz A, Deriabina A, Gonzalez E, Garcia D, Rivas F (2021) Understanding the origin of structural diversity of DNA double helix. Computation 9:98. https://doi.org/10.3390/computation9090098 3. Watson JD (1968) The double helix: a personal account of the discovery of the structure of DNA. Athenaeum, New York 4. Poltev V, Anisimov V, Domínguez Benítez V, Gonzalez E, Deriabina A, Garcia D, RivasSilva JF, Polteva N (2018) Biologically important conformational features of DNA as interpreted by quantum mechanics and molecular mechanics computations of its simple fragments. J Mol Model 24:46. https://doi.org/10.1007/s00894-018-3589-8 ˇ 5. Cerný J, Božíková P, Svoboda J, Schneider B (2020) A unified dinucleotide alphabet describing both RNA and DNA structures. Nucl Acids Res 48:6367–6381. https://doi.org/10.1093/nar/ gkaa383 6. Saenger W (1984) Principles of nucleic acid structure, 1st edn. Springer, New York 7. Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE (2000) The protein data bank. Nucl Acids Res 28:235–242. https://doi.org/10.1093/nar/28. 1.235


8. Coimbatore Narayanan B, Westbrook J, Ghosh S, Petrov AI, Sweeney B, Zirbel CL, Leontis NB, Berman HM (2014) The nucleic acid database: new features and capabilities. Nucl Acids Res 42:D114–D122. https://doi.org/10.1093/nar/gkt980 9. Groom CR, Bruno IJ, Lightfoot MP, Ward SC (2016) The Cambridge structural database. Acta Cryst Sect B 72:171–179. https://doi.org/10.1107/S2052520616003954 10. Poltev V (2017) Molecular mechanics: principles, history, and current status. In: Handbook of computational chemistry, pp 21–67.https://doi.org/10.1007/978-3-319-27282-5_9 11. Teller E, Sahlin H (1971) Physical chemistry an advanced treatise. Academic Press, New York 12. McQuarrie DA (1976) Statistical mechanics, 1st edn. Harper & Row, New York 13. Cornell WD, Cieplak P, Bayly CI, Gould IR, Merz KM, Ferguson DM, Spellmeyer DC, Fox T, Caldwell JW, Kollman PA (1995) A second generation force field for the simulation of proteins, nucleic acids, and organic molecules. J Am Chem Soc 117:5179–5197. https://doi. org/10.1021/ja00124a002 14. Case DA, Aktulga HM, Belfon K, Ben-Shalom IY, Brozell SR, Cerutti DS, Cheatham TEIII, Cisneros GA, Cruzeiro VWD, Darden TA, Duke RE, Giambasu G, Gilson MK, Gholke H, Goetz AW, Harris R, Izadi S, Izmailov SA, Jin C, Liu J, Luchko T, Luo R, Machado M, Man V, Manathuga M, Merz KM, Miao Y, Mikhailovskii O, Monard G, Nguyen H, O’Hearn KA, Onufriev A, Pan F, Pantano S, Qi R, Ranhamoun A, Roe DR, Roitberg A, Sagui C, SchottVerdugo S, Shen J, Simmerling CL, Skrynnikov NR, Smith J, Swails J, Walker RC, Wang J, Wei H, Wolf RM, Wu X, Xue Y, York DM, Zhao S, Kollman PA (2021) Amber 2021. University of California, San Francisco 15. Wang J, Cieplak P, Kollman PA (2000) How well does a restrained electrostatic potential (RESP) model perform in calculating conformational energies of organic and biological molecules? J Comput Chem 21:1049–1074. https://doi.org/10.1002/1096-987X(200009)21:12%3c1049:: AID-JCC3%3e3.0.CO;2-F 16. Ivani I, Dans PD, Noy A, Pérez A, Faustino I, Hospital A, Walther J, Andrio P, Goñi R, Balaceanu A, Portella G, Battistini F, Gelpí JL, González C, Vendruscolo M, Laughton CA, Harris SA, Case DA, Orozco M (2016) Parmbsc1: a refined force field for DNA simulations. Nat Methods 13:55–58. https://doi.org/10.1038/nmeth.3658 17. Zgarbová M, Šponer J, Otyepka M, Cheatham TEI, Galindo-Murillo R, Jureˇcka P (2015) Refinement of the sugar-phosphate backbone torsion beta for AMBER force fields improves the description of Z- and B-DNA. J Chem Theory Comput 11:5723–5736. https://doi.org/10. 1021/acs.jctc.5b00716 18. Atkins PW, Friedman RS (2011) Molecular quantum mechanics, 5th edn. Oxford University Press, New York 19. Chr M, Plesset MS (1934) Note on an approximation treatment for many-electron systems. Phys Rev 46:618–622. https://doi.org/10.1103/PhysRev.46.618 20. Perdew JP, Burke K, Ernzerhof M (1996) Generalized gradient approximation made simple. Phys Rev Lett 77:3865–3868. https://doi.org/10.1103/PhysRevLett.77.3865 21. Frisch MJ, Trucks GW, Schlegel HB, Scuseria GE, Robb MA, Cheeseman JR, Scalmani G, Barone V, Mennucci B, Petersson GA et al (2009) Gaussian-09 revision D.01. Gaussian Inc, Wallingford, CT, USA

Computational Study of Absorption and Emission of Luteolin Molecule

E. Delgado, A. Deriabina, G. D. Vazquez, T. Prutskij, E. Gonzalez, and V. Poltev

Abstract Present in many fruits and vegetables, Luteolin is one of the most common flavonoids in nature. It has multiple bioactivities, such as antioxidant, anti-inflammatory, antimicrobial, and anticarcinogenic activity, whose bioavailability is limited due to its low solubility in water. Computational methods, specifically TDDFT methods, can be used to evaluate the typical absorption and emission wavelengths that Luteolin displays in vacuum and in solution, allowing identification of the features corresponding to the emission of separate molecules and distinguishing them from those of clusters of molecules. The calculations using the M06-2X/6-31++G** methodology predict that in the solid state only one emission peak could be observed, near 620 nm. On the other hand, the calculations show that in methanol solution two emission peaks, at 380 and 490 nm, can be detected; this is observed because both the enol and keto configurations of the Luteolin molecule are stable in solvents in their excited state.

Keywords Luteolin · Flavonoids · TDDFT · Fluorescence

1 Introduction

Luteolin (L) is a part of the group of flavonoids found in fruits, vegetables, and certain beverages. The flavonoid molecules are formed by a benzene ring (A) and a heterocyclic pyran ring (C), joined to a phenyl ring (B) (Fig. 1a) [1]. There are different kinds of flavonoids, and L belongs to a group called flavones, which have a double bond between the C2 and C3 atoms and a carbonyl group at the fourth position (Fig. 1b) [2].


Fig. 1 a Structure of flavonoids, b structure of flavones [2]


L is formed by adding four hydroxyl groups to the flavone structure, at positions 5, 7, 3', and 4'; due to this, L is also known as 3',4',5,7-tetrahydroxyflavone. This important flavone has been found in many edible plants such as peppers (Capsicum annuum) [3, 4], pomegranate (Punica granatum) [5], carrots (Daucus carota) [6, 7], olive oil (Olea europaea) [8], and many others. It is used in traditional medicine due to its benefits as an antioxidant, anti-inflammatory, and anticarcinogenic agent [9]. In previous investigations, J. Cox et al. [10] reported the crystallographic structure of Luteolin hemihydrate, where the L molecule presents a small C3-C2-C1'-C6' torsion angle of approximately 2°. Additionally, an intramolecular hydrogen bond (O5-H5…O4) is observed in the L molecule. As can be seen in Fig. 2a, in crystal structures L is found in the enol O5 form. It is possible to construct the keto O5 form from the enol O5 form of the L molecule by moving H5 to the O4 position (Fig. 2b). For various flavonoids, the enol forms have been reported to be stable in the ground state, while some keto forms can be stabilized only in the excited state. The proton transition from the enol to the keto configuration in the excited state is known as Excited-State Intramolecular Proton Transfer (ESIPT) [11–13]. Previously, emission and absorption spectra of Apigenin, a natural flavone, and of L molecules were obtained by Deriabina et al. [11]; they studied these molecules by means of DFT calculations, using TDDFT/B3LYP/6-31+g* [14]. In this work, the functional M06-2X, which has shown better results for the flavonoids Morin and Quercetin [11, 12], is used.

2 Computational Methodology The Density Functional Theory (DFT) [15] method was used for the optimization of the geometric structure of the L molecule and for the evaluation of its vibrational frequencies. For these calculations, the Minnesota functional M06-2X [16] with the 6-31++G** basis set [17, 18] was used, as implemented in the Gaussian 16 program [19]. The influence of methanol on the absorption and FL emission wavelengths was simulated by using the polarizable continuum model (PCM) within the self-consistent reaction field (SCRF) method [20].
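As a concrete illustration of this setup, the snippet below writes a minimal Gaussian 16 input for a TD-DFT excited-state optimization at the M06-2X/6-31++G** level with PCM methanol. The file names, the number of excited states requested, and the placeholder coordinates are illustrative assumptions, not values taken from the paper; the authors' actual route options may differ.

```python
# Sketch: generate a Gaussian 16 input for a TD-DFT excited-state optimization
# of Luteolin at the M06-2X/6-31++G** level with PCM methanol (SCRF).
# NOTE: checkpoint name, NStates value, and coordinates are placeholders.

route = "#P M062X/6-31++G** TD=(NStates=6,Root=1) Opt SCRF=(PCM,Solvent=Methanol)"

coordinates = """0 1
C   0.000000   0.000000   0.000000
...remaining Luteolin atoms (x, y, z in Angstrom)...
"""

with open("luteolin_td_methanol.gjf", "w") as gjf:
    gjf.write("%chk=luteolin_td_methanol.chk\n")
    gjf.write(route + "\n\n")
    gjf.write("Luteolin enol O5, S1 optimization in methanol (illustrative)\n\n")
    gjf.write(coordinates)
    gjf.write("\n")
```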


Fig. 2 a Molecular structure of enol O5 form of Luteolin reported in crystals [10], b molecular structure of keto O5 form of Luteolin present in the first excited state


3 Results 3.1 Enol and Keto Configurations of the Luteolin Molecule Two possible tautomeric conformations of the L molecule, denominated enol O5 and keto O5, are presented in Fig. 2; the keto O5 form is obtained when the proton H5 from the –OH5 hydroxyl group is transferred toward the O4 atom. Geometry optimizations showed that, both in vacuum and in solvents, only the enol configuration has an energy minimum in the ground state. In the first excited state, only the keto configuration is stable in vacuum, while in solvents both the keto O5 and enol O5 configurations exist. In Fig. 3, we present the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) for the enol O5 configuration in the ground state in vacuum. In Fig. 4, the HOMO and LUMO orbitals for the keto O5 configuration in the excited state are shown. One can see from Fig. 3 that for the A-ring and B-ring of the L molecule the HOMO has π character, while the LUMO has π* character. The electron density of the C2-C1´


Fig. 3 Electron population of the a HOMO and b LUMO orbitals of the enol configuration of L molecule in the ground state in vacuum

bond, on the contrary, changes its π* character to π; thus, a more planar geometry of the L molecule is expected in the first excited state. The electron density on the carbonyl oxygen atom O4 increases in the LUMO, while that on the hydroxyl O5 decreases. This behavior indicates that the H5 proton tends to be transferred from O5 toward the O4 atom in the excited state. For the keto O5 configuration (Fig. 4), the electron density of the HOMO is concentrated in the A-ring area, showing π character, while in the LUMO the distribution of electron density is more uniform and typical of a π* orbital.

3.2 Absorption and Emission of the Enol O5 Configuration of the Luteolin Molecule In Table 1, the main characteristics of the enol O5 configuration of the L molecule in the ground state and in the first excited state are presented. In the ground state,


Fig. 4 Electron population of the a HOMO and b LUMO orbitals of the keto O5 configuration of L molecule in the first excited state in vacuum

the dipole moment of the enol configuration of the L molecule changes significantly (from 5.5 to 7.5 D) when the molecule is considered to be within the solvent. In addition, there is a small variation, close to 2°, of the β torsion angle of the B-ring. The characteristic wavelength of absorption increases in methanol by approximately 10 nm. It is important to notice that during the geometry optimization of the first excited state of the enol configuration of the L molecule in vacuum, the proton H5 is transferred within the molecule, moving from the O5 to the O4 position, which is consistent with the molecular orbital analysis made in Sect. 3.1. On the other hand, the energy minimum for the enol configuration in methanol does exist, and its characteristics are presented in the right column of Table 1. It can be observed that the O5-H5 distance slightly increases, while the O4…O5 distance decreases, compared with the distances displayed in the ground state. A noticeable change happens in the C3-C2-C1´-C6´ torsion angle: the enol configuration in the first excited state is practically planar. We can also observe an increase in the dipole moment from 7.5 D in the ground state to 10.3 D in the excited state.


Table 1 Main characteristics obtained for the optimized geometry of enol O5 configuration of Luteolin molecule in the ground state and in the first excited state in vacuum and in methanol using M06-2X/6-31++G**

Characteristic | Ground state, vacuum | Ground state, methanol | First excited state, vacuum | First excited state, methanol
Ground state energy, E0 (a.u.) | −1028.6216 | −1028.6458 | – | −1028.6340
First excited state energy, E1 (a.u.) | −1028.4655 | −1028.4955 | – | −1028.5142
Zero-point corrected energy, ZE (a.u.) | −1028.3938 | −1028.4183 | – | −1028.2918
Free energy, ΔG (a.u.) | −1028.4385 | −1028.4629 | – | −1028.3375
First frequency, F (Hz) | 28 | 27 | – | 21
Distance O5-H5, ROH (Å) | 0.99 | 0.99 | – | 1.05
Distance O4…O5, ROO (Å) | 2.61 | 2.60 | – | 2.45
Angle O5-H5…O4, αOHO (°) | 147.64 | 149.03 | – | 156.98
Torsion angle C3-C2-C1´-C6´, β (°) | 22.96 | 20.54 | – | 0.01
Dipole moment, P (Debye) | 5.46 | 7.52 | – | 10.31
Absorption/Emission, ΔE (eV) | 4.25 | 4.09 | – | 3.26
Characteristic wavelength of absorption, λabs (nm) | 291.84 | 303.04 | – | –
Oscillator strength of absorption, fab | 0.3954 | 0.6991 | – | –
Characteristic wavelength of emission, λem (nm) | – | – | – | 380.49
Oscillator strength of emission, fem | – | – | – | 1.0

Finally, the characteristic wavelength of the emission is 380 nm; thus, the Stokes shift for the enol configuration in methanol is about 77 nm.
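This value follows directly from the computed wavelengths in methanol listed in Table 1:

```latex
\Delta\lambda_{\text{Stokes}} = \lambda_{\text{em}} - \lambda_{\text{abs}}
 \approx 380.49\,\text{nm} - 303.04\,\text{nm} \approx 77\,\text{nm}
```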

3.3 Emission of the Keto Configuration of the Luteolin Molecule For the keto configuration of the L molecule, the main characteristics in the first excited state in vacuum and in methanol are presented in Table 2. Similar to the enol configuration, in the excited state the keto configuration is practically planar. The dipole moment is significantly greater in methanol than in vacuum. The characteristic wavelength of the emission is 623 nm in vacuum and 487 nm in methanol. Note that the difference from the enol configuration is more than 100 nm.


Table 2 Main characteristics obtained for the optimized geometry of the keto O5 configuration of Luteolin molecule in the first excited state in vacuum and in methanol using M06-2X/6-31++G**

Characteristic | Vacuum | Methanol
Ground state energy, E0 (a.u.) | −1028.5813 | −1028.6199
First excited state energy, E1 (a.u.) | −1028.5081 | −1028.5264
Zero-point corrected energy, ZE (a.u.) | −1028.2838 | −1028.3028
Free energy, ΔG (a.u.) | −1028.3297 | −1028.3490
First frequency, F (Hz) | 24 | 22
Distance O4-H5, ROH (Å) | 0.98 | 0.98
Distance O4…O5, ROO (Å) | 2.68 | 2.65
Angle O5-H5…O4, αOHO (°) | 146.00 | 146.44
Torsion angle C3-C2-C1´-C6´, β (°) | 0.06 | 0.03
Dipole moment, P (Debye) | 9.77 | 12.03
Emission, ΔE (eV) | 1.99 | 2.54
Characteristic wavelength of emission, λem (nm) | 623.18 | 487.40
Oscillator strength of emission, fem | 0.04 | 0.30

4 Conclusions The calculations performed using the M06-2X/6-31++G** methodology showed that in the ground state the enol configuration of the Luteolin molecule has an energy minimum both in vacuum and in methanol. The characteristic absorption wavelengths obtained for the enol configuration are about 290 nm in vacuum and 300 nm in methanol. In the excited state in vacuum, only the keto configuration is stable, and its characteristic emission wavelength is near 620 nm. In methanol, both the enol and keto configurations of the Luteolin molecule are stable in the excited state, and two peaks could be observed, at 380 nm and 490 nm, respectively. Acknowledgements The authors thankfully acknowledge the computer resources, technical expertise, and support provided by the Laboratorio Nacional de Supercómputo del Sureste de México, a CONACYT member of the network of national laboratories.

References 1. Kumar S, Pandey AK (2013) Chemistry and biological activities of flavonoids: an overview. Sci World J 2013:162750. https://doi.org/10.1155/2013/162750 2. Panche AN, Diwan AD, Chandra SR (2016) Flavonoids: an overview. J Nutr Sci 5:e47. https:/ /doi.org/10.1017/jns.2016.41 3. Lopez-Lazaro M (2009) Distribution and biological activities of the flavonoid luteolin. MiniRev Med Chem 9:31–59. https://doi.org/10.2174/138955709787001712


4. Materska M, Piacente S, Stochmal A et al (2003) Isolation and structure elucidation of flavonoid and phenolic acid glycosides from pericarp of hot pepper fruit Capsicum annuum L. Phytochemistry 63:893–898. https://doi.org/10.1016/S0031-9422(03)00282-6 5. van Elswijk DA, Schobel UP, Lansky EP et al (2004) Rapid dereplication of estrogenic compounds in pomegranate (Punica granatum) using on-line biochemical detection coupled to mass spectrometry. Phytochemistry 65:233–241. https://doi.org/10.1016/j.phytochem.2003. 07.001 6. Shalaby NMM, Maghraby AS, El-Hagrassy AM (1999) Effect of Daucus carota var.boissieri extracts on immune response of Schistosoma mansoni infected mice. Folia Microbiol (Praha) 44:441–448. https://doi.org/10.1007/BF02903720 7. Kumarasamy Y, Nahar L, Byres M et al (2005) The assessment of biological activities associated with the major constituents of the methanol extract of ‘Wild Carrot’ (Daucus carotaL.) seeds. J Herb Pharmacother 5:61–72. https://doi.org/10.1080/J157v05n01_07 8. Pieroni A, Heimler D, Pieters L et al (1996) In vitro anti-complementary activity of flavonoids from olive (Olea europaea L.) leaves. Pharmazie 51:765–768 9. Esmaeili A, Mousavi Z, Shokrollahi M, Shafaghat A (2013) Antioxidant activity and isolation of luteoline from Centaurea behen L. Grown in Iran. J Chem 2013:1–5. https://doi.org/10. 1155/2013/620305 10. Cox PJ, Kumarasamy Y, Nahar L et al (2003) Luteolin. Acta Crystallogr Sect E Struct Rep Online 59:o975–o977. https://doi.org/10.1107/S1600536803013229 11. Deriabina A, Prutskij T, Castillo Trejo L et al (2022) Experimental and theoretical study of fluorescent properties of morin. Molecules 27:4965. https://doi.org/10.3390/molecules271 54965 12. Prutskij T, Deriabina A, Melendez FJ et al (2021) Concentration-dependent fluorescence emission of quercetin. Chemosensors 9:315. https://doi.org/10.3390/chemosensors9110315 13. Yang Y, Zhao J, Li Y (2016) Theoretical study of the ESIPT process for a new natural product quercetin. Sci Rep 6:32152. https://doi.org/10.1038/srep32152 14. Amat A, Clementi C, De Angelis F et al (2009) Absorption and emission of the apigenin and luteolin flavonoids: a TDDFT investigation. J Phys Chem A 113:15118–15126. https://doi.org/ 10.1021/jp9052538 15. Parr RG (1980) Density functional theory of atoms and molecules. In: Fukui K, Pullman B (eds) Horizons of quantum chemistry. Springer, Netherlands, Dordrecht, pp 5–15 16. Zhao Y, Truhlar DG (2008) The M06 suite of density functionals for main group thermochemistry, thermochemical kinetics, noncovalent interactions, excited states, and transition elements: two new functionals and systematic testing of four M06-class functionals and 12 other functionals. Theor Chem Acc 120:215–241. https://doi.org/10.1007/s00214-007-0310-x 17. Ditchfield R, Hehre WJ, Pople JA (1971) Self-consistent molecular-orbital methods. IX. An extended gaussian-type basis for molecular-orbital studies of organic molecules. J Chem Phys 54:724–728. https://doi.org/10.1063/1.1674902 18. Frisch MJ, Pople JA, Binkley JS (1984) Self-consistent molecular orbital methods 25. Supplementary functions for Gaussian basis sets. J Chem Phys 80:3265–3269. https://doi.org/10.1063/ 1.447079 19. Frisch MJ, Trucks GW, Schlegel HB et al (2016) Gaussian˜16 Revision C.01 20. Tomasi J, Mennucci B, Cammi R (2005) Quantum mechanical continuum solvation models. Chem Rev 105:2999–3094. https://doi.org/10.1021/cr9904009

Efficiency of Molecular Mechanics as a Tool to Understand the Structural Diversity of Watson–Crick Duplexes Andrea Ruiz, Alexandra Deriabina, Eduardo Gonzalez, and Valeri Poltev

Abstract To evaluate the ability of the molecular mechanics method to reproduce experimental data and the results of quantum mechanics studies for simple fragments of DNA, extensive computations have been performed using popular AMBER force fields. The systems considered in this work are deoxydinucleoside monophosphates (dDMPs) as minimal fragments of a single chain, the fragments of the sugar-phosphate backbone corresponding to dDMPs, and complementary dDMPs (cdDMPs) as minimal fragments of the DNA duplex. Various nucleotide sequences for selected conformational classes have been considered. The majority of the systems optimized by the molecular mechanics method reproduce both the experimental data and the regularities obtained in quantum mechanics computations. Nevertheless, some AMBER-optimized conformations differ substantially from the experimental data and from the quantum mechanics results. The deviations consist in the formation of H-bonds between neighboring nucleosides, in the distortion of base-pair planarity for cdDMPs, and in an increase of intra-chain base stacking for dDMPs. Analysis of these deviations suggests that the force fields used describe H-bonding and base-stacking interactions with insufficient accuracy. Keywords DNA conformations · Molecular mechanics · DNA minimal fragments

1 Introduction DNA, the biomolecule carrying hereditary material, has been extensively studied since the discovery of its structure by Franklin, Watson, and Crick [1]. The complementary double helix consists of two antiparallel polynucleotide chains that contain rigid units in the form of two groups of bases, purines and pyrimidines, and flexible units represented by the sugar-phosphate backbone (SPB) [2].
A. Ruiz (B) · A. Deriabina · E. Gonzalez · V. Poltev Faculty of Physical and Mathematical Sciences, Autonomous University of Puebla (BUAP), 72570 Puebla, Mexico e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_29


Although many studies of nucleic acid structure and properties have been performed since its discovery, the role of the DNA subunits in the formation of its various conformational states is still not clear. In the last decades, the structural analysis of this biomolecule has generated numerous experimental and computational results. The Nucleic Acid Database NDB [3] and the Protein Data Bank PDB [4] store experimental data obtained from X-ray diffraction studies and from nuclear magnetic resonance methods in solution. Based on these data, thousands of experimental dinucleotide steps were analyzed by Schneider et al. [5–8]; as a result, 96 dinucleotide conformers of RNA and DNA have so far been designated as NtCs and classified. The NtCs are grouped into 15 families of the conformational alphabet of nucleic acids (CANA). The CANA codes of DNA dinucleotides can correspond to B-like, A-like, or mixed B-A and A-B forms. Each NtC is defined by 12 geometric parameters: 7 of them are related to sugar-phosphate backbone torsions, 2 to glycosidic torsions, and the last 3 to distances and a pseudo-torsion angle [8]. For the study of fragments of the Watson–Crick duplex (WCD), it is sufficient to consider nine geometric parameters: the seven torsion angles δ1, ε, ζ, α, β, γ, δ2 and the two glycosidic angles χ1, χ2. Figure 1 shows the designation of these torsion angles. The variety of possible B forms of DNA can be represented by five groups: BBB, B12, miB, BBw, and BB2. In this work, two of these groups, BBB and miB, were selected for detailed study by MM methods. The first one contains two conformational classes (NtCs) of BI: BB00 and BB01 (less populated). The second family, miB, shares typical BBB features but presents changes in some values of the torsion angles of the SPB; in particular, the NtCs BB12 and BB13 were studied. Also, NtC BA09 of the B-A-type CANA group, with the first nucleoside of the BI family and the second one in an A-like form, was considered.
Fig. 1 Torsion angles designation for dDMP. H5T and H3T are hydrogen atoms at the 5’ and 3’ ends of the chain, respectively


Table 1 The values of the torsional angles of NtCs selected [8] (in °)

NtC | δ1 | ε | ζ | α | β | γ | δ2 | χ1 | χ2
BB00 | 138 | 183 | 258 | 304 | 180 | 44 | 138 | 253 | 258
BB01 | 131 | 181 | 266 | 301 | 176 | 49 | 120 | 248 | 244
BB12 | 140 | 196 | 280 | 257 | 76 | 171 | 140 | 269 | 205
BB13 | 143 | 187 | 293 | 219 | 98 | 161 | 146 | 253 | 219
BA09 | 134 | 200 | 287 | 256 | 68 | 172 | 90 | 265 | 186

The values of the torsion angles of the selected NtCs are shown in Table 1. Theoretical computational methods enable the study of the contributions of the DNA subunits to its conformations through Molecular Mechanics (MM) and Quantum Mechanics (QM) approaches. MM and QM computations carried out in our group (Poltev et al.) [9–12] determined that the minimal fragment providing structural information for the WCDs in the BI, BII, AI, and AII NtCs is the dDMP. Additionally, it was shown that the three-dimensional structure of the WCD depends on the directionality of the SPB, on the preferable torsion angles, and on the sequence-dependent regularity of adjacent-base overlap for Pur-Pur (purine-purine) and Pur-Pyr (purine-pyrimidine) sequences, with minor overlap for Pyr-Pyr and Pyr-Pur ones. These conclusions also hold for the minimal fragments of the duplex, the cdDMPs. Recently, our group published a series of papers [13, 14] uncovering the regularities of 3D structure formation for NtCs different from those previously studied (BI, BII, AI, and AII). Computations were performed on minimal fragments of DNA by both QM and MM methods. The new findings show that two types of NtC conformations compatible with the WCD exist: those with torsion angles close to the energy minima of separate SPB fragments, and those having rather different torsion values. In the first group, the base-overlap regularities are the same as those mentioned above; however, NtCs of the second type do not necessarily follow these rules, and BII is now placed in this group. The aim of this work is to assess the ability of the MM method to reproduce the regularities of local 3D DNA structure for selected NtCs, using two different AMBER force fields.

2 Methods 2.1 Method of Molecular Mechanics The study of the spatial configuration and atomic interactions of biomolecules can be carried out by classical and quantum simulation methods. Given the close relationship between the size of the system and the computational cost, MM allows the analysis of simple DNA fragments to be extended. In MM, the classical description


of the atoms of the molecular system, based on the Born–Oppenheimer approximation, is used. The simplest potential in MM is represented by a sum of four terms (Eq. 1). The first three terms correspond to intramolecular interactions related to bond lengths (Eb), bond angles (Ea), and torsion angles (Et). The last term describes the non-bonded pair-wise interactions (Einter), such as the Coulomb and van der Waals potentials [15].

\[ E_{\text{total}}(r) = \sum E_b + \sum E_a + \sum E_t + \sum E_{\text{inter}} \tag{1} \]

The mathematical expressions for the energy and its components, together with a set of parameters, are known as the Force Field (FF), which is used in the search for local minima. One of the most widely used FFs for biological systems is the AMBER force field, originally proposed by Weiner and Kollman [16]. Over the years, improvements have been made to the FFs that describe DNA, such as the BSC1 [17] and OL15 [18] FFs used in this work. The differences between these FFs are mainly related to the parameters describing the torsion angles χ, ε/ζ, and β. A minimal sketch of how such a four-term potential is evaluated is given below.
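The sketch evaluates the four contributions of Eq. (1) for a toy system, using harmonic bond and angle terms, a periodic torsion term, and Lennard-Jones plus Coulomb non-bonded terms. The functional forms are standard AMBER-type expressions, but all parameter values here are arbitrary illustrative numbers, not BSC1 or OL15 parameters.

```python
import numpy as np

def bond_energy(r, r0, kb):
    """Harmonic bond stretching: E_b = kb * (r - r0)**2."""
    return kb * (r - r0) ** 2

def angle_energy(theta, theta0, ka):
    """Harmonic angle bending: E_a = ka * (theta - theta0)**2 (angles in radians)."""
    return ka * (theta - theta0) ** 2

def torsion_energy(phi, vn, n, gamma):
    """Periodic torsion: E_t = (Vn/2) * (1 + cos(n*phi - gamma))."""
    return 0.5 * vn * (1.0 + np.cos(n * phi - gamma))

def nonbonded_energy(r, epsilon, sigma, qi, qj, coulomb_k=332.06):
    """Lennard-Jones + Coulomb pair energy (kcal/mol, Angstrom, elementary charges)."""
    lj = 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = coulomb_k * qi * qj / r
    return lj + coulomb

# E_total is the sum over all terms of each class (Eq. 1); one term of each class shown.
e_total = (bond_energy(1.09, 1.01, 340.0)
           + angle_energy(np.deg2rad(107.0), np.deg2rad(109.5), 50.0)
           + torsion_energy(np.deg2rad(60.0), 1.4, 3, 0.0)
           + nonbonded_energy(3.5, 0.15, 3.3, -0.5, 0.3))
print(f"Illustrative total energy: {e_total:.3f} kcal/mol")
```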

2.2 Preparation and Characterization of DNA Minimal Fragments Initial structures of the fragments were extracted from NDB [3] and adjusted. The phosphate group was neutralized by a Na+ ion for each dDMP, cdDMP, and SPB fragment. Phosphate groups at the 5’ and 3’ ends of the continuous chain were replaced by hydrogen atoms (Fig. 1). To construct SPB fragments, the bases of the dDMPs were replaced by hydrogen atoms. Two versions of the AMBER force field were used to search for minimum energy structures: BSC1 [17] and OL15 [18], by means of AMBER 18 software [19]. The conformational characteristics of the DNA fragments obtained were calculated using 3DNA [20] and DNATCO [21] software. The images were constructed by means of the UCSF CHIMERA program [22].
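The torsion angles reported throughout this work are standard dihedral angles defined by four consecutive atoms. A minimal sketch of how such an angle can be computed from Cartesian coordinates is shown below; the coordinates are placeholders, and the production analyses in this work were done with 3DNA and DNATCO rather than with this function.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle p0-p1-p2-p3 in degrees, mapped to the 0-360 range."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1   # component of b0 perpendicular to the central bond
    w = b2 - np.dot(b2, b1) * b1   # component of b2 perpendicular to the central bond
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x)) % 360.0

# Placeholder coordinates of four backbone atoms (Angstrom):
print(dihedral([0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.0, 1.4, 0.0], [3.4, 1.6, 1.1]))
```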

3 Results 3.1 Conformational Classes of DNA with Similar Characteristics Considering the WCDs, we selected two pairs of NtCs with similar values of the torsion angles corresponding to the B form. The selected NtCs comprise the canonical BI form (BB00 and BB01), which differ by less than 20° in their torsion angles, and


two classes, BB12 and BB13, differing by more than 30° in the α torsion angle. One more NtC class considered is BA09, which differs from BB12 by the δ2 torsion. The values of the torsion angles for these NtCs are presented in Table 1. Optimizations with the BSC1 AMBER force field have been performed for the dDMPs and cdDMPs, comprising all possible base sequences for the BB00 and BB01 NtCs. It appears that when both chains of the cdDMPs belong to one of the mentioned NtCs, the energy minima for each optimized sequence are the same for the BB00:BB00, BB00:BB01, and BB01:BB01 NtC combinations. However, this force field is not capable of reproducing the characteristics of the DNA double helix when Guanine (Gua) is present in the 5’-nucleoside. In this case, hydrogen bonds are formed between the amino group of Gua and the deoxyribose oxygen of the other nucleoside, as displayed in Fig. 2b. The above results can also be extended to dDMPs (BB00 and BB01): all optimized sequences without Gua in the 5’-nucleoside correspond to the BB00 NtC (Fig. 2a), except for the dCpdC sequence, which displays a configuration of perpendicular bases shown in Fig. 3c. For a purine-purine sequence of the BB00 NtC and a pyrimidine-pyrimidine sequence in the BB07 NtC of the complementary chain, the regularities reported for the BI DNA family are not reproduced. Geometry optimization of the dGpdG:dCpdC fragment does not preserve the rules of overlap, since it has a substantial overlap of the two purines with respect to the complementary chain; in this cdDMP, the two cytosine (Cyt) bases have a stacking overlap of 1.72 Å2 and the two Gua bases of 0.12 Å2, while the artificial H-bond is not present. Geometry optimizations with the BSC1 force field of the separate SPBs of all possible 16 dDMPs of BB00 result in two minimum energy structures that differ by

Fig. 2 Two cdDMPs conformations optimized with BSC1 force field. Cytosine in yellow, thymine in blue, adenine in red, guanine in green. Both chains correspond to BB00 NtC. a Sequence dCpdT:dApdG reproduces the characteristics of the DNA double helix. b Sequence dTpdC:dGpdA, guanine of 5’-nucleoside forms artificial H-bond with a distance of 2.15 Å

Fig. 3 Conformations of dDMPs optimized with the BSC1 force field. Chains correspond to the BB00 NtC. a Sequence dApdG. b Sequence dGpdA; guanine in the 5’-nucleoside position forms an H-bond. c Sequence dCpdC, perpendicular bases (T-shaped)


about 1 kcal/mol in energy and by no more than 10° in each torsion angle. Optimized SPB fragments of the BB01 dDMPs correspond to the less negative energy minimum of BB00 (Table 2). The OL15 force field produces only one energy minimum for the optimized SPB fragments belonging to both the BB00 and BB01 NtCs. A similar behavior follows from the geometry optimization of the separate SPB corresponding to the BB12 and BB13 NtCs. These optimizations produce two kinds of energy minima: those having torsion angles close to the BB12 NtC (two structures obtained with the BSC1 force field and one structure with OL15), and minima whose δ torsions are considerably different from the values of the torsion angles of the BB12 and BB13 classes, with A-DNA-like δ1 (105°) and δ2 (109°) values (Table 2). The experimental and MM-optimized conformations of the SPB corresponding to BB01 and BB13 are displayed in Fig. 4. For the next series of computations, Pur-Pyr (A,G–C,T) and Pyr-Pur (C,T–A,G) sequences pertaining to the BB12 and BB13 NtCs were selected. Analysis of the conformational characteristics of the energy minima of these structures demonstrates that the torsion angles of dDMPs belonging to a single-stranded or a double-stranded fragment correspond to a BB12 NtC. This also extends to cdDMPs, regardless of their complementary dDMP NtC; however, some of these complementary dDMPs do not preserve the three-dimensional DNA characteristics, for example, dApdC:dGpdT (NDB ID PD0514), where the dGpdT sequence is part of a BB00 conformation that produces an incorrect H-bond.
Table 2 Torsional angles of the SPB for conformational classes corresponding to energy minima computed with the BSC1 and OL15 force fields. Angles in [°]. Characteristic values of the torsional angles of the NtC classes [8] in [°]

NtC | Force field | δ1 | ε | ζ | α | β | γ | δ2
BB00 | NtC [8] | 138 | 183 | 258 | 304 | 180 | 44 | 138
BB00 | BSC1(1) | 134 | 192 | 283 | 290 | 169 | 47 | 138
BB00 | BSC1(2) | 143 | 194 | 282 | 291 | 171 | 49 | 138
BB00 | OL15 | 116 | 181 | 276 | 291 | 156 | 49 | 126
BB01 | NtC [8] | 131 | 181 | 266 | 301 | 176 | 49 | 120
BB01 | BSC1(1) | 134 | 192 | 283 | 290 | 169 | 47 | 138
BB01 | OL15 | 116 | 181 | 276 | 291 | 156 | 49 | 126
BB12 | NtC [8] | 140 | 196 | 280 | 257 | 76 | 171 | 140
BB12 | BSC1(1) | 131 | 194 | 277 | 282 | 74 | 186 | 137
BB12 | BSC1(2) | 140 | 198 | 276 | 281 | 73 | 186 | 138
BB12 | OL15(1) | 105 | 179 | 270 | 292 | 90 | 182 | 109
BB12 | OL15(2) | 127 | 184 | 269 | 281 | 80 | 182 | 134
BB13 | NtC [8] | 143 | 187 | 293 | 219 | 98 | 161 | 146
BB13 | BSC1(1) | 131 | 194 | 277 | 282 | 74 | 186 | 137
BB13 | BSC1(2) | 140 | 198 | 276 | 281 | 73 | 186 | 138
BB13 | OL15(1) | 105 | 179 | 270 | 292 | 90 | 182 | 109


Fig. 4 SPB conformations of BB01 NtC: a Experimental fragment. b SPB optimized with BSC1 force field. c SPB optimized with OL15 force field. SPB conformations of BB13 NtC: d Experimental fragment. e SPB optimized with BSC1 force field. f SPB optimized with OL15 force field

As mentioned, the geometry optimizations using the BSC1 force field for dDMPs of the BB12 and BB13 NtCs give rise to structures with BB12 characteristics. When Gua is present in the 5’-nucleoside, the distance between the hydrogen atom of the amino group of Gua and the oxygen atom of Cyt or Thy is about 2.4–2.7 Å. Figure 5 shows these distances for an experimental and an optimized fragment, along with the base-overlap projections. For all sequences, the ring overlap is overestimated with respect to the experimental data for the duplex fragment due to the method used.


Fig. 5 Experimental a and BSC1-optimized b dGpdT conformations of the BB13 NtC class. Distances between the hydrogen of the Gua amino group and O2 of Thy are marked by dotted lines (left). Overlap projections (right)

3.2 BA09 Conformational Class of DNA Another conformational class analyzed is the BA09 NtC, which contains a first nucleoside with a B-like sugar and a second nucleoside with an A-like one. Pur-Pyr and Pyr-Pur sequences of dDMPs and cdDMPs were selected. The BSC1 force field produces minimum energy structures that preserve the BA09 NtC torsion angles and the base-superposition regularities of the experimental data. Table 3 presents the results obtained for four dDMPs and four cdDMPs that meet the regularities of the classical BI family, namely, the torsion angles of the sugar-phosphate backbone, the two glycosidic torsions, and the base-overlapping area. Geometry optimizations of dDMPs and cdDMPs with a Pur-Pyr sequence demonstrate a higher base overlap with respect to the corresponding DNA fragments. When the dDMPs contain Guanine, distances of 2.0–2.7 Å arise between the hydrogen of the Cyt amino group and the oxygen of the adjacent base Gua (Fig. 6). The optimized SPB structures produce the energy minima already mentioned for the BB00, BB01, BB12, and BB13 NtCs, i.e., energy minima with all torsion angles close to those of the corresponding NtCs of the experimental data. In this case, two minima are generated using the BSC1 force field. Nonetheless, when the OL15 force field is used, the SPB torsion angle values differ from the characteristic values of BA09, particularly in the δ1 and α torsions, as shown in Table 4.


Table 3 Experimental and BSC1-optimized characteristics of selected dDMPs pertaining to BA09 conformation class: values of the torsion angles of the sugar-phosphate backbone, the two glycosidic torsions, and the base overlapping area of the dDMPs (St1) in BA09 conformation and the complementary dDMPs (St2)

Fragment, structure | δ1 | ε | ζ | α | β | γ | δ2 | χ1 | χ2 | St1 | St2
BA09 | 134 | 200 | 287 | 256 | 68 | 172 | 90 | 265 | 186 | – | –
dApdC BA09 1t9i, Exp | 129 | 200 | 286 | 259 | 57 | 170 | 92 | 266 | 191 | 4.08 | 1.45
dApdC BA09 1t9i, cdDMP | 138 | 201 | 288 | 260 | 56 | 165 | 84 | 263 | 191 | 4.28 | 1.26
dApdC BA09 1t9i, dDMP | 139 | 202 | 288 | 261 | 56 | 166 | 83 | 262 | 191 | 4.01 | –
dGpdC BA09 2hoi, Exp | 131 | 204 | 280 | 257 | 80 | 167 | 95 | 278 | 188 | 4.07 | 1.92
dGpdC BA09 2hoi, cdDMP | 139 | 200 | 288 | 262 | 57 | 172 | 80 | 266 | 184 | 3.03 | 2.84
dGpdC BA09 2hoi, dDMP | 139 | 202 | 287 | 261 | 56 | 168 | 81 | 267 | 194 | 4.09 | –
dCpdA BA09 5hr9, Exp | 147 | 205 | 288 | 250 | 78 | 169 | 97 | 276 | 168 | 0.00 | 0.15
dCpdA BA09 5hr9, cdDMP | 139 | 209 | 283 | 267 | 60 | 178 | 143 | 245 | 195 | 0.62 | 0.02
dCpdA BA09 5hr9, dDMP | 137 | 208 | 293 | 256 | 57 | 164 | 83 | 242 | 192 | 2.03 | –
dCpdG BA09 4qlc, Exp | 136 | 214 | 279 | 260 | 65 | 154 | 91 | 257 | 176 | 0.11 | 0.00
dCpdG BA09 4qlc, cdDMP | 140 | 205 | 292 | 259 | 55 | 166 | 80 | 254 | 184 | 0.64 | 0.00
dCpdG BA09 4qlc, dDMP | 142 | 214 | 290 | 260 | 60 | 171 | 78 | 240 | 195 | 0.72 | –

Fig. 6 Sequence dApdT (top) and sequence dGpdC (bottom). Experimental (a and c) and BSC1-optimized (b and d) dDMP conformations of the BA09 NtC class. The distance between the hydrogen of the Cyt amino group and the oxygen of Gua is shown


Table 4 Torsional angles of SPB optimized for conformational classes corresponding to energy minima computed with BSC1 and OL15 force field. Angles in [°]

NtC | Force field | δ1 | ε | ζ | α | β | γ | δ2
BA09 | NtC [8] | 134 | 200 | 287 | 256 | 68 | 172 | 90
BA09 | BSC1(1) | 132 | 196 | 277 | 281 | 74 | 183 | 78
BA09 | BSC1(2) | 138 | 200 | 279 | 278 | 71 | 184 | 78
BA09 | OL15(1) | 105 | 179 | 270 | 292 | 90 | 182 | 109
BA09 | OL15(2) | 74 | 154 | 278 | 289 | 75 | 176 | 99

4 Discussion This work is a continuation of previous publications of our group, in which the contribution of the DNA subunits to its three-dimensional structure is studied for different conformational classes of simple DNA fragments. Using the MM method, an attempt has been made to assess the feasibility of the MM method to reproduce the regularities of base superposition observed in experimental data and in the results obtained with QM. The results confirmed the utility of the MM method (using the BSC1 and OL15 AMBER force fields), which produces structural characteristics for separate SPB fragments similar to those obtained with QM methods. The BB01 and BB13 dDMP conformations have minimum energy structures close to those of the BB00 and BB12 NtCs, respectively. This conclusion extends to optimized cdDMPs: the torsion angles of dDMPs and cdDMPs keep the features revealed in the geometry optimization of the SPB. Nevertheless, when Guanine is present in the 5’-nucleoside, an artificial hydrogen bond is formed, a result that will be studied in more detail later. In most cases, MM methods reproduce the regularities revealed by the experimental data and by the results obtained with QM. However, in dDMPs and cdDMPs the overlap area is overestimated, and in some cases artificial hydrogen bonds that were not observed in the experimental structures are formed. This suggests that, when the bases participate, there are greater deviations, which are related to the FF used. MM is a useful method for the calculation of such structures and can be considered a preliminary step to the study with QM methods. Acknowledgements The authors gratefully acknowledge the Laboratorio Nacional de Supercomputo del Sureste de Mexico (LNS), a member of the CONACYT national laboratories, for computer resources, technical advice, and support.

References 1. Watson JD, Crick FH (1953) Molecular structure of nucleic acids: a structure for deoxyribose nucleic acid. Nature 171:737–738. https://doi.org/10.1038/171737a0


2. Kuriyan J, Konforti B, Wemmer D (2012) The molecules of life: physical and chemical principles. W.W. Norton & Company, New York, pp 15–25. https://doi.org/10.1201/978042925 8787 3. Coimbatore Narayanan B, Westbrook J, Ghosh S, Petrov AI, Sweeney B, Zirbel LNB, Berman HM (2014) The nucleic acid database: new features and capabilities. Nucleic Acids Res 42:D114–D122. https://doi.org/10.1093/nar/gkt980 4. Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE (2000) The protein data bank. Nucl Acids Res 28:235–242. https://doi.org/10.1093/nar/28. 1.235 5. Svozil D, Kalina J, Omelka M, Schneider B (2008) DNA conformations and their sequence preferences. Nucl Acids Res 36:3690–3706. https://doi.org/10.1093/nar/gkn260 ˇ ˇ 6. Cech P, Kukal J, Cerný J, Schneider B, Svozil D (2013) Automatic workflow for the classification of local DNA conformations. BMC Bioinformatics, BMC Bioinformatics 14. https://doi. org/10.1186/1471-2105-14-205 ˇ ˇ 7. Schneider B, Božíková P, Neˇcasová I, Cech P, Svozil D, Cerný J (2018) A DNA structural alphabet provides new insight into DNA flexibility. Acta Crystallogr Sect Struct Biol 74:52–64. https://doi.org/10.1107/S2059798318000050 ˇ 8. Cerný J, Božíková P, Svoboda J, Schneider B (2020) A Unified dinucleotide alphabet describing both RNA and DNA structures. Nucl Acids Res 48:6367–6381. https://doi.org/10.1107/S20 59798320009389 9. Poltev VI, Anisimov VM, Danilov VI, Deriabina A, Gonzalez E, Jurkiewicz A, Le´s A, Polteva N (2008) DFT study of B-like conformations of deoxydinucleoside monophosphates containing Gua and/or Cyt and their complexes with Na+ cation. J Biomol Struct Dyn 25:563–571. https:/ /doi.org/10.1080/07391102.2008.10507203 10. Poltev VI, Anisimov VM, Danilov VI, García D, Deriabina A, González E, Salazar R, Rivas F, Polteva N (2011) DFT study of DNA sequence dependence at the level of dinucleoside monophosphates. Comput Theor Chem 975:69–75. https://doi.org/10.1016/j.comptc. 2011.03.049 11. Poltev VI, Anisimov VM, Danilov VI, García D, Sánchez C, Deriabina A, González E, Rivas F, Polteva N (2014) The role of molecular structure of sugar-phosphate backbone and nucleic acid bases in the formation of single-stranded and double-stranded DNA structures. Biopolymers 101:640–650. https://doi.org/10.1002/bip.22432 12. Poltev VI, Anisimov VM, Sanchez C, Deriabina A, González E, Garcia D, Rivas F, Polteva N (2016) Analysis of the conformational feautures of Watson-Crick duplex fragments by molecular mechanics and quantum mechanics methods. Biophysics 61:217–226. https://doi.org/10. 1134/S0006350916020160 13. Poltev VI, Anisimov VM, Dominguez V, Deriabina A, Gonzalez E, Garcia D, Vázquez-Báez V, Rivas F (2020) Current problems in computer simulation of variability of three-dimensional structure of DNA. In: Mammino L, Ceresoli D, Maruani J, Brändas E (eds) Proceedings of the advances in quantum systems in chemistry, physics, and biology, Kruger Park, South Africa, 23–29 Sept 2018. Springer, Cham, Germany, pp 233–253. https://doi.org/10.1007/978-3-03034941-7_12 14. Poltev VI, Anisimov VM, Dom´ınguez V, Ruiz A, Deriabina A, González E, Garcia D, VázquezBáez V, Rivas F (2021) Understanding the origin of structural diversity of DNA double helix. Computation, 9:98. https://doi.org/10.3390/computation9090098 15. Poltev V (2017) Molecular mechanics: principles, history, and current status. In: Leszczynski J et al (eds) Handbook of computational chemistry, pp 21–67. https://doi.org/10.1007/978-3319-27282-5_9 16. 
Cornell WD, Cieplak P, Bayly CI, Gould IR, Merz KM, Ferguson DM, Spellmeyer DC, Fox T, Caldwell JW, Kollman PA (1995) A second generation force field for the simulation of proteins, nucleic acids, and organic molecules. J Am Chem Soc 117:5179–519710. https://doi.org/10. 1021/ja00124a002 17. Ivani, I, Dans PD, Noy A, Pérez A, Faustino I, Hospital A, Walther J, Andrio P, Goñi R, Balaceanu A et al (2016) Parmbsc1: a refined force field for DNA simulations. Nat Methods 13:55–58. https://doi.org/10.1038/nmeth.3658


18. Zgarbová M, Šponer J, Otyepka M, Cheatham TE, Galindo-Murillo R, Jureˇcka P (2015) Refinement of the sugar–phosphate backbone torsion beta for AMBER force fields improves the description of Z- and B-DNA. J Chem Theory Comput 11:5723–5736. https://doi.org/10.1021/ acs.jctc.5b00716 19. Case DA, Ben-Shalom IY, Brozell SR, Cerutti DS, Cheatham TE III, Cruzeiro VWD, Darden TA, Duke RE, Ghoreishi D, Gilson MK, Gohlke H, Goetz AW, Greene D, Harris R, Homeyer N, Huang Y, Izadi S, Kovalenko A, Kurtzman T, Lee TS, LeGrand S, Li P, Lin C, Liu J, Luchko T, Luo R, Mermelstein DJ, Merz KM, Miao Y, Monard G, Nguyen C, Nguyen H, Omelyan I, Onufriev A, Pan F, Qi R, Roe DR, Roitberg A, Sagui C, Schott-Verdugo S, Shen J, Simmerling CL, Smith J, SalomonFerrer R, Swails J, Walker RC, Wang J, Wei H, Wolf RM, Wu X, Xiao L, York DM, Kollman PA (2018) AMBER 2018. University of California, San Fransisco 20. Lu X, Olson WK (2003) 3DNA: a software package for the analysis, rebuilding and visualization of three-dimensional nucleic acid structures. Nucl Acids Res 31:5108–5121. https://doi. org/10.1093/nar/gkg680 ˇ 21. Cerný J, Božíková P, Malý M, Tykaˇc M, Biedermannová L, Schneider B (2020) Structural alphabets for conformational analysis of nucleic acids available at dnatco.datmos.org. Acta Crystallogr Sect Struct Biol 76:805–813. https://doi.org/10.1107/S2059798320009389 22. Pettersen EF, Goddard TD, Huang CC, Couch GS, Greenblatt DM, Meng EC, Ferrin TE (2004) UCSF Chimera–a visualization system for exploratory research and analysis. J Comput Chem 25:1605–1612. https://doi.org/10.1002/jcc.20084

Conformational Changes of Drew–Dickerson Dodecamer in the Presence of Caffeine César Morgado, Alexandra Deriabina, Eduardo Gonzalez, and Valeri Poltev

Abstract Caffeine has always aroused great interest; many of its physiological effects have been known since ancient times, and nowadays it has become a substance of interest due to the evidence found regarding its interaction with oncological drugs and with DNA chains. To elucidate the interaction of caffeine with DNA, a molecular mechanics study was performed with the Barcelona Supercomputing Center 1 (BSC1) force field using a fixed number of atoms, a fixed volume, and a fixed temperature, also called the NVT ensemble. Molecular docking was done using AutoDock 4.2, and the configurations with the lowest interaction energy were then subjected to a molecular dynamics simulation with the software AMBER19. It was found that the interaction of caffeine in the minor groove of the Drew–Dickerson dodecamer changes the torsion angles at the ninth and tenth nucleotides (cytosine and guanine): an increase of 9.8° in angle α, a decrease of 18.7° in angle β, an increase of 12.2° in angle γ, a decrease of 26.5° in angle δ, an increase of 29.7° in angle ε, a slight decrease of 2.9° in angle ζ, and a decrease of 30° in angle χ. Keywords Molecular dynamics · DNA simulation · Caffeine

1 Introduction Caffeine is a compound that belongs to the group of xanthines and is found naturally in daily consumption products such as coffee, tea, and cocoa [1], as well as in energy beverages and soft drinks [2]. Due to its structure, caffeine can interact with nucleic acids, causing a suppression of cell proliferation that can be linked to a decrease in carcinogenesis in vivo [3]; however, the inhibition of various antitumor agents in the presence of caffeine has also been observed [4].
C. Morgado (B) · A. Deriabina · E. Gonzalez · V. Poltev Faculty of Physical and Mathematical Sciences, Autonomous University of Puebla (BUAP), 72570 Puebla, Mexico e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_30


Direct effects on DNA that have been observed are the inhibition of damage-activated checkpoints [5] and a reduced transduction of HIV-1 in human cells [6]. In the field of computational chemistry, several studies have been conducted on various DNA–ligand complexes [7, 8], and even though there is experimental evidence that caffeine and its derivatives can intercalate in DNA [9], there have been few computational simulation studies of DNA–caffeine complexes. One of the particularities of the caffeine molecular structure is the presence of three hydrogen bond (H-bond) acceptors and the absence of donors for this type of bond. Thus, it can interact with the various DNA bases; however, the sites of interaction and the competition of water molecules with the caffeine molecule have not been rigorously evaluated. For the evaluation of DNA interactions with caffeine, the Drew–Dickerson dodecamer has been selected, which, apart from being the first DNA fragment crystallized and studied by X-ray diffraction [10], is the one whose geometric and hydration characteristics have been most thoroughly studied [11, 12].

2 Methods 2.1 Selection of the Drew–Dickerson Dodecamer Structure The selected dodecamer was obtained from the Nucleic Acid Database entry with NDB ID: NA2708 and PDB ID: 4C64, titled “Ultra High-Resolution Dickerson–Drew Dodecamer B-DNA”, which was obtained by X-ray diffraction at 1.3 Å resolution [13]. Its sequence is 5’-D(-CP-GP-CP-GP-AP-AP-TP-TP-CP-GP-CP-G)-3’, as can be observed in Fig. 1.

2.2 Selection and Characterization of the Caffeine Molecule Structure The structure of caffeine was obtained from the ZINC15 database (ID ZINC00001084) and subjected to geometry optimization using the General AMBER Force Field (GAFF) [14], which is compatible with the Barcelona Supercomputing Center 1 (BSC1) force field. The caffeine structure is shown in Fig. 2. The atom types and charges of the caffeine molecule are shown in Table 1.


Fig. 1 Drew–Dickerson dodecamer structure and its sequence
Fig. 2 Caffeine structure with atom labels matching the GAFF and BSC1 types of Table 1

2.3 Molecular Docking The binding affinity of caffeine with the Drew–Dickerson dodecamer was estimated by molecular docking with the Lamarckian genetic algorithm (LGA) in


Table 1 Atom designations for GAFF parameters used for the caffeine molecule

Label | Element | Description | Charge
C | Carbon | Inner sp2 carbon in conjugated ring systems | −0.350200
C1 | Carbon | sp3 carbon | 0.009600
C2 | Carbon | Inner sp2 carbon in conjugated ring systems | 0.443400
C3 | Carbon | Inner sp2 carbon in conjugated ring systems | 0.485200
C4 | Carbon | sp2 carbon in C=O, C=S | 0.765100
C5 | Carbon | sp2 carbon in C=O, C=S | 0.814500
C6 | Carbon | sp3 carbon | 0.090300
C7 | Carbon | sp3 carbon | 0.080300
N | Nitrogen | sp2 nitrogen with 3 substituents | −0.148900
N1 | Nitrogen | Inner sp2 nitrogen in conjugated ring systems | −0.685000
N2 | Nitrogen | sp2 nitrogen in amides | −0.483400
N3 | Nitrogen | sp2 nitrogen in amides | −0.417300
O | Oxygen | sp2 oxygen in C=O | −0.633500
O1 | Oxygen | sp2 oxygen in C=O | −0.637500
HC | Hydrogen | Hydrogen on aromatic carbon with 2 electron-withdrawal groups | 0.071100
HC1 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.063367
HC2 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.063367
HC3 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.063367
HC4 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.071700
HC5 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.071700
HC6 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.071700
HC7 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.063367
HC8 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.063367
HC9 | Hydrogen | Hydrogen on aliphatic carbon with 1 electron-withdrawal group | 0.063367

Note The labels of the different carbons and hydrogens match those shown in Fig. 2; the intermolecular interactions are the standards used in GAFF


AutoDock 4.2 [15]. A grid box of 30 Å × 40 Å × 145 Å with a spacing of 0.2 Å was used for the search of the interaction points. Flexible docking was used, and the caffeine hydrogens were allowed to rotate around the N–C bonds, so that the molecule could fit into the major and minor grooves of the DNA chain with more precision. The docking data were obtained from 50 runs, each scheduled to end after a maximum of 500,000 energy evaluations; the mutation rate was set to 0.02, the local search rate to 0.05, and the population size to 80. The docked complexes and their interactions with the nucleic acids at the binding sites were analyzed using Avogadro and Visual Molecular Dynamics.
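The fragment below writes an AutoDock 4-style docking parameter file that mirrors the LGA settings just quoted. The keyword names follow common AutoDock 4 DPF usage, but the file names, the ordering, and any keywords not mentioned in the text are illustrative assumptions rather than the authors' actual input.

```python
# Sketch: AutoDock 4 DPF fragment reflecting the quoted LGA settings
# (50 runs, 500,000 evaluations, mutation rate 0.02, local search rate 0.05,
# population 80). Treat as a starting point, not the authors' exact file.

dpf_lines = [
    "ga_pop_size 80",          # population size
    "ga_num_evals 500000",     # maximum number of energy evaluations per run
    "ga_mutation_rate 0.02",   # mutation rate
    "ls_search_freq 0.05",     # local search rate
    "set_ga",                  # commit the genetic algorithm settings
    "ga_run 50",               # number of docking runs
    "analysis",                # cluster and report the docked conformations
]

with open("caffeine_dodecamer.dpf", "w") as dpf:
    dpf.write("\n".join(dpf_lines) + "\n")
```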

2.4 Molecular Dynamics Simulation Molecular dynamics simulations were performed using the AMBER 2016 software [16]. The dodecamer was placed in a parallelepiped with a distance of 10 Å between the edge and the closest atom of the caffeine–DNA complex, giving a workspace of 55.056 Å in length, 54.554 Å in width, and 68.898 Å in height, with a total volume of 206,934.529 Å3. Water molecules were added to the working volume under the TIP3P model; to keep the built system neutral, 22 Na+ ions were randomly added to the working volume. The position of caffeine was taken from the lowest-energy pose predicted by the previously performed docking. To start the dynamics and mimic the behavior of the dodecamer, conditions of one atmosphere of pressure and a temperature of 5 K were used, with a gradual heating of 0.5 ns before starting the dynamics under these conditions. The Langevin thermostat and the Berendsen barostat were used for the 5 ns simulation, and the equilibration of the system was inspected by analyzing the root mean square displacement (RMSD), the root mean square fluctuation (RMSF), and the total system energy.
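A minimal sketch of an AMBER production input matching this description (Langevin thermostat, Berendsen barostat, 5 ns of dynamics) is given below. All values other than those quoted in the text, in particular the target temperature, time step, and output frequencies, are illustrative assumptions.

```python
# Sketch: write a pmemd/sander mdin file for the constant-pressure production stage.
# nstlim * dt = 2,500,000 * 2 fs = 5 ns; ntt=3 selects the Langevin thermostat;
# ntb=2 / ntp=1 / barostat=1 select constant pressure with the Berendsen barostat;
# temp0, cut, ntpr, ntwx are assumed, not taken from the paper.

production_mdin = """5 ns NPT production (illustrative)
 &cntrl
  imin = 0, irest = 1, ntx = 5,
  nstlim = 2500000, dt = 0.002,
  ntc = 2, ntf = 2,
  ntb = 2, ntp = 1, barostat = 1,
  ntt = 3, gamma_ln = 2.0, temp0 = 300.0,
  cut = 10.0,
  ntpr = 5000, ntwx = 5000,
 /
"""

with open("prod.in", "w") as mdin:
    mdin.write(production_mdin)
```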

3 Results and Discussion 3.1 Molecular Docking Molecular docking was performed between the Drew–Dickerson dodecamer and the caffeine molecule to gain insight into the binding and to find the binding site where the molecule couples to the DNA with the lowest energy. Such a binding site was found between the 9th and 10th nucleotides, cytosine and guanosine, in the minor groove; the interaction involves two H-bonds between one of the oxygen atoms of caffeine and the hydrogen atoms of the nucleic bases, as shown in Fig. 3.


Fig. 3 Hydrogen bonds formed between caffeine molecule and the dodecamer at the binding site

The binding free energy of caffeine and the dodecamer is −6.40 kcal/mol; this value of the binding energy supports the formation of the complex. There is another binding site, with an energy of −6.34 kcal/mol, located between the cytosine and guanosine at the third and fourth positions of the other chain, which was expected due to the sequence symmetry of the dodecamer.

3.2 Molecular Dynamics Study Molecular dynamics simulations were used to analyze the stability of the docked complex while interacting with the water solvent and to obtain the conformational changes in the dodecamer geometry. After the 5 ns simulation, the RMSD, RMSF, and energy stability were analyzed to confirm the equilibrium of the complex. It was found that the RMSD of the caffeine–DNA complex showed a deviation within a range of 0.8–1.4 Å due to the flexibility of the hydrogen atoms in the caffeine molecule and the dodecamer chain. The RMSF profiles of the binding sites were analyzed, first for the dodecamer without caffeine and then for its complex with the caffeine molecule, considering the C, P, and O atoms involved in the torsion angles of the DNA chain. The reduction of the fluctuations in the second case shows the stability of the complex. For further insight, the RMSF of the two nucleotides involved in the binding site was analyzed, showing a great affinity between caffeine and the dodecamer, as was also observed in previous Monte Carlo simulations [17]. The torsion angles were estimated in the equilibrium phase of the simulation of the dodecamer, and we found that they barely change compared to the crystal structure; the obtained values are shown in Table 2. The torsion angle values of the sugar-phosphate backbone for the dodecamer–caffeine complex are presented in Table 3.
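For reference, a minimal sketch of the RMSD and RMSF measures used for this stability check is shown below; it assumes the trajectory frames are already superimposed on the reference structure, which in practice is done by the trajectory-analysis tools distributed with AMBER, and the coordinates used here are random placeholders.

```python
import numpy as np

def rmsd(coords, reference):
    """Root-mean-square deviation (Angstrom) between two aligned N x 3 coordinate sets."""
    diff = np.asarray(coords) - np.asarray(reference)
    return float(np.sqrt((diff ** 2).sum() / len(diff)))

def rmsf(trajectory):
    """Per-atom root-mean-square fluctuation over a trajectory of shape (frames, N, 3)."""
    traj = np.asarray(trajectory)
    mean_positions = traj.mean(axis=0)
    return np.sqrt(((traj - mean_positions) ** 2).sum(axis=2).mean(axis=0))

# Toy usage with random placeholder coordinates (10 frames, 5 atoms):
frames = np.random.default_rng(0).normal(size=(10, 5, 3))
print(rmsd(frames[-1], frames[0]), rmsf(frames).shape)
```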


Table 2 Torsion angle values of the dodecamer after 5 ns of molecular dynamics simulation in water at one atmosphere pressure and a temperature of 5 K (without the caffeine). Angles in (°)

Residue | α | β | γ | δ | ε | ζ | χ
1C | – | – | 300.80 | 154.80 | −177.60 | −85.50 | −101.50
2G | −66.00 | 163.50 | 55.30 | 86.80 | −177.10 | −85.30 | −128.30
3C | −67.10 | 165.30 | 61.30 | 99.70 | −184.40 | −91.20 | −119.40
4G | −59.00 | 182.50 | 60.10 | 150.60 | −198.90 | −86.80 | −87.40
5A | −68.40 | 193.10 | 54.10 | 138.50 | −186.00 | −91.20 | −95.30
6A | −65.70 | 173.30 | 53.40 | 98.50 | −182.30 | −88.00 | −126.70
7T | −55.40 | 162.30 | 63.90 | 122.90 | −173.80 | −108.80 | −130.70
8T | −65.90 | 174.90 | 59.60 | 136.80 | −181.10 | −88.00 | −112.50
9C | −51.50 | 154.80 | 70.70 | 81.80 | −131.00 | −89.10 | −163.10
10G | −73.60 | 152.50 | 60.90 | 152.30 | −97.50 | −196.40 | −88.10
11C | −90.70 | 156.60 | 51.10 | 131.20 | −176.80 | −92.30 | −123.90
12G | −63.00 | 178.90 | 54.70 | 143.60 | – | – | −97.30
24G | −70.80 | 184.80 | 47.90 | 84.50 | – | – | −132.20
23C | −52.80 | 114.70 | 49.20 | 79.20 | −165.40 | −71.90 | −144.90
22G | −80.90 | 168.60 | 46.60 | 136.40 | −136.30 | −199.10 | −77.00
21C | −61.60 | 174.90 | 61.60 | 138.90 | −153.40 | −80.20 | −115.90
20T | −66.70 | 161.00 | 60.90 | 87.60 | −179.90 | −82.40 | −143.10
19T | −58.90 | 162.70 | 58.30 | 109.30 | −174.80 | −88.60 | −127.20
18A | −66.70 | 183.00 | 53.20 | 136.40 | −177.30 | −92.60 | −105.10
17A | −56.20 | 201.00 | 49.10 | 142.10 | −187.90 | −90.80 | −102.00
16G | −62.20 | 172.50 | 67.20 | 144.40 | −196.00 | −93.90 | −105.20
15C | −70.10 | 169.20 | 59.10 | 83.50 | −173.80 | −79.90 | −136.50
14G | −65.80 | 187.30 | 46.40 | 139.70 | −184.60 | −99.80 | −98.20
13C | – | – | 58.30 | 136.60 | −181.70 | −94.20 | −116.70
Mean | −65.40 | 169.90 | 56.60 | 121.50 | −171.70 | −98.90 | −115.80
±SD | 8.90 | 17.60 | 6.50 | 26.30 | 23.30 | 32.80 | 20.90

Throughout the simulation, the torsion angles of the sugar-phosphate backbone showed changes within the range typical of the B forms of DNA; no noticeable deformation of the dodecamer was observed. However, at the base in the 9C position there are substantial changes in the torsion angles: an increase of 11.5° in angle α, a decrease of 26.1° in angle β, an increase of 17.7° in angle γ, a decrease of 55.4° in angle δ, an increase of 33° in angle ε, a slight decrease of 5.5° in angle ζ, and a decrease of 43.6° in angle χ. For the angle ζ no significant change is observed. The changes in the angles α, β, and ε are larger than one standard deviation; for the angle δ the change is twice its standard deviation, while for the angles γ and χ the change is three times the deviation.


Table 3 Torsion angle values of the dodecamer–caffeine complex after 5 ns of molecular dynamics simulation in water at one atmosphere pressure and a temperature of 5 K. Angles in (°)

Residue | α | β | γ | δ | ε | ζ | χ
1C | – | – | 296.20 | 149.00 | −157.80 | −126.10 | −106.40
2G | −67.10 | 177.70 | 43.50 | 145.80 | −164.50 | −108.80 | −100.30
3C | −70.00 | 157.50 | 57.30 | 97.00 | −178.10 | −86.00 | −122.20
4G | −64.80 | 173.30 | 60.50 | 145.40 | −167.10 | −134.10 | −104.70
5A | −63.00 | 174.30 | 57.60 | 148.10 | −171.10 | −86.00 | −106.30
6A | −67.30 | 160.90 | 55.40 | 102.00 | −180.80 | −90.30 | −128.40
7T | −56.40 | 163.40 | 63.80 | 95.20 | −175.60 | −86.90 | −148.30
8T | −61.50 | 171.30 | 62.60 | 128.90 | −184.00 | −85.20 | −121.50
9C | −64.20 | 180.10 | 54.90 | 110.00 | −168.40 | −88.00 | −129.00
10G | −64.10 | 162.60 | 52.60 | 144.80 | −119.40 | −189.30 | −98.80
11C | −72.60 | 153.40 | 51.50 | 134.50 | −169.40 | −86.80 | −126.40
12G | −67.40 | 174.10 | 48.60 | 139.40 | – | – | −99.50
24G | −73.40 | 180.90 | 53.00 | 88.40 | – | – | −129.70
23C | −57.50 | 126.50 | 51.30 | 81.00 | −160.20 | −72.80 | −147.50
22G | −79.90 | 162.30 | 48.20 | 128.20 | −146.30 | −170.20 | −98.90
21C | −61.70 | 174.60 | 60.20 | 140.90 | −157.80 | −85.10 | −116.50
20T | −66.40 | 169.30 | 56.60 | 123.30 | −178.00 | −107.00 | −122.80
19T | −62.00 | 177.60 | 55.70 | 117.70 | −172.60 | −94.50 | −129.70
18A | −64.90 | 174.70 | 55.50 | 126.90 | −179.90 | −82.00 | −120.90
17A | −59.80 | 165.40 | 59.50 | 124.50 | −174.70 | −108.40 | −119.10
16G | −61.10 | 175.50 | 60.30 | 140.40 | −182.50 | −91.10 | −102.50
15C | −64.20 | 165.50 | 62.40 | 98.30 | −174.20 | −90.20 | −130.10
14G | −72.50 | 182.10 | 52.40 | 136.00 | −185.00 | −101.30 | −97.90
13C | – | – | 62.10 | 120.90 | −170.90 | −90.80 | −125.00
Mean | −65.50 | 168.30 | 55.90 | 123.60 | −169.00 | −102.80 | −118.00
±SD | 5.60 | 12.20 | 5.20 | 20.30 | 14.80 | 29.10 | 15.00

4 Conclusions The simulations of the Drew–Dickerson dodecamer with caffeine have shown that caffeine can interact with various DNA sequences. The binding zone where the caffeine–DNA complex has the greatest stability has been revealed, and despite the long simulation time the hydrogen bonds with the DNA bases are maintained; thus, the solvent molecules cannot remove caffeine from the interaction site under the conditions considered.


Evidence has been found that the presence of the caffeine molecule deforms the structure of the dodecamer, with changes of up to three times the average standard deviation of the chain. The comparison of the results obtained using molecular dynamics with those previously reported using the Monte Carlo method revealed the resemblance of the ligand position in the groove. Acknowledgements The authors gratefully acknowledge the Laboratorio Nacional de Supercomputo del Sureste de Mexico, a member of the CONACYT network of national laboratories, for computer resources, technical advice, and support.

References 1. Ashihara H, Crozier A (2001) Caffeine: a well-known but little mentioned compound in plant science. Trends Plant Sci 6:407–413. https://doi.org/10.1016/S1360-1385(01)02055-6 2. Liguori A (1997) Absorption and subjective effects of caffeine from coffee, cola and capsules. Pharmacol Biochem Behav 58:721–726. https://doi.org/10.1016/S0091-3057(97)00003-8 3. Hashimoto T, He Z, Ma W-Y, Schmid PC, Bode AM, Yang CS, Dong Z (2004) Caffeine inhibits cell proliferation by G0/G1 phase arrest in JB6 cells. Can Res 64:3344–3349. https://doi.org/ 10.1158/0008-5472.CAN-03-3453 4. Hill GM (2011) Attenuation of cytotoxic natural product DNA intercalating agents by caffeine. Sci Pharm 79:729–747. https://doi.org/10.3797/scipharm.1107-19 5. Cortez D (2003) Caffeine inhibits checkpoint responses without inhibiting the AtaxiaTelangiectasia-mutated (ATM) and ATM- and Rad3-related (ATR) protein kinases. J Biol Chem 278:37139–37145. https://doi.org/10.1074/jbc.M307088200 6. Daniel R, Marusich E, Argyris E, Zhao RY, Skalka AM, Pomerantz RJ (2005) Caffeine inhibits human immunodeficiency virus type 1 transduction of nondividing cells. J Virol 79:2058–2065. https://doi.org/10.1128/JVI.79.4.2058-2065.2005 7. Guan L, Disney MD (2012) Recent advances in developing small molecules targeting RNA. ACS Chem Biol 7:73–86. https://doi.org/10.1021/cb200447r 8. Sheng J, Gan J, Huang Z (2013) Structure-based DNA-targeting strategies with small molecule ligands for drug discovery: targeting DNAs VIA structural investigations. Med Res Rev 33:1119–1173. https://doi.org/10.1002/med.21278 9. Tornaletti S, Russo P, Parodi S, Pedrini AM (1989) Studies on DNA binding of caffeine and derivatives: evidence of intercalation by DNA-unwinding experiments. Biochimica et Biophysica Acta (BBA)-Gene Structure and Expression 1007:112–115. https://doi.org/10. 1016/0167-4781(89)90138-3 10. Drew HR, Wing RM, TakanoT BC, Tanaka S, Itakura K, Dickerson RE (1981) Structure of a B-DNA dodecamer: conformation and dynamics. Proc Natl Acad Sci USA 78(4):2179–2183. https://doi.org/10.1073/pnas.78.4.2179 11. Lercher L, McDonough MA, El- AH, Thalhammer A, Kriaucionis S, Brown T, Schofield CJ (2014) Structural insights into how 5-hydroxymethylation influences transcription factor binding. Chem Commun 50:1794–1796. https://doi.org/10.1039/C3CC48151D 12. Dickerson RE, Drew HR (1981) Structure of a B-DNA dodecamer. II. Influence of base sequence on helix structure. J Mol Biol 149(4):761–86. https://doi.org/10.1016/0022-283 6(81)90357-0 13. Drew HR, Dickerson RE (1981) Structure of a B-DNA dodecamer. III. Geometry of hydration. J Mol Biol 151(3):535–556. https://doi.org/10.1016/0022-2836(81)90009-7. (25 Sept 1981)



An Adaptive Replacement Strategy LWIRR for Shared Last Level Cache L3 in Multi-core Processors Narottam Sahu, Banchhanidhi Dash, Prasant Kumar Pattnaik, and Anjan Bandyopadhyay

Abstract Multi-core processors from design companies such as Intel and AMD introduce shared cache memory architectures to improve performance and resource utilization. Most modern Intel and AMD processors use traditional replacement techniques such as LRU (least recently used) and pseudo-LRU to evict a cache line from the shared last-level L3 cache. The shared last-level L3 cache varies in capacity and associativity across state-of-the-art processors, and an adaptive, dynamic replacement technique is required for better utilization of the shared cache, since the older replacement techniques perform poorly at the shared last level under memory-intensive workloads with differing access patterns. In this manuscript we propose a replacement strategy, LWIRR, built on the access patterns of memory workloads from complex applications in the CPU-2017 and CPU-2006 benchmarks and simulated with our trace-driven multi-core simulator using a number of memory address traces for shared L3 cache configurations. We show that the proposed replacement algorithm outperforms LRU and that LWIRR delivers better cache hit rate, instructions per cycle, and execution time. Keywords Multi-core · L3 cache · LRU · Benchmark · Multilevel cache hierarchy · Execution time

N. Sahu · B. Dash · P. K. Pattnaik · A. Bandyopadhyay School of Computer Engineering, KIIT University, Bhubaneswar, India e-mail: [email protected] A. Bandyopadhyay (B) Kiit University, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_31


1 Introduction Cache replacement algorithms play a very important role in the overall architecture and processing of any processor. Cache memory improves performance by keeping the most recently used instructions and data in a small, fast memory that takes far fewer clock cycles to access than ordinary random access memory (main memory). When the cache is full, the replacement algorithm must select which cache line to discard to make space for a new one. Many cache replacement policies exist; depending on the system, one of them is chosen and implemented. The main purpose is efficient handling of data and of the processes that use it: the policy decides which data to remove from and add to the cache whenever required. When the desired information is present in the cache, this is called a cache hit, and the time taken to retrieve it is the hit latency. If the block is not in the cache, it must be transferred from main memory to the cache; this situation is called a cache miss. All types of cache misses (compulsory, conflict, and capacity) are influenced by the replacement algorithm. A good policy orders the cached blocks so that they are managed efficiently, which leads to proper cache optimization, i.e., a better cache hit rate, fewer cache misses, and lower hit latency. Multi-core processors are designed with innovative architectures that improve performance by adding processing power with low latency; more importantly, multi-core processors play a big role in high-performance computing.
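As a point of reference for the baseline policy discussed throughout this paper, the following is a minimal Python sketch of an LRU cache; it is an illustrative textbook implementation, not the simulator used by the authors, and the capacity and trace values are arbitrary placeholders.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a miss with a full cache, evict the least recently used line."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # block address -> present; ordered oldest to newest

    def access(self, block):
        if block in self.lines:
            self.lines.move_to_end(block)            # refresh recency on a hit
            return "hit"
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)           # evict the least recently used line
        self.lines[block] = True
        return "miss"

# Example: a short hypothetical trace against a 4-line cache
trace = [0x00, 0x03, 0x01, 0x02, 0x01, 0x00, 0x02, 0x00]
cache = LRUCache(capacity=4)
print([cache.access(b) for b in trace])
```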

2 Literature Review When a block must be replaced, it should be evicted from the private cache partitions efficiently [1]. Cache memory is fast memory used to bridge the speed gap between slow main memory and the fast processor, and it is less costly than registers. Whenever the CPU uses a piece of data, that data is copied into faster storage such as the cache [2]. When the processor tries to access the same data again, the system checks the cache first; if the data is in the cache the processor uses it from there, and if not, it must be brought from main memory and copied into the cache in the expectation that it will be needed again. There is a considerable gap between processor speed and memory access latency, so a great deal of effort has gone into reducing this gap; the work done to close it is hardware based, compiler based, and OS based [2]. The victim L3 cache of Cascade Lake SP and its advanced replacement policy have also received attention; that study of HPC performance draws on related work from several other papers in the same domain [3–6]. Another work presented a new family of replacement policies, pseudo-LIFO, for the last-level L3 cache. The replacement strategy is built around three


concepts: dead block prediction LIFO, probabilistic escape LIFO, which dynamically learns the most preferred eviction positions, and probabilistic counter LIFO; the probabilistic escape LIFO variant requires only a small percentage of extra storage and outperforms competing proposals on multi-threaded workloads [7–9]. An inclusive last-level cache wastes precious silicon state on cross-level replication of cache blocks and suffers bigger performance losses compared to an exclusive LLC. An exclusive LLC design is more complex, and an inclusive LLC can gather filtered access history, which is not possible in an exclusive design, mainly because a block is evicted from the LLC on a hit; as a result the popular LRU policy is ineffective there. Gaur et al. proposed a new replacement algorithm based on the observation that it is not necessary to fill every block into an exclusive LLC [10, 12]; this cannot be implemented in an inclusive cache because it would violate inclusion. They used insertion and bypass algorithms for the exclusive LLC and obtained an improvement in performance. An exclusive LLC allocates a block on an eviction from the inner-level cache, and the block is removed when a hit on it is recalled by the inner-level cache. The victim selection and LLC bypass algorithms used in exclusive designs improve performance by allocating LLC capacity only to blocks with favourable relative reuse distances; with selective bypassing and careful age assignment this improves the average IPC of 97 single-thread traces by 3.4% over LRU [5]. Finally, it has been found that LRU performs poorly with weak-locality workloads, and a new policy, low inter-reference recency set (LIRS), was proposed for the last-level cache to improve on LRU [11, 13].

3 Proposed Work This paper proposes an adaptive dynamic cache replacement algorithm, LWIRR (Low Weight Inter-Reference Recency), that assigns a weight to each cache block based on its inter-reference recency, frequency, and recency values at the time of replacement from the shared last-level L3 cache. Because CPU latency needs to be reduced, LWIRR also uses the inter-reference recency to partition the cache blocks, dynamically adjusting the sizes of a high-priority segment (HPS) and a low-priority segment (LPS) based on weight_LWIRR; blocks with a small weight_LWIRR are placed in the low-priority segment, and the corresponding cache line is the one to be evicted. Table 1 illustrates how a cache line is selected for eviction by the proposed algorithm LWIRR: a tick (✓) means that the cache line of that row is referenced at that time in a sequence of references 1 to 10, and the Recency_cacheline, IRR_cacheline, and Frequency_cacheline columns give the corresponding values at time 10 for each cache line. Under LRU, cache line line_0004 would be replaced by a new block because its recency value is 0, whereas under the proposed LWIRR approach the evicted line is line_0000, selected by the value of Weight_cacheline; this eviction method based on inter-reference recency gives a better cache hit rate than LRU for a sequence of memory references. Section 3.1 explains the proposed adaptive algorithm LWIRR; a sketch of the bookkeeping behind Table 1 is given below.
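To make the counters in Table 1 concrete, the following Python sketch tracks recency, reference frequency, and inter-reference recency (IRR, here the number of other distinct lines accessed between two consecutive references to a line) for a stream of cache-line references. It is an illustrative reading of the paper's definitions, not the authors' implementation; the exact recency convention used in Table 1 is not fully specified, so the ranking below (least recently touched line gets the lowest value) is an assumption, and the trace is hypothetical.

```python
def track_counters(trace):
    """Track recency, frequency and inter-reference recency (IRR) per cache line."""
    last_time = {}    # line -> time of most recent reference (1-based)
    frequency = {}    # line -> number of references seen so far
    irr = {}          # line -> distinct other lines seen between its last two references
    for t, line in enumerate(trace, start=1):
        if line in last_time:
            # distinct other lines referenced since this line's previous reference
            between = {x for x in trace[last_time[line]:t - 1] if x != line}
            irr[line] = len(between)
        frequency[line] = frequency.get(line, 0) + 1
        last_time[line] = t
    # assumed recency convention: rank lines by last reference time, oldest = 0
    order = sorted(last_time, key=last_time.get)
    recency = {line: rank for rank, line in enumerate(order)}
    return recency, irr, frequency

trace = ["line_0000", "line_0003", "line_0001", "line_0002",
         "line_0001", "line_0000", "line_0002", "line_0000"]
print(track_counters(trace))
```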



Table 1 A sample showing the procedure of replacing a cache line using the proposed LWIRR policy

Reference sequence (✓ = referenced at that time), times 1 to 10: line_0000, line_0003, line_0001, line_0002, line_0001, line_0000, line_0002, line_0000, …

CACHELINE   Recency_cacheline   IRR_cacheline   Frequency_cacheline   Weight_cacheline
line_0000   1                   1               3                     0   (block to be replaced using LWIRR)
line_0001   3                   1               2                     2
line_0002   4                   –               1                     0
line_0003   2                   3               2                     4
line_0004   0                   –               1                     1   (✓ block to be replaced with LRU)



3.1 Proposed Adaptive Replacement Technique LWIRR The proposed algorithm, shown in Table 2, gives the procedure for replacing a block from the L3 cache and also covers eviction from L1 and L2 using the traditional LRU algorithm. The algorithm updates the weight of each cache block in the L1, L2, and L3 caches, and the decision to evict an existing block from any level of cache is taken while maintaining coherency across the levels with respect to all the cores present in the multi-core processor.

3.2 Working of LWIRR Algorithm In the algorithm, Recency_i, Frequency_i, and Inter_ref_recency_Irr_i represent the recency, the frequency, and the number of other unique cache lines accessed between two consecutive references to cache line 'i', respectively. buffer_counter_L1(Recency_i, Frequency_i, Inter_ref_recency_Irr_i), buffer_counter_L2(Recency_i, Frequency_i, Inter_ref_recency_Irr_i), and buffer_counter_L3(Recency_i, Frequency_i, Inter_ref_recency_Irr_i) are functions that update the frequency, recency, and Inter_ref_recency_Irr_i of cache line 'i' in L1, L2, and L3, respectively. LWIRR(L3) is the proposed adaptive replacement routine that finds the cache line in L3 to be evicted. LRU(L1) finds the cache line in L1, selected by least recency, that is to be evicted from the L1 cache. COHERENT_L1L2(L1, L2) preserves coherency between cache levels L1 and L2 after a block is removed from either of them. The processor requests a word from a referenced memory location. The tag field of the 32-bit address is compared with the tag field of the L1 cache; if there is a tag match in L1, a hit occurs at L1 and the word is transferred to the processor. On a miss in L1 a search in L2 is performed, and on a further miss in L2 the search moves to L3; finally, if the word is not available in L3, an access to main memory is performed and the referenced word is delivered to the processor. We propose an adaptive replacement strategy for L3, which is shared by all cores, so that cache line eviction in L3 depends on the line's weight, while LRU is used to maintain coherency between L1 and L2. The proposed technique in L3 calculates the weight of each cache line from its recency of reference, frequency of reference, and inter-reference recency, as expressed by the formula in Eq. (1).


Fig. 2 Replacement decision at the different cache levels L1, L2, L3

Weight_cacheline_i = (Frequency_cacheline_i / IRR_cacheline_i) * Recency_cacheline_i    (1)

where Weight_cacheline_i, Frequency_cacheline_i, IRR_cacheline_i, and Recency_cacheline_i represent the weight, frequency, inter-reference recency, and recency of cache line 'i', respectively. Figure 2 shows how the decision is made for the selection of a cache line at the different cache levels in one core (core0) of a quad-core processor.
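A minimal sketch of the weight computation in Eq. (1) and the resulting victim selection is given below. The grouping (Frequency / IRR) * Recency, the handling of a zero IRR, and the per-line counter values are assumptions for illustration; the authors' simulator may differ in these details.

```python
def lwirr_weight(frequency, irr, recency):
    """Eq. (1): weight = (frequency / IRR) * recency; assumes (F/IRR)*R grouping."""
    if irr == 0:
        irr = 1  # assumption: avoid division by zero when a line has no IRR yet
    return (frequency / irr) * recency

def select_victim(lines):
    """Pick the cache line with the lowest LWIRR weight (low-priority segment)."""
    return min(lines, key=lambda name: lwirr_weight(*lines[name]))

# Hypothetical per-line (frequency, IRR, recency) counters at replacement time
lines = {
    "line_0000": (3, 1, 1),
    "line_0001": (2, 1, 3),
    "line_0003": (2, 3, 2),
}
print(select_victim(lines))   # the line with the smallest weight is evicted
```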

4 Simulation and Testing A generic block diagram of a multi-core processor with an innovative architectural component is shown in Fig. 3. The model has four independent cores with inclusive private L1 and L2 caches and a shared last-level L3 cache that can be accessed by each core. We have developed a trace-driven multi-core simulator aimed at gaining insight into the effectiveness of the shared L3 cache in a multi-core cache memory subsystem. The simulator allows a programmer to configure different memory configurations and different replacement strategies and to run them over the selected memory workloads. Table 3 depicts the simulation environment on which we have tested the different cache configurations.
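As an illustration of what a trace-driven cache simulator of this kind does, the sketch below replays a list of memory addresses against a single set-associative cache level and reports the hit rate for a pluggable replacement policy. It is a simplified stand-in for the authors' multi-core, three-level simulator; the block size, associativity, and trace are placeholder values.

```python
def simulate(trace, num_sets, ways, block_size, choose_victim):
    """Replay an address trace against one set-associative cache level."""
    sets = [dict() for _ in range(num_sets)]   # per set: tag -> last access time
    hits = 0
    for time, addr in enumerate(trace):
        block = addr // block_size
        index, tag = block % num_sets, block // num_sets
        lines = sets[index]
        if tag in lines:
            hits += 1
        elif len(lines) >= ways:
            del lines[choose_victim(lines)]    # replacement policy picks the victim
        lines[tag] = time                      # record recency of this tag
    return hits / len(trace)

lru = lambda lines: min(lines, key=lines.get)  # oldest timestamp = LRU victim
trace = [0x1000, 0x1040, 0x2000, 0x1000, 0x3000, 0x1040, 0x2000, 0x1000]
print(simulate(trace, num_sets=4, ways=2, block_size=64, choose_victim=lru))
```

An LWIRR-style policy could be plugged in by replacing the `choose_victim` callback with one that ranks lines by the weight of Eq. (1) instead of by timestamp.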

5 Performance Analysis We simulated the proposed LWIRR algorithm and the existing LRU algorithm with different cache configurations and collected statistics on cache hit rate and execution time; the results show that the proposed LWIRR algorithm outperforms the traditional algorithm. For the analysis of


Fig. 3 Processor with private L1 and L2 caches and an exclusive shared last-level L3 cache, with 4 cores

Table 2 Algorithm LWIRR for replacement from the L3 cache

Algorithm: LWIRR for replacement from L3 cache
Start
Step 1. The processor generates a logical address, which is translated into a 32-bit physical address from which the requested word is to be transferred to the processor.
Step 2. The 32-bit physical address is divided into tag_bit, set_bit, and offset_bit fields.
Step 3. The bits under tag_bit are compared with the tag field of the lower-level cache L1.
Step 4. For a cache HIT:
  If (reference_word found in any cache line of L1) invoke a function to update recency, frequency, and IRR: buffer_counter_L1(Recency_i, Frequency_i, Inter_ref_recency_Irr_i)
  Else if (reference_word found in any cache line of L2) invoke buffer_counter_L2(Recency_i, Frequency_i, Inter_ref_recency_Irr_i)
  Else if (reference_word found in any cache line of L3) invoke buffer_counter_L3(Recency_i, Frequency_i, Inter_ref_recency_Irr_i)
  Else
Step 5. For a MISS in cache:
  i. Selection of the cache line for eviction in L3: call LWIRR(L3)
  ii. Selection of the cache line for eviction in L1: call LRU(L1)
  iii. Selection of the cache line for eviction in L2: call COHERENT_L1L2(L1, L2)
  iv. Update buffer_counter_L1(Recency_i, Frequency_i, Inter_ref_recency_Irr_i)
  v. Update buffer_counter_L2(Recency_i, Frequency_i, Inter_ref_recency_Irr_i)
  vi. Update buffer_counter_L3(Recency_i, Frequency_i, Inter_ref_recency_Irr_i)
End


LWIRR at the L3 cache, we selected 1 K memory references from the CPU 2017/2006 benchmarks, such as perlbenchmark, gcc, crafty, and mcf. All the caches are set-associative with 8- or 16-way associativity. The simulation was executed with different sizes of L1, L2, and L3, and the results obtained from the simulator were analyzed.

5.1 Hit Rate Analysis
From Table 4 and the graph in Fig. 4, it is clear that, with varying L3 cache configurations, our proposed policy LWIRR gives an improvement over LRU in the L3 cache of 1.487% for perlbenchmark, 0.595% for swim, 0.114% for art, and 0.096% for MCF.

Table 3 Computer simulation and testing domain

PROCESSOR: Intel, [email protected] GHZ
Tested processor configuration: No. of cores 4; No. of threads 2 * 4 cores = 8; Bus speed 1.6 GHz; Instruction set 64 bit; Average power use with a highly complex workload 15 W
Main memory: Word length 64 bits; Words per block 256; Block size 2 KB; Main memory size DDR4-2400, 8 GB; Memory channels 2; Maximum memory bandwidth 37.5 GB/s
L1 cache: No. of cache lines 128/256/512; Cache size 256 KB; Set-associativity 16
L2 cache (inclusive to L1): No. of blocks 512/1024; Cache size 1 MB; Set-associativity 16
L3 cache (shared last-level cache, exclusive to L1 and L2): No. of blocks 2048/1024; Cache size 8 MB; Set-associativity 16


Fig. 4 Hit rate analysis of LWIRR versus LRU

5.2 Execution Time Analysis This section describes the simulation time of the trace-driven simulator with LRU and with our proposed algorithm. Table 5 gives the execution time for different benchmarks with varying cache configuration for 1 K memory references; from the results it is found that, with the varying L3 cache setups, the execution time at the L3 cache is reduced by our replacement policy compared with LRU. Figures 5 and 6 show the graphical analysis of both LRU and LWIRR with varying L3 size and their relative execution times.

Table 4 Hit rate of LRU versus LWIRR and percentage gain of LWIRR with varying cache size in L1, L2, and L3, with a workload of 1 K memory references from the access pattern of the CPU 2006/CPU 2017 benchmarks

L1       L2       L3       Benchmark       Hit rate, LRU, in L3   Hit rate, LWIRR, in L3 (proposed)   % gain using LWIRR at L3 over LRU
32 KB    64 KB    256 KB   Swim            87.232                 87.827                              0.595
                  512 KB   Art             89.872                 89.986                              0.114
                  1 MB     gcc             91.348                 91.568                              0.22
128 KB   512 KB            MCF             94.786                 94.882                              0.096
256 KB   1 MB              Crafty          96.346                 95.389                              -0.957
512 KB   8 MB              Perlbenchmark   97.389                 98.876                              1.487


Table 5 Execution time for different benchmarks with varying cache configuration for 1 K memory references

L1       L2       L3       Benchmark (CPU_2006/2017)   Execution time (ms), LRU   Execution time (ms), LWIRR   Speedup_LWIRR
32 KB    64 KB    256 KB   Swim                        0.293                      0.285                        0.972696246
                  512 KB   Art                         0.438                      0.423                        0.965753425
                  1 MB     gcc                         0.399                      0.426                        1.067669173
128 KB   512 KB            MCF                         0.242                      0.257                        1.061983471
256 KB   1 MB              Crafty                      0.326                      0.393                        1.205521472
512 KB   8 MB              Perlbenchmark               0.353                      0.277                        0.78470255

Fig. 5 Execution time analysis of LRU versus LWIRR with different benchmarks and varying cache size

6 Conclusion and Future Work We have focused on the shared last-level L3 cache of a multi-core processor and used an adaptive replacement strategy that outperforms LRU in hit rate and execution time across varying workloads and access patterns. The simulation results validate that the proposed replacement policy performs better on parameters such as hit rate and execution time with different benchmarks. Further, we will focus on comparison with other advanced replacement algorithms so that the approach can be accepted


Fig. 6 Execution time analysis of LRU versus LWIRR with respective benchmark with increasing L3 size

globally with more significance, to make better utilization of the shared L3 cache by the different cores in a heterogeneous multi-core processor.

References
1. Shukla S, Chaudhuri M (2017) Tiny directory: efficient shared memory in many-core systems with ultra-low-overhead coherence tracking. In: 2017 IEEE international symposium on high performance computer architecture (HPCA). IEEE, pp 205–216
2. Javaid Q, Zafar A, Awais M, Shah MA (2017) Cache memory: an analysis on replacement algorithms and optimization techniques. Mehran Univ Res J Eng Technol 36(4):831–840
3. Alappat CL, Hofmann J, Hager G, Fehske H, Bishop AR, Wellein G (2020) Understanding HPC benchmark performance on Intel Broadwell and Cascade Lake processors. In: International conference on high performance computing. Springer, Cham, pp 412–433
4. Chaudhuri M (2009) Pseudo-LIFO: the foundation of a new family of replacement policies for last-level caches. In: 2009 42nd annual IEEE/ACM international symposium on microarchitecture (MICRO). IEEE, pp 401–412
5. Gaur J, Chaudhuri M, Subramoney S (2011) Bypass and insertion algorithms for exclusive last-level caches. In: Proceedings of the 38th annual international symposium on computer architecture, pp 81–92
6. Jiang S, Zhang X (2002) LIRS: an efficient low inter-reference recency set replacement policy to improve buffer cache performance. ACM SIGMETRICS Perform Evaluat Rev 30(1):31–42
7. Zebchuk J, Makineni S, Newell D (2008) Re-examining cache replacement policies. In: 2008 IEEE international conference on computer design. IEEE, pp 671–678
8. Navarro-Torres A, Alastruey-Benedé J, Ibáñez-Marín P, Viñals-Yúfera V (2019) Memory hierarchy characterization of SPEC CPU2006 and SPEC CPU2017 on the Intel Xeon Skylake-SP. PLoS ONE 14(8):e0220135
9. Beckmann N, Sanchez D (2016) Modeling cache performance beyond LRU. In: 2016 IEEE international symposium on high performance computer architecture (HPCA). IEEE, pp 225–236


10. Xiao X, Wu ZC, Chou KC (2011) A multi-label classifier for predicting the subcellular localization of gram-negative bacterial proteins with both single and multiple sites. PLoS ONE 6(6):e20592
11. Shukla S, Chaudhuri M (2017) Sharing-aware efficient private caching in many-core server processors. In: 2017 IEEE international conference on computer design (ICCD). IEEE, pp 485–492
12. Kim S, Fayazi M, Daftardar A, Chen KY, Tan J, Pal S, Kim HS (2022) Versa: a 36-core systolic multiprocessor with dynamically reconfigurable interconnect and memory. IEEE J Solid-State Circuits 57(4):986–998
13. Axtmann M, Witt S, Ferizovic D, Sanders P (2022) Engineering in-place (shared-memory) sorting algorithms. ACM Trans Parallel Comput 9(1):1–62

ZnO Nanoparticles Tagged Drug Delivery System S. Harinipriya and Kaushik A. Palicha

Abstract ZnO tagged with a neurotransmitter and a blood thinner (citicholine and ecospirin), synthesized by a wet chemical method, was studied for its structural, optical, and electrochemical properties. The in-vitro analysis of the drug delivery system was conducted via parameter estimation from tests such as complete blood count, platelets, leucocytes, and prothrombin time to determine the effect on the blood of a stroke-affected patient. The results indicate that the tagged drug delivery system studied here can be a viable option for targeted neurotransmitter and blood thinner delivery with a Zn supplement to improve brain function in stroke-affected individuals. Keywords ZnO nanoparticles · Drug delivery · Neurotransmitters

1 Introduction Biomedical applications of ZnO nanomaterials have recently been explored in the field of cancer therapy [1, 2]. ZnO-based nanoparticles have been considered promising anticancer agents and have been used in photodynamic therapy [1, 2]. Specific properties and characteristics of ZnO, such as biocompatibility, inherent toxicity against cancerous cells, the ability to induce intracellular ROS generation, ease of functionalization, and easy synthesis, make it an appealing candidate for biomedical applications. Several research groups have reported chemotherapeutics based on ZnO [2–5]. Drug loading, release mechanisms, and cytotoxicity of various ZnO-drug composites have also been studied [6–9]. In the present investigation, polycrystalline ZnO is studied as a drug delivery system for a neurotransmitter and tested in blood samples. Future studies in this line would involve in-vitro MTT and TUNEL assays to determine the cytotoxicity and cell viability in human brain cells. Subsequent to the in-vitro analysis will be testing of the drug delivery system S. Harinipriya (B) · K. A. Palicha RamCharan Company Private Limited -Entity1, Chennai, Tamil Nadu 600 002, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_32


Table 1 The physiochemical properties of ZnO

Property                            Unit        Value
Percentage (as ZnO)                 %           93 ± 2
Total soluble salts                 % max       2.0
Sieve residue 325 mesh              % max       Nil
Bulk density                        g/cc        0.4–0.6
Surface area                        m2/g        40–50
Moisture                            % max       1.0
Loss on ignition                    % max       5.0
pH value                            –           6.5–8.5
Lead (Pb)                           ppm max     20
Cadmium (Cd)                        ppm max     3
Iron (Fe)                           ppm max     50
Copper (Cu)                         ppm max     Traces
Manganese                           ppm max     Traces
Free zinc (anthraquinone test)      –           Absent

in rabbits or rats to determine its feasibility as a drug to be administered in a targeted fashion to the brain cells of humans to improve brain function and motor skills in specially abled children and adults and in stroke patients with motor dysfunction.

Physiochemical Properties of ZnO
Table 1 shows the physiochemical properties of ZnO.

Structural Analysis of ZnO Tagged Citicholine and Ecosprin I (Pharmaceutical Formulation)
Figure 1 shows the XRD analysis results of the pharmaceutical composition of the neurotransmitter citicholine and the blood thinner. The XRD peaks agree with the literature for citicholine and ecospirin, as provided in Tables 2 and 3. The pharmaceutical formulation of the citicholine capsules also contained a small dosage of ecospirin inside; its structural characterization is provided in Table 3.

Optical Analysis of ZnO Tagged Citicholine (Pharmaceutical Formulation)
FTIR Analysis
The FTIR analysis of the citicholine pharmaceutical formulation tagged to ZnO is provided in Figs. 2, 3 and 4 and Table 4.

UV–Visible Spectral Analysis of Citicholine Pharmaceutical Formulation
Figure 5 shows the visible spectra of the citicholine pharmaceutical formulations, with the following details.


Fig. 1 XRD analysis of the pharmaceutical composition of the neurotransmitter citicholine and blood thinner

Table 2 The XRD peaks and the corresponding geometry of the pharmaceutical formulation of citicholine

S. no   Citimet peaks (2θ)   Remarks
1       16.3899              Unidentified peaks, may be colorant (rows 1–2)
2       18.1244
3       19.8589
4       19.9167
5       21.8825
6       23.2990
7       24.8601
8       25.9297
9       27.3173
10      29.2541
11      29.3698
12      31.6246
13      34.5733              Corresponds to 101 peak of TiO2
14      37.5509              Corresponds to 004 peak of TiO2
15      46.5703              Corresponds to 200 peak of TiO2
16      55.9366              Corresponds to 211 peak of TiO2

Note: the remaining peaks (rows 3–12) correspond to the highly sharp crystalline peaks reported for citicholine at 20.5080, 21.3300, 22.2200 and 22.3660, and to the 101 and 110 peaks of TiO2.

Table 3 The XRD analysis of the ecospirin tablet in the pharmaceutical formulation of citicholine

S. no   Small tablet inside citimet, peaks (2θ)   Remarks
1       7.9487                                    Corresponds to 8.78 peak of Clopidogrel (rows 1–2)
2       8.0932                                    Corresponds to 8.78 peak of Clopidogrel
3       9.7988                                    Corresponds to 9.62 peak of Clopidogrel
4       15.8117                                   Corresponds to 15.59 peak of Aspirin (rows 4–5)
5       15.8696                                   Corresponds to 15.59 peak of Aspirin
6       20.9574                                   Corresponds to 20.64 peak of Aspirin
7       22.7208                                   Corresponds to 22.61 peak of Aspirin
8       22.8943                                   Corresponds to 23.00 peak of Clopidogrel
9       23.4725                                   Corresponds to 23.20 peak of Aspirin and 23.26 peak of Clopidogrel
10      27.1727                                   Corresponds to 27.08 peak of Clopidogrel
11      28.9651                                   Corresponds to 110 peak of TiO2
12      32.9833                                   Corresponds to 32.73 peak of Clopidogrel

Fig. 2 FTIR spectrum of small tablet inside the citicholine capsule with light orange coating


Fig. 3 FTIR spectrum of capsule outer of citicholine (Red and Green)

Fig. 4 FTIR spectrum of white powder inside citicholine capsule

Citicholine: peak at 283 nm, due to the diphosphocholine of citicoline. Ecosprin (I), inside the citicholine formulation: broad peak from 240 to 290 nm. This is attributed to the presence of (i) aspirin, (ii) clopidogrel, and (iii) atorvastatin: (i) aspirin shows a peak at 265 nm, (ii) clopidogrel shows a peak at 254 nm,


Table 4 The fingerprint functional group identification and the chemical composition by FTIR

S no 1 – Small tablet inside the citicholine capsule with light orange coating
Peaks obtained (WN): broad peaks with multiplets at 3021.91, 2894.63, 2698.89, 2544.61, 1753.94, 1691.27, 1605.45, 1457.92, 1370.18, 1307.50, 1167.94, 1093.44, 1012.45, 916.986, 833.098, 754.995, 666.285, 598.789, 514.901, 422.334
Remarks (WN – AFG): 3450 – OH carboxylic; 2954, 3077, 3119 – chlorophenyl CH stretch; 2464 – C–S–C stretch; 1752 – C=O carboxylic; 1474, 1496 – chlorophenyl ring stretch; 1439, 1383 – pyridine methylene wag; 1188 – C–O carboxylic acids; 1298, 1275 – methylene twist; 1221 – chlorophenyl C–Cl stretch and bends; 1154 – pyridine ring stretch; 1068, 1028, 1014, 993 – pyridine-methylene rock; 749, 724, 697 – chlorophenyl spatial bend

S no 2 – Capsule outer of citicholine (red and green)
Peaks obtained (WN): 3444.24, 2924.52, 2854.13, 2363.34, 2116.49, 1648.84, 1456.96, 1161.90, 1025.94, 696.177, 465.725, 436.798
Remarks (WN – AFG): 3444.24 – NH; 2924.52, 2854.13 – C–H stretching, aliphatic 1° amine; 1395.28 – C–N; 2116.49 – C=O saturated acyclic; 1648.84 – C=N; 1456.96 – C=O saturated acyclic

S no 3 – White powder inside the citicholine capsule
Peaks obtained (WN): 3369.03, 2899.45, 1652.70, 1462.74, 1164.79, 1115.62, 1055.84, 1029.80, 989.304, 872.631, 710.64, 614.217, 434.869
Remarks: presence of three functional groups, a benzene ring (aromatic group), a carboxylic acid (COOH) group, and an ester (R–C=O–O–R) group. A very broad peak at 3100 to 3300 cm−1 proves the presence of the COOH group. The compound has C=O (acid), C=O (ester), C–O (acid), C–O (ester), O–H, –CH3, benzene ring, and H3PO4 functional groups, indicating the presence of cytidine diphosphate and choline moieties in the pharmaceutical composition of citicholine

Legend: SN, serial number; DND, drug name or description; WN, wave number in cm−1; AFG, assigned functional group


Fig. 5 UV–visible spectra of Citicholine pharmaceutical formulations

and (iii) atorvastatin shows a maximum at 237 nm and a minimum at 261 nm. So all the peaks combine and show a broad spectrum around 240 to 300 nm. Ecosprin (I), inside the citimet capsule: the peak at 396 nm is attributed to the presence of TiO2 as pigment.

Electrochemical Analysis of ZnO Tagged Citicholine Pharmaceutical Formulation

2 Cyclic Voltammetric Analysis of ZnO-Citicholine (Specifically Ecospirin Inside the Capsule of Citicholine) Figure 6 shows the cyclic voltammogram of ecospirin (pharmaceutical composition) tagged to ZnO and coated on a Pt working electrode. The pharmaceutical combination of ecospirin is a mixture of aspirin 75 mg, atorvastatin 10 mg, and clopidogrel, with TiO2 as pigment and an orange dye. From Table 5, the values of redox potential and current at a 100 mV s−1 scan rate (cf. Fig. 6) agree satisfactorily with the literature [10]. In the literature, the oxidation potential of aspirin occurs at 0.7 V on edge-plane pyrolytic graphite modified with graphene, with respect to an Ag/AgCl reference electrode. In the present investigation the oxidation peak is observed at 0.65 V versus SCE for ZnO tagged with ecospirin drop


Fig. 6 Cyclic Voltammogram of Ecospirin (pharmaceutical composition) tagged to ZnO coated on Pt working electrode, Saturated Calomel reference electrode and Pt wire counter electrode, in 1 M KCl electrolyte at different scan rates from 1 to 200 mVs-1 in the potential window of –1.2 to 1.2 V

cast on the Pt working electrode. The shift of 50 mV from the literature value may be attributed to the change in the reference electrode from Ag/AgCl (literature) to SCE (present studies). It could also be due to the interaction between the drug carrier (bulk ZnO) and the drug (ecospirin). Two small humps were noticed at 0.8 and 1.05 V in the forward scan. The peak at 0.8 V could be due to oxidation of atorvastatin; the literature reports that atorvastatin oxidizes on a glassy carbon electrode at 1.004 V (vs Ag/AgCl) at 100 mV/s [11, 12]. The shift of the oxidation potential of atorvastatin to the lower side by 204 mV in the present studies could be attributed to (i) the change in working electrode from GCE to Pt coated with ZnO-Ecospirin via the drop casting method, (ii) the change in reference electrode from Ag/AgCl to SCE, and (iii) weak interaction between ZnO and ecospirin. The oxidation peak observed at 1.05 V accounts for the clopidogrel base present in the pharmaceutical formulation; the observed oxidation potential of clopidogrel is identical to the literature value [13]. Based on the above observations in the cyclic voltammogram of ZnO-Ecospirin drop cast on the Pt electrode, the plausible mechanism for the degradation of the drug can be outlined as follows:


Table 5 Comparison of the scan rate, peak potential, and peak current for Ecospirin 75 at different scan rates (SR, mV s−1)

Sl. no   SR    Scan      Voltage (V)                                                           Current (mA)
1        1     Forward   Peak @ −0.2; Hump @ 0.6                                               −0.6; −0.5
               Reverse   Hump @ −0.6                                                           −0.4
2        5     Forward   Peak @ −0.01; Hump @ 0.15; Hump @ 0.75                                −0.3; −0.5; −0.4
               Reverse   Hump @ −0.5                                                           −0.6
3        10    Forward   Peak @ 0.1; Hump @ −0.9; Hump @ 0.25; Hump @ 0.9                      −0.2; −0.7; −0.2; −0.4
               Reverse   Hump @ 0.9; Hump @ −0.6; Hump @ −0.9                                  −0.5; −0.7; −0.8
4        25    Forward   Peak @ 0.2; Hump @ 0.99; Hump @ 0.4; Hump @ 0.5; Hump @ 1.05          −0.2; −0.1; −0.3; −0.15; −0.2
               Reverse   Hump @ −0.5; Peak @ −0.9; Hump @ −1.05                                −0.6; −0.8; −0.9
5        50    Forward   Peak @ 0.4; Hump @ −1.05; Hump @ −0.6; Hump @ 0.49; Hump @ 0.65       0.3; −0.3; −0.7; 0.1; −0.2
               Reverse   Hump @ −0.6; Hump @ −1.05                                             −1.0; −1.1
6        75    Forward   Peak @ 0.49; Hump @ 0.8; Hump @ 1.0; Hump @ 0.5; Hump @ 1.05          0.4; 0.05; −0.3; 0.2; −0.3
               Reverse   Hump @ −0.7; Hump @ −1.05                                             −0.4; −1.1
7        100   Forward   Peak @ 0.65; Hump @ 0.8; Hump @ 1.05                                  0.45; 0.25; −0.10
               Reverse   Hump @ −0.71; Hump @ −1.01                                            −1.15; −0.9
8        125   Forward   Small Peak @ −0.75; Hump @ −1.00; Peak @ 0.6; Hump @ 0.8; Hump @ 1.05 0.5; 0.2; 0.2; −1.3; −1.35
               Reverse   Small Peak @ −0.79; Hump @ −1.05                                      −0.8; −1.25
9        150   Forward   Peak @ 0.6; Hump @ 0.9; Hump @ −0.75                                  0.43; 0.22; −0.3
               Reverse   Hump @ 0.99; Hump @ −0.5                                              −1.3; −1.1
10       175   Forward   Peak @ 0.65; Hump @ −1.01; Hump @ 0.92                                0.6; −0.8; 0.5
               Reverse   Hump @ 0.89; Peak @ −0.9; Hump @ −1.15                                −0.1; −1.3; −1.4
11       200   Forward   Peak @ 0.75; Hump @ −1.05; Hump @ −0.75; Hump @ 0.98                  0.7; −0.85; −0.75; 0.4
               Reverse   Hump @ 0.8; Hump @ −0.75; Hump @ −1.15                                −0.2; −1.35; −1.45

2.1 Aspirin Hydrolysis into Salicylic Acid and Acetic Acid

The peak in the forward scan at 0.65 V is due to the formation of salicylic acid by hydrolysis of the acetyl salicylic acid tagged with ZnO and drop cast on the Pt electrode.

2.2 Atorvastatin Oxidation The two small humps noticed at 0.8 and 1.05 V in the forward scan of the cyclic voltammogram of ecospirin denote oxidation via the removal of two electrons from atorvastatin, with subsequent hydrolysis leading to the formation of 5-(4-fluorophenyl)-2-isopropyl-4-phenyl-1H-pyrrole-3-carboxylic acid at 0.8 V and 3,5-dihydroxy-7-oxo-heptanoic acid at 1.05 V.


2.3 Clopidogrel Oxidation

The oxidation peak observed at 1.05 V also accounts for the hydrolysis of the clopidogrel base into acetaldehyde and methyl (+)-(S)-α-(o-chlorophenyl)-6,7-dihydrothieno[3,2-c]pyridine-5(4H)-ol.


3 Cyclic Voltammetric Analysis of ZnO-Citicholine Pharmaceutical Formulation (White Crystalline Powder Inside the Capsule) From Table 6, the values of redox potential and current at a 100 mV s−1 scan rate (cf. Fig. 7) agree satisfactorily with the literature [14]. In the literature, the oxidation potential of citicholine occurs at 0.62 V on a Ni–Al LDHs/CD modified glassy carbon electrode (GCE), with respect to an Ag/AgCl reference electrode in 1 M KOH electrolyte. In the present investigation the oxidation peak is observed at 0.43 V versus SCE for ZnO tagged with citicholine drop cast on the Pt working electrode. The shift of 190 mV from the literature value may be attributed to the change in the reference electrode from Ag/AgCl (literature) to SCE (present studies). It could also be due to the interaction between the drug carrier (bulk ZnO) and the drug (citicholine).

Table 6 Comparison of the scan rate, peak potential, and peak current for Citicholine at different scan rates (SR, mV s−1)

Sl. no   SR    Voltage (V)                                        Current (mA)
1        1     Forward: Peak @ −0.1; Reverse: Peak @ −0.3         0.1; −0.1
2        5     Forward: Peak @ −0.03; Reverse: Peak @ −0.35       0.3; −0.1
3        10    Forward: Peak @ 0.1; Reverse: Peak @ 0.39          0.01; −0.15
4        25    Forward: Peak @ 0.2; Reverse: Peak @ −0.35         0.5; −0.2
5        50    Forward: Peak @ 0.27; Reverse: Peak @ −0.41        0.7; −0.2
6        75    Forward: Peak @ 0.40; Reverse: Peak @ −0.49        0.75; −0.4
7        100   Forward: Peak @ 0.42; Reverse: Peak @ −0.51        0.88; −0.42
8        125   Forward: Peak @ 0.49; Reverse: Peak @ −0.55        0.95; −0.5
9        150   Forward: Peak @ 0.5; Reverse: Peak @ 0.57          1.00; −0.58
10       175   Forward: Peak @ 0.52; Reverse: Peak @ −0.61        1.10; −0.6
11       200   Forward: Peak @ 0.55; Reverse: Peak @ −0.62        1.15; −0.65


Fig. 7 Cyclic Voltammogram of Citicholine (pharmaceutical composition) tagged to ZnO coated on Pt working electrode, Saturated Calomel reference electrode and Pt wire counter electrode, in 1 M KCl electrolyte at different scan rates from 1 to 200 mVs−1 in the potential window of −1.2 to 1.2 V. The pharmaceutical combination of citimet is crystalline acetyl choline

The oxidation peak observed at 0.43 V also accounts for the hydrolysis of the citicholine base into cytidine, phosphoric acid, and choline. The presence of choline is manifested at 0.43 V in the cyclic voltammogram of citimet tagged to ZnO on the Pt electrode versus SCE. Phosphoric acid participates in the hydrogen evolution reaction on the electrode


surface; due to the dissolved H2 gas in the solution, a peak shift and an increase in peak current with increasing scan rate are noticed. As the hydrolysis products of citicholine are cytidine, phosphoric acid, and choline (a neurotransmitter), the intake of citicholine would increase neural activities, with improvements in memory, cognitive effects, response times, and brain function in patients with mild and severe autism. Tagging the drug with ZnO will provide localised drug delivery as well as a good Zn supplement to the system to improve the immunity of the body.

4 Cyclic Voltammetric Analysis of ZnO-Citicholine Along with the Ecospirin Tablet Inside From Table 7, the values of redox potential and current at a 100 mV s−1 scan rate (cf. Fig. 8) agree satisfactorily with the literature [14] for the citicholine formulation. In the present investigation an oxidation peak is observed at 0.38 V versus SCE for ZnO tagged with citicholine, along with the ecospirin tablet inside the capsule, drop cast on the Pt working electrode. The shift of 240 mV from the literature value may be attributed to the change in the reference electrode from Ag/AgCl (literature) to SCE (present studies). It could also be due to the interaction between the drug carrier (bulk ZnO) and the drugs (ecospirin and citicholine). Although the reduction peaks of ecospirin are noticed in the reverse scan, they are completely suppressed in the forward scan; the absence of an oxidation peak of any component of ecospirin in the forward scan could be attributed to the higher concentration of citimet in the tagged composition. The amounts of citicholine and ecospirin taken are 500 and 75 mg (as per the pharmaceutical formulation). The diffusion coefficient of the drug under physiological conditions was estimated from the electrochemical studies using the Randles–Sevcik equation, i_p = 2.69 × 10^5 n^(3/2) C D^(1/2) v^(1/2), where C = 0.5 M for citicholine and 0.075 M for ecospirin, n = 1, v = 1 to 200 mV/s, and i_p is the peak current (in mA) from the cyclic voltammetric studies. This resulted in D values of 5.69 × 10^−12 cm2/s for ecospirin tagged on ZnO, 4.06 × 10^−13 cm2/s for citimet tagged on ZnO, and 3.69 × 10^−13 cm2/s for the combined drug tagged on ZnO; thus D_Combined < D_Citicholine < D_Ecospirin. The trend in the diffusion coefficients of the drugs indicates that, in the case of the combined ecospirin and citicholine formulation, the drug diffuses into the system more effectively due to the protonation of the heteroatoms on the cytidine and choline moieties and the increase in acidity of the system from the formation of salicylic acid and acetic acid by hydrolysis of the aspirin in ecospirin. This pushes the pH of the system towards the physiological pH of 5.5, facilitating easy oxidation and hydrolysis of the drug and leading to facile adsorption of neurotransmitters such as choline by the system, which assists in the treatment of patients at risk of stroke and in the avoidance of stroke recurrence; in autistic patients (patients with special needs and abilities), the ZnO-tagged drug will not only act as a neurotransmitter source after hydrolysis, but the drug carrier ZnO will also act as a Zn supplement for the system to enhance the absorption of the choline (neurotransmitter) formed during the breakdown of the


Table 7 Comparison of the scan rate, peak potential, and peak current for Ecospirin and Citimet at different scan rates (SR, mV s−1)

Sl. no   SR    Scan      Voltage (V)                          Current (mA)
1        1     Forward   Peak @ −0.2                          0.15
               Reverse   Peak @ −0.25; Hump @ −0.1            −0.095; −0.025
2        5     Forward   Peak @ −0.1                          0.2
               Reverse   Peak @ −0.31; Hump @ −0.25           −0.21; −0.1
3        10    Forward   Peak @ −0.1                          0.3
               Reverse   Peak @ −0.39; Hump @ −0.22           −0.35; −0.1
4        25    Forward   Peak @ 0.35                          0.45
               Reverse   Peak @ −0.41; Hump @ −0.37           −0.19; −0.09
5        50    Forward   Peak @ 0.25                          0.65
               Reverse   Peak @ −0.49; Hump @ −0.3            −0.25; −0.12
6        75    Forward   Peak @ 0.32                          0.7
               Reverse   Peak @ −0.52; Hump @ −0.35           −0.35; −0.19
7        100   Forward   Peak @ 0.38                          0.8
               Reverse   Peak @ −0.59; Hump @ −0.41           −0.42; −0.22
8        125   Forward   Peak @ 0.49                          0.95
               Reverse   Peak @ −0.57; Hump @ −0.54           −0.6; −0.54
9        150   Forward   Peak @ 0.5                           1.00
               Reverse   Peak @ −0.68; Hump @ −0.57           −0.58; −0.49
10       175   Forward   Peak @ 0.52                          1.10
               Reverse   Peak @ −0.68; Hump @ −0.52           −0.6; 0.45
11       200   Forward   Peak @ 0.58                          1.5
               Reverse   Peak @ −0.70; Hump @ −0.55           −0.65; −0.51

drug in the body at pH = 5.5 in comparison with the individually prescribed and orally administered pharmaceutical formulations of Ecospirin and Citicholine.
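For readers who want to reproduce this kind of estimate, the short Python sketch below rearranges the Randles–Sevcik relation, as quoted above in the form i_p = 2.69 × 10^5 n^(3/2) C (D v)^(1/2), to solve for the diffusion coefficient D from a measured peak current. The unit handling and the example numbers are placeholder assumptions, not the authors' measured data; consistent units must be chosen by the user.

```python
from math import sqrt

def randles_sevcik_D(i_peak, n, conc, scan_rate):
    """Solve i_p = 2.69e5 * n**1.5 * C * sqrt(D * v) for D.

    Follows the form quoted in the text (electrode area folded into the constant);
    the caller must supply mutually consistent units for i_p, C and v.
    """
    return (i_peak / (2.69e5 * n ** 1.5 * conc * sqrt(scan_rate))) ** 2

# Placeholder example: n = 1, C = 0.075 M (ecospirin), v = 100 mV/s = 0.1 V/s
D = randles_sevcik_D(i_peak=0.8e-3, n=1, conc=0.075, scan_rate=0.1)
print(f"D ~ {D:.3e} (units follow from the unit system used for i_p, C and v)")
```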

5 In-Vitro Analysis of the Drug Tagged to ZnO on a Human Blood Sample, and Conclusions The observations from the DDS and its effect on the blood sample are provided in Table 8. The blood samples were collected from volunteer donors with proper consent; the donor had a previous stroke history and was under prescription of a blood thinner and cholesterol-control medicines along with neurotransmitters to assist motor control. The test criteria exclude samples from donors with health risks or medical conditions such as diabetes, heart risk, high cholesterol, genetic disorders, a history of interacting or abused drug intake, and blood diseases such as


Fig. 8 Cyclic Voltammogram of Ecospirin and Citimet (pharmaceutical composition) tagged to ZnO coated on Pt working electrode, Saturated Calomel reference electrode and Pt wire counter electrode, in 1 M KCl electrolyte at different scan rates from 1 to 200 mVs−1 in the potential window of −1.2 to 1.2 V

anaemia, leukaemia, etc. The blood sampling followed the norms and policy of the Health Services of India, including the use of disposable syringes and incubators to carry the blood samples, and also followed the ASTM F1984-99 and ASTM F2888-13 procedures. The time elapsed from the collection of the sample to the in-vitro analysis was approximately 25 min. The blood samples were incubated with the drugs at different concentrations of 15, 25, 62.5, 125, and 250 μg/ml in different vials for 24 h, and the appropriate parameter estimations were performed.

5.1 Complete Blood Count
1. The RBC count of the baseline, control, and Drug Delivery System (DDS) treated blood is greater than 5, well within the biological reference interval of 4.5–6.5; the DDS does not reduce the RBC count and is hence not cytotoxic to RBCs.
2. Haemoglobin reduces on treatment with the DDS from the baseline value of 13.7 to around 12. This may be due to interaction of the drug carrier with the Fe2+ of haemoglobin. Although not serious, DDS treatment may lead to anaemia in the long run, which is curable with proper iron intake.
3. The haematocrit test indicating the packed cell volume (PCV) is around 47 for baseline and control, whereas it reduces to 35–40 in the case of DDS treatment. This is consistent with the haemoglobin reduction on DDS treatment. Although the

Table 8 Observations from DDS and its effect on the blood sample

Parameters | Base line | Control | DDS 250 μg/ml | DDS 125 μg/ml | DDS 62.5 μg/ml | DDS 31.25 μg/ml | DDS 15.16 μg/ml | Unit | Biological ref. interval

Complete blood count
RBC count | 5.75 | 5.3 | 5.27 | 5.06 | 4.69 | 5.37 | 5.52 | millions/cmm | 4.5–6.5
Hb | 13.7 | 13.1 | 12.6 | 11.9 | 10.5 | 12.2 | 12.5 | g/dl | 13.5–18
PCV (Haematocrit) | 47.8 | 46.2 | 37.2 | 35.5 | 36.7 | 38.5 | 39.9 | % | 40–54
Mean corpuscular volume | 83.3 | 87.7 | 70.2 | 70.2 | 78.2 | 71.7 | 71.1 | fL | 76–96
Mean corpuscular Hb | 23.8 | 22.9 | 23.9 | 23.5 | 22.4 | 22.7 | 22.6 | pg | 27–32
Mean corpuscular Hb concentration | 28.6 | 28.4 | 27.3 | 28.5 | 28.7 | 29.7 | 29.6 | g/dl | 30–35

Leucocytes
Total leucocyte count | 11,280 | 8500 | 2230 | 3000 | 6010 | 7130 | 7500 | cells/cmm | 4000–11,000
Differential count
Neutrophils | 66.6 | 55.6 | 25.1 | 39.6 | 59.8 | 60.5 | 54.5 | % | 40–75
Lymphocytes | 22.8 | 29.5 | 58.7 | 43.5 | 25.6 | 28.5 | 32.6 | % | 20–45
Monocytes | 5.2 | 9.1 | 4.6 | 8.1 | 9 | 5.6 | 6.5 | % | 2.0–10.0
Eosinophils | 5.4 | 5.1 | 11.6 | 8.1 | 4.6 | 4.6 | 5.2 | % | 1.0–6.0
Basophils | 0.1 | 0.7 | 0 | 0.7 | 1 | 0.8 | 1.2 | % | 0–1.0

Platelets
Platelet count | 2,88,000 | 2,83,000 | 1,40,000 | 1,73,000 | 2,70,000 | 3,18,000 | 2,82,000 | cells/cmm | 150,000–400,000
Mean platelet volume (MPV) | 10 | 9 | 10.9 | 10.9 | 9 | 10.7 | 11 | fL | 7.2–11.7

Blood group & Rh type | B +ve | NA | NA | NA | NA | NA | NA | – | –

Prothrombin time
Prothrombin time | 14.1 | 11.7 | 12.2 | 12.9 | 11.8 | 13.3 | 12.1 | s | 9.3–14.0 s
PT (INR) value | 1.19 | 0.99 | 1.03 | 1.09 | 1 | 1.13 | 1.03 | – | 0.8–1.2; with oral anticoagulant t/t: 2.0–3.0; with mechanical prosthetic valve: 2.5–3.5
Activated partial thromboplastin time | 31.9 | 25.9 | 27.9 | 27 | 25.8 | 28.1 | 27.2 | s | 25.0–43.0
Viscosity | 5.12 | 4.56 | 3.85 | 4.08 | 4.08 | 4.12 | 4.63 | mPa s or cP | 3.85–5.3


PCV has reduced after DDS treatment, it is still close to the lower limit of the biological reference interval of 40–54, so proper iron intake along with the DDS would mitigate the haematocrit reduction induced by the DDS.
4. The mean corpuscular volume (MCV) at baseline is greater than 80, indicating a good RBC volume and a non-anaemic condition. Upon treatment with the DDS the MCV reduces to 71, then increases to 78, and again falls to 70, making the average value ~72.22, which is at the lower limit of the biological reference interval of 76–96. This indicates that the carrier or the drug itself is affecting the MCV; it may not be a huge immediate concern, but in the long run it may affect the RBCs of the subject.
5. The mean corpuscular haemoglobin (MCH) is on the lower side for baseline and control as well as on DDS treatment; the DDS seems to have no great effect on MCH. This indicates that the oxygen-carrying ability of the individual under consideration is already on the lower side, which might cause mild headaches.
6. The mean corpuscular haemoglobin concentration (MCHC) is on the lower side for baseline and control as well as on DDS treatment; the DDS seems to have no great effect on MCHC, indicating again that the oxygen-carrying ability of the individual is already on the lower side. As the DDS has no great influence on MCHC, it is evident that the drugs administered have the effect of reducing the MCHC.

5.2 Leucocytes The baseline value is higher than the biological reference range of 4000–11,000 but reduces on DDS treatment. Although the reduction in leucocyte count is evident, only at the higher concentrations of DDS does the value fall below the lower limit. This observation might be attributed to the interaction of the drug carrier with the leucocytes of the subject. In addition, the monocytes and lymphocytes seem to increase at higher concentrations of DDS, demonstrating that more T and B cells are produced by the subject when it encounters the DDS. This increases the possibility of an enhanced immune response, but at the same time carries the risk of overproduction of immune cells in the subject. The higher percentage of eosinophils at the higher dosages of DDS demonstrates a response by the subject towards the drug carrier as an antigen or foreign substance, analogous to parasites or bacteria; the drug carrier seems to increase the immune response towards parasitic infections (anti-bacterial functionality).

5.3 Platelets The platelet count of the subject lies roughly midway between the lower and upper limits of the biological reference interval of 1,50,000–4,00,000. The platelet count decreases


with increasing DDS dosage. At the very high dose of 250 μg/ml of DDS the platelet count falls below the lower limit, to 1,40,000. Except at this very high concentration, the platelet count remains within the biological reference interval for the lower concentrations of DDS.

5.4 Prothrombin Time The prothrombin time, which indicates the clotting time of the blood, is well within the biological reference interval in all cases. This indicates that the subject's blood does not clot too fast, and hence the risk of a blood clot in an artery or vein, or of a stroke or heart attack due to a blood clot, is meagre.

5.5 Path Forward From the above tests, results, and discussion, it is clear that ZnO-tagged citicholine can act as an efficient drug delivery system with no cytotoxicity, good cell viability, a good immune response, and a Zn supplement for the subject. Thus, ZnO-tagged citicholine can be a useful neurotransmitter drug for specially abled children and adults to improve their brain function, motor skills, and cognition. To take the DDS towards clinical trials, the following studies need to be conducted to determine its compatibility and functionality: (i) in-vitro MTT and TUNEL assays in a human brain cell line, (ii) in-vivo testing in rabbits and rats, and (iii) testing in human subjects (after proper approvals for clinical trials). Acknowledgements The authors thank WBR Labs for carrying out the drug analysis of the blood samples provided by Mr. Kaushik A Palicha (co-author of the paper), and IIT Madras, Department of Chemistry, for the FTIR, UV-Vis and XRD analysis of the drug, both tagged and untagged.

References
1. Rasmussen JW, Martinez E, Louka P, Wingett DG (2010) Zinc oxide nanoparticles for selective destruction of tumor cells and potential for drug delivery applications. Expert Opin Drug Deliv 7(9):1063–1077
2. Zhang H, Chen B, Jiang H, Wang C, Wang H, Wang X (2011) A strategy for ZnO nanorod mediated multi-mode cancer treatment. Biomaterials 32(7):1906–1914
3. Al-Ajmi MF, Hussain A, Ahmed F (2016) Novel synthesis of ZnO nanoparticles and their enhanced anticancer activity: role of ZnO as a drug carrier. Ceram Int 42(3):4462–4469
4. Kishwar S, Asif MH, Nur O, Willander M, Larsson PO (2010) Intracellular ZnO nanorods conjugated with protoporphyrin for local mediated photochemistry and efficient treatment of single cancer cell. Nanoscale Res Lett 5:1669–1674


5. Xie H, Wen B, Xu H, Liu L, Guo Y (2016) Catalase immobilized ZnO nanorod with β-cyclodextrin functionalization for electrochemical determination of forchlorfenuron. Int J Electrochem Sci 11(4):2612–2620
6. Cai X, Luo Y, Zhang W, Du D, Lin Y (2016) pH-sensitive ZnO quantum dots-doxorubicin nanoparticles for lung cancer targeted drug delivery. ACS Appl Mater Interfaces 8(34):22442–52240
7. Guo D, Wu C, Jiang H, Li Q, Wang X, Chen B (2008) Synergistic cytotoxic effect of different sized ZnO nanoparticles and daunorubicin against leukemia cancer cells under UV irradiation. J Photochem Photobiol B: Biol 93(3):119–126
8. Muhammad F, Guo M, Guo Y, Qi W, Qu F, Sun F, Zhao H, Zhu G (2011) Acid degradable ZnO quantum dots as a platform for targeted delivery of an anticancer drug. J Mater Chem 21(35):13406–13412
9. Tripathy N, Ahmad R, Ko HA, Khang G, Hahn YB (2015) Enhanced anticancer potency using an acid-responsive ZnO-incorporated liposomal drug-delivery system. Nanoscale 7(9):4088–4096
10. Kruanetr S, Pollard P, Fernandez C, Prabhu R (2014) Electrochemical oxidation of acetyl salicylic acid and its voltammetric sensing in real samples at a sensitive edge plane pyrolytic graphite electrode modified with graphene. Int J Electrochem Sci 9:5699–5711
11. Yilmaz B, Kaban S (2016) Electrochemical behavior of atorvastatin at glassy carbon electrode and its direct determination in pharmaceutical preparations by square wave and differential pulse voltammetry. Indian J Pharmaceut Sci 78(3):360–367
12. Gunache RO, Bounegru AV, Apetrei C (2021) Determination of atorvastatin with voltammetric sensors based on nanomaterials. Inventions 6(3):57
13. Dermi¸s SA, Aydo˘gan E (2010) Electrochemical study of the antiplatelet agent clopidogrel and its determination using differential pulse voltammetry in bulk form and pharmaceutical preparations with a glassy carbon electrode. Die Pharmazie-Int J Pharmaceut Sci 65(3):175–181
14. Shadlaghani A, Farzaneh M, Kinser D, Reid RC (2019) Direct electrochemical detection of glutamate, acetylcholine, choline, and adenosine using non-enzymatic electrodes. Sensors 19(3):447

Design of a Development Board Based on the Microcontroller ATmega328P, Including a Symmetric Low-Noise Voltage Source Valentina Bastida Montiel and Marco Gustavo S. Estrada

Abstract For this study, we designed a development board based on the microcontroller ATmega328P, which also includes a symmetric low-noise voltage source. The development board can be used as a tool to understand microcontrollers. In addition, it can also be used for signal conditioning for any electronics project that requires analog or digital conditioning, but especially for biomedical signals due to its high signal-to-noise ratio. Keywords Development board · Microcontroller · Biomedical instrumentation

1 Background Microcontrollers have been widely used in various areas of technology. Years ago they were very useful for developing the first calculators, and nowadays they are used for all kinds of applications ranging from home appliances to medical instruments, particularly pacemakers, monitoring equipment, and more. AVR microcontrollers specifically were developed in the '90s. This is one example of a RISC microcontroller with a modified Harvard architecture; its characteristics make it possible to have separate memories for the data and the program. In addition, the RISC architecture leads to very quick execution of instructions. There are many microcontrollers that belong to this family, including the ATmega2560 and the ATmega328P; the latter is the architecture we selected for this project. Development boards are meant to make the approach to hardware architectures more friendly for users. A great example of a development board that was used as a precursor for this project is the Arduino UNO [1], developed by Massimo Banzi.

Física Biomédica, Facultad de Ciencias, Universidad Nacional Autónoma de México
V. B. Montiel (B) · M. G. S. Estrada Universidad Nacional Autónoma de Mexico, Av. Universidad 3004, Copilco Universidad, Coyoacán, 04510 Mexico City, Mexico e-mail: [email protected] URL: https://www.fciencias.unam.mx/estudiar-en-ciencias/estudios/licenciaturas/fbiomedica © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1_33


This board was designed to be fully plug and play, meaning it only needs a USB cable to power it. Nowadays this kind of hardware varies in architecture, cost, and function, so it is important to know the capabilities of each one in order to choose the board that best suits the needs of the intended projects. Looking specifically at hardware architectures, there are several options. For instance, some microprocessors do not have I/O ports and are composed only of an ALU, registers, and a control unit; any storage element and any input or output device are additional [2]. In contrast, a microcontroller has memory and input/output ports and, most importantly, analog-to-digital converters. The hardware architecture chosen for this work is a microcontroller, specifically the ATmega328P. The ATmega328P is an 8-bit microcontroller based on the AVR RISC architecture [5]. It has 32 KB of in-system programmable flash memory for reading and writing, and its throughput approaches 1 million instructions per second per MHz. It also has 6 PWM channels, 6 ADC channels with a resolution of 10 bits, programmable USART ports, an SPI serial interface, and the I2C communication protocol. This microcontroller can use either an external oscillator or the internal oscillator; the oscillator frequency can range from 0.4 to 16 [MHz] [6]. Additionally, for the design of this board, the package used is the PU, which is the through-hole variant. The main objective of this project was to design and construct a development board that contains a low-noise symmetric voltage source and a flexible hardware architecture that provides A-D converters and can be used in biomedical instrumentation applications. This project intends to offer an instrumentation development tool that integrates both analog and digital blocks, so that it is suitable for analog conditioning and digital processing of signals on the same board. To achieve this, the process was divided into five specific objectives:
1. Designing a symmetric low-noise source with an output voltage ranging from 2 to 15 [V] that is stable and has an adequate component distribution to avoid overheating of the PCB.
2. Choosing a hardware architecture that has an internal oscillator and analog-to-digital channels and that offers low cost, flexibility, and practicality.
3. Designing the board with enough flexibility that analog and digital functions are easy to enable and disable.
4. Designing a development board that has the necessary characteristics for use in biomedical instrumentation applications, including those that require biopotential acquisition.
5. Designing a board that is interactive and accessible, so that it makes it possible to learn subjects such as analog and digital electronics.
All these objectives will be further discussed in the results and conclusions sections.


1.1 Motivation

This board intends to complement the capabilities that a commercial development board can offer, while adding elements that allow the conditioning of analog signals through instrumentation. The board is divided into two main sections, one containing a symmetric low-noise voltage source and another containing the microcontroller; this makes possible the digitization, storage, and processing of analog signals. The purpose of this development is to offer a tool that can be used for more complex instrumentation projects requiring analog and/or digital conditioning.

2 Development

The board was designed using components that grant reliability and durability, meaning they give the greatest stability for the intended functions of the board. First, the symmetric voltage source, which rectifies the alternating line voltage, was designed. For this purpose, the W04G diode bridge was used, which can deliver up to 1.5 [A] and withstand an RMS voltage of 280 [V]. The rectified voltage was filtered using 5600 [µF] electrolytic capacitors so that the ripple voltage was minimized. Then, the voltage regulators LM317 and LM337 were selected, which have a variable output and whose use is highly recommended for biopotential acquisition. The output voltage of the regulators is adjusted using additional resistors, and four tantalum capacitors were added to further filter the output signal. These capacitors are particularly recommended in the datasheet when the filter capacitor is not near the regulator, and they also enhance the transient response of the circuit. In the case of the LM317, we used two diodes for protection and to provide a discharge path for the voltage. Additionally, the voltage regulator LM7805 was used to provide a fixed 5 [V] supply for the digital section of the board. We also used voltage dividers so that the output voltage of the symmetric source could be digitized. There are three switches included in the design. One is a single-pole double-throw switch whose function is to interrupt the connection between the filter capacitor and the diode bridge. With this, it is possible to visualize the voltage rectification process, whether as the full-wave rectified voltage or as the ripple voltage. Another switch is a double-pole double-throw switch, whose function is to disconnect the two A-D channels used to digitize the output voltage of the symmetric source. The last switch is a push-button used to reset the microcontroller. The interconnection of the elements will be discussed further below. A block diagram of the design is shown in Fig. 1. Figure 1 shows that the development board is divided into two main blocks, the analog one and the digital one. In more detail, the voltage adjusting block has two


Fig. 1 Block diagram of the development board

potentiometers, each one used to adjust one output of the symmetric voltage source. The input and output blocks consist of terminal blocks (tblocks) so that the voltages can be measured or used. Finally, the interactive pins consist of headers, so that the user can easily interact with the hardware architecture or with the interactive functions of the board.

2.1 Analog Block

The analog block contains the variable symmetric voltage source, which is adjustable from ±1.9 to ±19.9 [V], and the fixed 5 [V] output, which is necessary for the digital block. The main parts of this block are the alternating input, the rectification switch, the voltage regulators, and the voltage dividers. For the alternating input, the connection from the alternating voltage to the board is made directly through a center-tapped transformer. For the correct functioning of the board, the transformer must supply 24 [V] at 1 or 2 [A]. A three-input terminal block is directly connected to the diode bridge. The output of the bridge is filtered using the electrolytic capacitors mentioned above. This can be seen in Fig. 2. Continuing with the rectification switch, it was designed so that the board offers an interactive way to understand and visualize the process of alternating voltage rectification. This part includes the diode bridge with its rectification pins, the rectification switch, and the 5600 [µF] electrolytic capacitors. In Fig. 3, we can see the connection between capacitor C1 and the single-pole double-throw switch that interrupts it. Therefore, when the capacitor is disconnected, the rectified voltage can be seen on the rectification pins, and when it is connected, the ripple voltage can be visualized.
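As a rough check that is not spelled out in the paper, the residual ripple on each filter capacitor can be estimated with the standard full-wave approximation, assuming 60 Hz mains and the full 1.5 [A] rating of the bridge:

$$V_{ripple} \approx \frac{I}{2 f C} = \frac{1.5\ \mathrm{A}}{2 \times 60\ \mathrm{Hz} \times 5600\ \mathrm{\mu F}} \approx 2.2\ \mathrm{V}$$

At lighter loads the ripple scales down proportionally, and the LM317/LM337 regulators remove most of what remains.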


Fig. 2 Alternating input circuit

Fig. 3 Rectification switch circuit

Then there are the positive voltage regulators, which were selected for their suitability for biomedical applications. The circuit is shown in Fig. 4. The input of each regulator was connected to the positive rectified output. The LM317 circuit was based on the one recommended in its datasheet. This voltage regulator can supply up to 1.5 [A] and requires a minimum load current of about 0.01 [A]. Its output can be regulated between 1.25 and 37 [V]. The resistor R1 guarantees a minimum load resistance at the regulator output, avoiding a short circuit that would lead to a malfunction of the board. The LM7805 regulator is used to provide a voltage supply for the digital part of the development board. This regulator can deliver up to 1.5 [A] of output current and withstand an input voltage of 28 [V] maximum.
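For reference, the output of the adjustable regulators follows the standard datasheet relation; the resistor values used on this board are not listed in the text, so the symbols below refer to the datasheet's feedback pair (the resistor from the output to the ADJ pin and the resistor or potentiometer from ADJ to the return rail), not necessarily to the reference designators in Fig. 4:

$$V_{out} = 1.25\ \mathrm{V}\left(1 + \frac{R_{2}}{R_{1}}\right) + I_{ADJ} R_{2}$$

The LM337 obeys the same relation with a negative reference, and the adjustment current $I_{ADJ}$ (on the order of tens of microamperes) is usually negligible.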


Fig. 4 Schematic circuit of voltage regulators LM317 and LM7805

Fig. 5 Schematic circuit of LM337 voltage regulator

The negative voltage regulator for this board is the LM337, which supplies up to 1.5 [A] of output current and requires a minimum load current of about 0.01 [A]. Its voltage can be adjusted between −1.25 and −37 [V], depending on the input source. The circuit shown in Fig. 5 is the one recommended in the datasheet; a resistor was added here for the same purpose as in the LM317 circuit. Given that one of the goals of the board is to show the output voltage on an LCD, the conditioning needed to accomplish this was implemented with two voltage dividers, which ensure that the voltage sent for digitization is 5 [V] at most. Figure 6 shows the schematic of this connection. The voltage dividers were built with 10 and 4.12 [kΩ] resistors. The positive voltage divider output was connected to one input of the double-pole double-throw


Fig. 6 Voltage dividers for digitalization

Fig. 7 Negative voltage conditioning

switch. The other input of the same pole goes to A-D channel A1 of the microcontroller, which is connected to an interactive pin with the tag PC1. The negative voltage divider output, on the other hand, carries a negative voltage and therefore needs additional conditioning: its reference has to be offset so that digitization is possible. This is shown in Fig. 7. One terminal of this voltage divider is connected to the negative regulated output and the other to the 5 [V] output voltage.
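With the resistor values given above, the scaling applied to the positive rail before digitization is simply the divider ratio (a worked value, not stated explicitly in the paper):

$$V_{ADC} = V_{source}\cdot\frac{4.12\ \mathrm{k\Omega}}{10\ \mathrm{k\Omega} + 4.12\ \mathrm{k\Omega}} \approx 0.29\, V_{source}$$

The negative rail uses the same resistor pair but is referenced to the +5 [V] output instead of ground, which shifts its divided value into the positive input range of the A-D converter.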

2.2 Digital Block

This block contains the hardware architecture, the ATmega328P-PU, the interactive pins for the LCD display, the I/O channels of the board, and the reset button for the microcontroller. In this section, we also describe the programmer needed for the board. To display the output voltage of the symmetric source, we used two A-D channels, which can be enabled or disabled so they may serve other functions, and six digital channels. Figure 8 shows the hardware architecture along with its power supply. Note that the analog reference is connected to 5 [V]. The connections for the A-D enabler switch can also be seen. Something to note here is that no external crystal is connected. This means the microcontroller runs on its internal 8 [MHz] oscillator, which also frees channels PB6 and PB7, where the crystal would usually be connected, for use as digital channels.


Fig. 8 Hardware architecture ATmega328P-PU schematic

As for the reset button, it is a push-button connected to ground. This button aids the process of programming the microcontroller and makes it possible to reset it, and it should only be pressed when one of these two actions is desired. The reset signal goes to channel PC6, according to the information given in the datasheet. The pin connection used for the display is based on an example code that can be consulted in [4]. The design seeks to simplify the display connection for user-friendliness. Figure 9 shows the schematic; it is important to note that these connections are already made on the PCB, so it is only necessary to plug in the display as shown in Fig. 10. The architecture has various I/O ports, and the interactive pins are connected directly to these ports so that the user can interact with the microcontroller. In this way, the board makes it possible to implement any required function, as can be seen in Fig. 11. The specific functions of these pins can be checked in reference [6]. Moving on to the programmer, the most important aspect of choosing this architecture is the variety of functions it offers, so interacting with the architecture is essential. For programming the microcontroller, the CP2102 USB-to-TTL serial converter was chosen. Through pins PD0 and PD1 of the microcontroller, connected to the CP2102, data can be received and transmitted. More specifically, this allows a program to be loaded into the microcontroller memory so that it can be executed, and it also allows the use of serial plotters if needed. A minimal example of how the display and the A-D channels work together is sketched below.
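The sketch below is only an illustration of the kind of firmware the board supports; it is not the authors' reference code [4]. The LCD pin mapping, the divider ratio, and the 5 [V] ADC reference are assumptions made for the example.

```cpp
// Illustrative sketch (Arduino IDE, ATmega328P at 8 MHz): read the divided
// positive source voltage on A1 (PC1) and show the estimate on the LCD.
// Pin numbers below are assumed for the example, not taken from the paper.
#include <LiquidCrystal.h>

const float VREF = 5.0;                      // ADC reference (AVcc)
const float DIVIDER = 4.12 / (10.0 + 4.12);  // 10 k / 4.12 k divider ratio

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);       // RS, EN, D4..D7 (assumed wiring)

void setup() {
  lcd.begin(16, 2);                          // 16x2 character display
  lcd.print("V+ source:");
}

void loop() {
  int raw = analogRead(A1);                  // 10-bit conversion on channel A1
  float vAdc = raw * VREF / 1023.0;          // voltage seen at the ADC pin
  float vSource = vAdc / DIVIDER;            // estimated regulator output
  lcd.setCursor(0, 1);
  lcd.print(vSource, 2);
  lcd.print(" V   ");
  delay(500);
}
```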


Fig. 9 Display LCD connection with interactive pins

Fig. 10 Display LCD connection with the development board


Fig. 11 Microcontroller and input-output pins

3 Results

The final design of the board can be seen in Fig. 12. The spatial arrangement of the elements allows heat to dissipate efficiently, and the different elements can be identified by their tags in this figure. To check the noise and stability of the source, we carried out several tests. First, voltage measurements were taken at the moment the board was energized, and we quantified the time the voltage took to reach a stable value. For this, a 0.15 [A] load was connected to the source output, with the voltage initially set to 12.10 [V]. From this test we obtained the values in Table 1. The discharge time with the same 0.15 [A] load connected was also measured, giving the values in the right column of the table. Then, to obtain the SNR of the source output, a waveform generator was used to produce a sinusoidal signal of 248 [Hz] with an amplitude of 0.6 [Vpp]. The signal was amplified using a TL084 operational amplifier in a non-inverting


Fig. 12 Development board Maxima

Table 1 Measures of voltage stabilization

Measure number    Stabilization time (s)    Discharge time (s)
1                 25.32                     20.77
2                 12.06                     22.50
3                 14.37                     21.19
4                 20.87                     22.51
5                 17.95                     21.03
6                 13.65                     21.29
7                 13.56                     19.47
8                 16.87                     21.62
9                 16.77                     21.22
10                11.17                     19.61


Table 2 Measures of noise comparing an amplified and a non-amplified signal

Measure number    1     2     3     4     5
Noise [mV]        10    50    50    40    50

configuration. The amplifier was powered from the output of the development board source. Both signals were then compared using an oscilloscope, and the noise generated by the source was measured. This process produced the data in Table 2.
The parameters of the development board for the analog block are the following:
• Symmetric voltage source:
  – Voltage output: ±1.9 to ±19.9 [V] with 1.5 [A].
• Fixed source of +5 [V] with 1.5 [A].
• Power on/off switch.
• A-D channel enabler switch.
• Brightness regulator for the LCD.
The parameters for the digital block are the following:
• ATmega328P microcontroller:
  – Operating voltage: 5 [V].
  – Operating frequency: 8 [MHz].
  – Six A-D converter channels with 10-bit resolution.
  – Sixteen digital channels.
• Sixteen pins for the display connection.
• 28 interactive channels for the microcontroller.
• Reset button.

3.1 Software

The Arduino IDE was modified so that it could work with the ATmega328P operating at a frequency of 8 [MHz]. For more information, see the user's guide available in reference [3], which contains instructions for making the corresponding changes.


3.2 Biomedical Application

The board was tested by designing a biopotential acquisition system for an ECG, which can be used to obtain standard, augmented, or precordial leads. The circuit consisted of an instrumentation amplifier with a gain of 20 [dB] and a fourth-order bandpass filter with cutoff frequencies of fmin = 0.15 [Hz] and fmax = 247 [Hz]. It also has a notch filter at 60 [Hz] and a final amplification stage of 40 [dB]. This circuit requires a ±12 [V] supply, so the symmetric source of the development board was used. The stabilization time was measured using a counter displayed on the LCD, and the output signal was observed on an oscilloscope; in this way we obtained an amplified signal free of noise. The fact that the biomedical application did not show a signal polluted by power-line interference confirms that this board is suitable for biomedical instrumentation projects. Furthermore, the ECG signal could be digitized using the microcontroller.
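As a worked figure not given explicitly in the paper, and ignoring losses in the filter passband, the total gain of this chain is

$$20\ \mathrm{dB} + 40\ \mathrm{dB} = 60\ \mathrm{dB} \approx \times 1000,$$

so a typical 1 [mV] ECG deflection arrives at the A-D converter at roughly 1 [V], comfortably within the 0–5 [V] input range.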

4 Discussion

The board was designed with an element arrangement that prevents the circuit traces from overheating under medium to intense use. It was tested with different analog and digital circuits for continuous periods of up to 12 h without presenting complications or malfunctioning. The chosen hardware architecture works well and reliably. In addition, if for any reason a higher clock frequency than the 8 [MHz] internal oscillator is desired, this can be achieved by adding the corresponding external components and changing the bootloader. From the stability and noise results shown in Sect. 3, the average and standard deviation of the voltage stabilization time were 16.26 ± 4.32 [s]. The final stabilized voltage is approximately 99.67% of the initial voltage, giving a 0.33% average error. This change is associated with the heating of the components on the board, so this time is considered the response time of the voltage source. For the SNR, the RMS voltage of the measurements in Table 2 was calculated, giving an average SNR of 24.68 ± 6.09 [dB]. The noise variations measured are associated with the cables used in the experimental setup. The interactive rectification-voltage visualization function was successfully tested, and the rectified voltage and the ripple voltage can be correctly distinguished. Regarding the software modifications, some changes were made to the Arduino IDE so that it allows the ATmega328P to work at 8 [MHz]; for further details we recommend reference [3].
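The summary figures above can be reproduced from the raw values in Tables 1 and 2. The small program below is only a sketch of that calculation; it assumes that each SNR measurement is taken as 20·log10 of the 0.6 [Vpp] test signal over the corresponding noise amplitude in Table 2 (the paper does not spell out the exact conversion), which yields 16.26 ± 4.32 [s] and approximately 24.8 ± 6.1 [dB], close to the reported values.

```cpp
// Recomputes the Discussion's summary statistics from Tables 1 and 2
// (sample standard deviation, n - 1 in the denominator).
#include <cmath>
#include <cstdio>
#include <vector>

static double mean(const std::vector<double>& v) {
  double s = 0.0;
  for (double x : v) s += x;
  return s / v.size();
}

static double sampleStd(const std::vector<double>& v) {
  double m = mean(v), s = 0.0;
  for (double x : v) s += (x - m) * (x - m);
  return std::sqrt(s / (v.size() - 1));
}

int main() {
  // Table 1: stabilization times in seconds.
  std::vector<double> tStab = {25.32, 12.06, 14.37, 20.87, 17.95,
                               13.65, 13.56, 16.87, 16.77, 11.17};
  // Table 2: measured noise in volts; 0.6 Vpp reference signal (the same
  // amplitude convention is assumed for signal and noise, so any rms
  // conversion factor cancels in the ratio).
  std::vector<double> noise = {0.010, 0.050, 0.050, 0.040, 0.050};
  std::vector<double> snrDb;
  for (double n : noise) snrDb.push_back(20.0 * std::log10(0.6 / n));

  std::printf("Stabilization time: %.2f +/- %.2f s\n",
              mean(tStab), sampleStd(tStab));
  std::printf("SNR: %.2f +/- %.2f dB\n", mean(snrDb), sampleStd(snrDb));
  return 0;
}
```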


5 Conclusions

We successfully designed a development board that meets the characteristics outlined in our objectives, and we added some functions to make the board more user-friendly. The board allows working with analog and digital signals, which makes it a very useful tool for electric biopotentials, although its reach includes any type of electronics project that operates within the parameters of the board. It is also important to note that the digital and analog functions can be used simultaneously or independently. This tool can also be integrated into introductory electronics courses, because it makes it possible to understand and visualize voltage rectification without risk for the user. At the same time, its functions allow the development of more complex electronics projects, and given its dimensions, it will be useful in applications where space optimization is essential. The board makes it easy to learn about and work closely with hardware architectures, A-D conversion, and digital magnitudes. The symmetric source is safe and stable and has an SNR high enough for biomedical instrumentation to work without power-line complications. The ATmega328P, because of its flexibility, cost, and reliability, is the hardware architecture best suited to this project's vision, and since it is a well-known architecture, it is easier to learn for users who are beginning to work in digital electronics.

Acknowledgements We would like to acknowledge the valuable help of Dr. Erin Christy McKiernan; thank you very much for your constructive comments and for giving us some of your space and time to help us obtain better results while writing this paper.

References
1. Waclawek J (2022) The unofficial history of 8051. http://www.efton.sk/t0t1/history8051.pdf. Accessed 10 Sept 2022
2. Morris M. Arquitectura de computadoras, 3rd ed. Pearson Prentice Hall
3. Valentina B, Marco S (2021) Guía de usuario de tarjeta de desarrollo Máxima (user's guide, in Spanish). https://drive.google.com/file/d/1SopWiGRX6gXcD63ow_u7nvnF54sRfZjS/view?usp=sharing. Accessed Oct 2021
4. Valentina B, Marco S (2022) Código lectura de voltaje. https://drive.google.com/file/d/12vhJMIMMeKXMfwxSQbX7AqtJ13ZRuxFT/view?usp=sharing. Accessed Jul 2022
5. ATmega328P. https://www.microchip.com/en-us/product/ATmega328p. Accessed 8 Sept 2022
6. ATmega328P (2022) 8-bit AVR microcontroller with 32K bytes in-system programmable flash: datasheet. https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel7810-Automotive-Microcontrollers-ATmega328P_Datasheet.pdf. Accessed 10 Sept 2022
7. Valentina B, Marco S (2022) Diseño e instrumentación de una tarjeta de desarrollo basada en el microcontrolador ATmega328P integrando una fuente de voltaje simétrica. https://drive.google.com/file/d/1-7EratYVyeQpvGElffzUZJRwctVpU41U/view?usp=sharing. Accessed 13 Jun 2022

Performance Assessment of N+ SiGe-Based Dielectrically Modulated Vertical Tunnel Field-Effect Transistors (DM-VTFET) for Lower Power Biomedical Application

Shailendra Singh, Ankit Jain, and Balwinder Raj

Abstract This paper is the first to study label-free biosensors based on vertical dielectrically modulated tunnel field-effect transistors (DM-VTFET) by adopting different dielectric constants and engineering the gate work function. The performance of the vertical biosensor is improved over the conventional lateral design by adding an n+ pocket and a gate-to-source overlap. When vertical and lateral tunneling are coupled in the vertical biosensor (VB), the ON-state current increases and the subthreshold swing decreases. To test the sensing capability of the device, charged and neutral biomolecules are immobilized individually in the nanogap cavities. Various parameters, such as the dielectric constant, the silicon–germanium mole fraction, and the dimensions of the n+ pocket, have been varied to analyze the impact of charged biomolecules and of biomolecules with different dielectric constants on the device and on its sensitivity.

Keywords Band-to-band tunneling (BTBT) · N+ pocket · Vertically dielectrically modulated tunnel field-effect transistors (DM-VTFET) · Biosensor · Charged and neutral molecules

1 Introduction

The label-free detection made possible by nanoscale device biosensors is particularly attractive, and such devices are also compatible with CMOS technology [1–4]. However, FET-based biosensors face significant difficulties: the subthreshold swing (SS) cannot be steeper than 60 mV/decade because of the kT/q limit, and the response time is long.


Tunnel FET (TFET)-based biosensors can overcome this limitation, since they can achieve a steeper SS. For this reason, TFET-based biosensors have become a viable alternative to FET-based biosensors, offering superior sensitivity [5–7] and response time [8, 9]. Immobilized neutral (characterized by a dielectric constant k) and charged (characterized by a fixed charge density qf) biomolecules are used to measure the sensitivity of biosensors in nanogap cavities carved either in the gate insulator or in the gate metal [10, 11]. For this, different values of k in the device cavity are used to evaluate the electrical characteristics. To enhance the ON-state current, engineering has been carried out in the areas of heteromaterials, work function, and doping concentration [4, 12, 13]. The vertical structure supports fine scaling of the device, increasing density within the same area during fabrication. For the heterojunction, a lower-band-gap material is introduced in order to increase the tunneling rate. In a vertical TFET, tunneling takes place in two directions, lateral (point tunneling) and vertical (line tunneling) [14–16], and as a result it performs better than a lateral TFET in terms of ION, SS, ION/IOFF, and Vth. Numerous recent studies have indicated that the dielectrically modulated TFET (DM-TFET) is a viable candidate for a biosensor [17–19]. To realize the sensitivity associated with the change in ambipolar conduction, the influence of the gate-on-drain overlap is taken into consideration in the nanogap cavity [20]. It is shown in [21] that structural modifications may improve biosensor performance: short-gate DM-TFETs gain more sensitivity than standard DM-TFETs in terms of the variation in ON current. The underlap idea is used to compare DM-TFETs, and it is shown in [22] that the dual-metal short-gate biosensor offers better sensitivity than the single-metal short-gate biosensor (sensitivity examined in terms of SS and change in ION). This happens due to the presence of the n+ pocket layer at the source junction, which is placed near the source–channel junction. Two P–N junctions in which tunneling takes place simultaneously in two directions are thereby produced: one P–N junction is formed at the source–channel junction, and the other in the vertical direction under the source oxide region [23, 24]. As a result, the ON current rises and the SS becomes steeper. The gate overlapping the source also improves the ON current because of the good electrostatic control over the channel. The device-level figure of merit, the ON/OFF current ratio, is helpful in determining the sensitivity of the lateral and vertical DM-TFET structures. The next section discusses the simulation setup and device structure, followed by the results and discussion.

2 Simulation Models and Device Structure

The cross-sectional view of the vertical dielectrically modulated tunnel field-effect transistor (DM-VTFET) is shown in Fig. 1. The device channel length is kept at 50 nm, with 30 nm each for the source and drain sides, and the body thickness is kept at 10 nm. A doped SiGe pocket is placed at the source–channel junction to improve the band-to-band tunneling rate. To validate the simulation results, the VTFET is calibrated against the conventional VTFET of


[24], as shown in Fig. 2a. Figure 2b shows the difference obtained when the SiGe material is present at the source–channel junction: a large enhancement in the drain drive current is observed after adding SiGe, and the variation of its mole fraction is also shown in the graph for device optimization. The SiGe material is introduced to increase the tunneling rate, since the band gap of Ge is 0.7 eV while that of Si is 1.1 eV; if the germanium content is increased by raising the mole fraction, the device switching speed becomes faster. The device uses a germanium mole fraction of 0.5 to avoid fabrication issues and lattice-mismatch problems. The source, channel, and drain are uniformly doped with values of 5 × 10¹⁹ cm⁻³, 1 × 10¹⁶ cm⁻³, and 1 × 10¹⁸ cm⁻³, respectively, and the SiGe pocket layer doping is kept similar to the source doping concentration. A triple-metal gate is used for better electrostatic control over the channel, with work-function values taken as 4.16, 4.35, and 4.16 eV, respectively. The Silvaco TCAD tool is used to simulate both DM-TFETs [19]. For TFET devices, a nonlocal BTBT mechanism is adopted, which computes the tunneling rates occurring at the junctions. In addition, minority carrier recombination is modeled using the Shockley–Read–Hall (SRH) and Auger models, and the simulation also includes models for field-dependent mobility, quantum confinement, band-gap narrowing, and Fermi–Dirac statistics. Different gate dielectric constants are also considered to improve the drain drive current. For all of these dielectric materials, namely SiO2, Si3N4, Al2O3, and HfO2,

Fig. 1 Cross-sectional view of the proposed SiGe DM-VTFET with dual cavity of 10 nm


Fig. 2 a Calibration VTFET structure with conventional VTFET [24]. b Drain current variation of different SiGe mole fractions with respect to the gate voltage

the variation of several electrical parameters, such as the drain current, the energy band diagram, the electric field, and the surface potential, is shown in Figs. 3a, b and 4a, b, respectively. From all these characteristics, one can readily see that for high-k values the performance of the device improves in every respect. As shown in Fig. 5, the drain current increases with the dielectric constant because of the rise in the device capacitance. Due to the tunneling process


Fig. 3 a Drain current variation for different dielectric constants. b Energy band diagram variation for different dielectric constants of the proposed device

in a TFET occurring at the source/channel junction, the cavity is created at the source end of both the lateral and the vertical biosensor. The variation of the dielectric constant and of the charge density is used to assess the sensitivity of both devices. Neutral biomolecules, such as uricase, streptavidin, protein, APTES, and others, exhibit a fixed permittivity (k > 1), whereas for air k = 1 is


Fig. 4 a Electric field variation for different dielectric constants. b Surface potential variation for different dielectric constants of the proposed device


Fig. 5 Drain current variation for different k values of proposed device DM-VTFET

considered. Charged biomolecules, either positive or negative, such as DNA, carry fixed charge values. If the cavity is appropriately filled with such biomolecules and traps molecules of a similar kind, the proposed device can readily identify their type. It should be noted that the drain current is higher for the vertical biosensor than for the lateral one at a dielectric value of k = 4.

3 Results and Discussion

This section discusses positively and negatively charged biomolecules for different cavity lengths. Two main cavity lengths, 10 nm and 26 nm, are considered in this article to validate our results. From Fig. 6, one can clearly see that as the cavity length increases, the drain current decreases because of the weaker control over the channel. Among the cavity lengths considered, the smallest gives the highest drain current because it receives more depletion from the adjacent gate oxide material; however, increasing the cavity length increases the sensing capability. As shown in Fig. 6, the maximum drain current is achieved for a cavity length of 2 nm. In Fig. 7a, a 10 nm cavity length is used while varying the negative charge of the biomolecules. From the figure it can be observed that, as the negative charge density increases, the overall drain current of the device decreases, i.e., the performance degrades. Similarly, in Fig. 7b, for the same cavity length and positive

Fig. 6 Drain current variation for different cavity length values of the proposed device DM-VTFET (drain current Ids in A/µm on a logarithmic scale versus gate voltage Vgs in V, for cavity lengths of 2, 4, 6, and 8 nm)

charge biomolecules, the drain current response is proportional to the charge: the drain current rises as the positive biomolecule charge increases. This happens because of the higher depletion rate generated by large values of positive charge. Figure 8a and b then shows the positive and negative biomolecules varied for a cavity thickness of 26 nm, which was introduced to increase the sensitivity of capturing biomolecules. The behavior is similar to that of the 10 nm cavity; however, the drain current is comparatively lower because of the weaker control over the tunneling region that depletes the device.
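The comparisons above are usually condensed into simple figures of merit. The helper below is only an illustration of how a drain-current sensitivity and an ON/OFF ratio could be extracted from simulated transfer curves; the current values are placeholders, not data from this work.

```cpp
// Illustrative biosensor figures of merit from simulated ON/OFF currents.
// All numbers below are placeholders, not results from the paper.
#include <cstdio>

// Drain-current sensitivity: ON current with biomolecules in the cavity
// relative to the ON current with the cavity empty (air, k = 1).
double currentSensitivity(double iOnBio, double iOnAir) {
  return iOnBio / iOnAir;
}

// ON/OFF current ratio for a given bias condition.
double onOffRatio(double iOn, double iOff) {
  return iOn / iOff;
}

int main() {
  double iOnAir = 1.0e-9;   // placeholder ON current, empty cavity [A/um]
  double iOnBio = 4.0e-7;   // placeholder ON current, filled cavity [A/um]
  double iOff   = 1.0e-17;  // placeholder OFF current [A/um]

  std::printf("Sensitivity Ion(bio)/Ion(air) = %.2e\n",
              currentSensitivity(iOnBio, iOnAir));
  std::printf("ON/OFF ratio = %.2e\n", onOffRatio(iOnBio, iOff));
  return 0;
}
```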

Fig. 7 a Drain current characteristics of negative charge biomolecules for cavity length = 10 nm. b Drain current characteristics of positive charge biomolecules for cavity length = 10 nm (drain current Ids in A/µm on a logarithmic scale versus gate voltage Vgs in V, for qf = 0, ±1e7, ±1e9, ±5e11, and ±1e12)

Fig. 8 a Drain current characteristics of negative charge biomolecules for cavity length = 26 nm. b Drain current characteristics of positive charge biomolecules for cavity length = 26 nm (drain current Ids in A/µm on a logarithmic scale versus gate voltage Vgs in V, for qf = 0, ±1e7, ±1e9, ±5e11, and ±1e12)

4 Conclusion

In this research, the dielectrically modulated vertical tunnel FET is investigated and compared, and it is shown that the vertical biosensor exhibits a considerable increase in sensitivity. The threshold voltage shift is evaluated together with the pocket length and thickness,


and it is shown that for a larger variation in the threshold shift, a greater pocket length and thickness are recommended. The dimensions of the introduced n+ pocket also play an important role in the device: varying the thickness and length of the pocket results in a greater measured sensitivity. Along with this, the effect of varying the gate oxide is shown, and HfO2 is found to be the most sensitive among the materials considered. Cavity lengths for positive and negative charge biomolecules have been discussed to evaluate the optimization of their electrical parameters. Additionally, we find that the device is less influenced by positive charge biomolecules.

Acknowledgements We would like to thank Dr. Balwinder Raj, NIT Jalandhar, for providing the lab facilities and research environment to carry out this work.

References 1. Singh S, Sharma A, Kumar V, Umar P, Rao AK, Singh AK (2021) Investigation of N+ SiGe juntionless vertical TFET with gate stack for gas sensing application. Appl Phys A 127(9):726 2. Choi WY, Park BG, Lee JD, Liu TJK (2007) Tunneling field-effect transistors (TFETs) with subthreshold swing (SS) less than 60 mV/dec. IEEE Electron Devices Lett 28(8):743–745 3. Jiang Z, Zhuang Y, Li C, Wang P, Liu Y (2016) Vertical-dual-source tunnel FETs with steeper subthreshold swing. J Semicond 37(9):094003 4. Brocard S, Pala MG, Esseni D (2013) Design options for hetero-junction tunnel FETs with high on current and steep sub-threshold voltage slope. In: 2013 IEEE international electron devices meeting. IEEE, pp. 4–5 5. Wang PY, Tsui BY (2015) Band engineering to improve average subthreshold swing by suppressing low electric field band-to-band tunneling with epitaxial tunnel layer tunnel FET structure. IEEE Trans Nanotechnol 15(1):74–79 6. Dubey PK, Kaushik BK (2017) T-shaped III-V heterojunction tunneling field-effect transistor. IEEE Trans Electron Devices 64(8):3120–3125 7. Chen F, Ilatikhameneh H, Tan Y, Klimeck G, Rahman R (2018) Switching mechanism and the scalability of vertical-TFETs. IEEE Trans Electron Devices 65(7):3065–3068 8. Singh S, Raj B (2020) Modeling and simulation analysis of SiGe heterojunction double gate vertical t-shaped tunnel FET. Superlattices Microstruct 142:106496 9. Kumar S, Raj B (2015) Compact channel potential analytical modeling of DG-TFET based on evanescent-mode approach. J Comput Electron 14:820–827 10. Singh S, Raj B (2020) Analytical modeling and simulation analysis of T-shaped III-V heterojunction vertical T-FET. Superlattices Microstruct 147:106717 11. Khatami Y, Banerjee K (2009) Steep subthreshold slope n-and p-type tunnel-FET devices for low-power and energy-efficient digital circuits. IEEE Trans Electron Devices 56(11):2752– 2761 12. Lee H, Park JD, Shin C (2016) Study of random variation in germanium-source vertical tunnel FET. IEEE Trans Electron Devices 63(5):1827–1834 13. Singh S, Raj B, Raj B (2021) Vertical T-shaped heterojunction tunnel field-effect transistor for low power security systems. In: Nanoelectronic devices for hardware and software security. CRC Press, pp 61–83 14. Bhuwalka KK, Schulze J, Eisele I (2005) Scaling the vertical tunnel FET with tunnel bandgap modulation and gate work function engineering. IEEE Trans Electron Devices 52(5):909–917


15. Kamata Y, Kamimuta Y, Ino T, Nishiyama A (2005) Direct comparison of ZrO2 and HfO2 on Ge substrate in terms of the realization of ultrathin high-κ gate stacks. Jpn J Appl Phys 44(4S):2323 16. Bhuwalka KK, Sedlmaier S, Ludsteck AK, Tolksdorf C, Schulze J, Eisele I (2004) Vertical tunnel field-effect transistor. IEEE Trans Electron Devices 51(2):279–282 17. Wang W, Wang PF, Zhang CM, Lin X, Liu XY, Sun QQ, Zhou P, Zhang DW (2014) Design of U-shape channel tunnel FETs with SiGe source regions. IEEE Trans Electron Devices 61(1):193–197 18. Sentaurus User’s Manual, Synopsys 09 (2017) 19. ATLAS Device Simulation Software (2016) Silvaco Int Santa Clara 20. Nigam K, Kondekar P, Sharma D (2016) Approach for ambipolar behaviour suppression in tunnel FET by workfunction engineering. Micro Nano Lett 11(8):460–464 21. Sant S, Schenk A (2015) Band-offset engineering for GeSn-SiGeSn hetero tunnel FETs and the role of strain. IEEE J Electron Devices Soc 3(3):164–175 22. Kumar S, Raj B (2016) Handbook of research on computational simulation and modeling in engineering. IGI Glob 640 23. Singh S, Raj B (2020) Two-dimensional analytical modeling of the surface potential and drain current of a double-gate vertical t-shaped tunnel field-effect transistor. J Comput Electron 19(3):1154–1163 24. Singh S, Raj B (2019) Design and analysis of a heterojunction vertical t-shaped tunnel field effect transistor. J Electron Mater 48:6253–6260

Information Field Experimental Test in the Human Realm: An Approach Using Faraday Shielding, Physical Distance and Autonomic Balance Multiple Measurements

Erico Azevedo, José Pissolato Filho, and Wanderley Luiz Tavares

Abstract This paper presents the results of original research on information field experimental tests in the human realm, i.e., an empirical test of a nonlocal communication phenomenon with human beings. Faraday shielding and a 300-meter distance were used in an experiment with multiple simultaneous measurements to capture Autonomic Nervous System (ANS) balance reactions: Pulse Rate (PR), Oxygen Saturation (SpO2), R–R intervals, and GDV (Gas Discharge Visualization) variables. The results confirmed with p < 0.0001 that the subjects' autonomic balance reacts differently in the Experimental and Control groups, in spite of the subjects being inside a Faraday cage and at a large distance from their experimental peers. The authors explore a possible physical explanation using the de Broglie pilot-wave approach and discuss how to reconcile the subatomic and macroscopic levels.

Keywords Biophysics · Biocommunication · Pilot-waves · Nonlocality

1 Introduction

Can human beings, and more generally speaking, all living things, communicate at large distances, beyond the five senses and electrochemical schemes? This question has been in the minds of great scientists since ancient times; even Democritus [1] and Aristotle [2] gave attention to it, curiously, while trying to understand the origin of our night dreams. Scientifically speaking, communication is considered to be the result of our evolutionary process. According to a strict Darwinian perspective, phenomena related with



distant communications, subtle perceptions, intuitions, the sensation of being stared at, and others have been theorized as part of this evolutionary process [3], skills that once enhanced our survival chances but became untrustworthy and even misleading in modern times [4]. In biophotonics, some researchers showed that communication between cells might be explained with electromagnetic wave schemes [5], but later, other researchers started to ask how such communication could be so precise over significant distances and across barriers, i.e., considering noise and decay. This lit up the suspicion that this kind of communication could be quantum-based [6–9]. In the human realm, several experiments have been conducted, such as Honorton and Harper's Ganzfeld [10], Vassiliev's distant mental influence [11], and also distant healing [12], distant joint attention [13], and potential transfer between human brains [14], as well as biophysical experiments in which Bandyopadhyay and others [15] identified massive quantum vibrations in the microtubules of neuron cells, suggesting that the current consciousness paradigm might be explained as a quantum phenomenon. Indeed, for decades and in several approaches, psychologists have reported the free formation of subtle images and perceptions in clinical settings, especially while clients were telling their dreams. Last, but not least, physicists around the globe have strongly debated and extensively tested quantum entanglement and teleportation: a quantum state "leaps" from one energetic context to another [16]. Nevertheless, for a traditional positivist, it is still hard to accept even the hypothesis of non-local phenomena with human beings [17]. This paper presents the results of an experimental design to test the so-called "spooky action at a distance" with human beings, using dreams as a piece of information through which to establish a distant connection between two human beings who do not know each other [18]. The use of dreams is supported by previous research [19] in which physiological signals, registered in hundreds of psychotherapy sessions, were analyzed, confirming that dreams cause strong ANS reactions both in the dreamer (client) and in the listener (psychologist), with unexpected correlation levels, as shown in Table 1. In spite of a series of significant statistical results [19], a clinical session is definitely not the ideal experimental environment. Therefore, an experiment was carefully designed imposing physical distance, electromagnetic shielding, and multiple simultaneous measurements to test the phenomenon in a controlled way.

Table 1 Physiological signal correlation between clients and psychologists in two different normalized moments: regular dialog and client telling his/her dream

Signal standard deviation      Correlation
                               Regular dialog (%)    Dream telling (%)
Pulse rate                     −2                    −82
SpO2                           68                    −95
GDV energy (Joules)            −59                   95


Table 2 ANOVA with moment as a factor of analysis for experimental and control groups

Variable (Z-score)    Control              Experimental
                      F         p-value    F          p-value
Area inside           6.657     0.001      86.084     0.000
Intensity inside      53.775    0.000      191.829    0.000
Energy inside         88.961    0.000      403.709    0.000
Area outside          36.907    0.000      19.052     0.000
Intensity outside     14.449    0.000      184.152    0.000
Energy outside        73.831    0.000      242.141    0.000
SpO2 inside           9.450     0.000      2.778      0.062
SpO2 outside          71.417    0.000      232.673    0.000
PR inside             3.162     0.042      40.335     0.000
PR outside            34.165    0.000      13.520     0.000
R–R inside            4.792     0.008      22.704     0.000
R–R outside           12.418    0.000      6.964      0.001

Table 3 ANOVA results with the group as a factor of analysis for each moment

Variable (Z-score)    Before               Reading              After
                      F        p-value     F        p-value     F        p-value
Area inside           5.092    0.024       0.093    0.760       5.711    0.017
Area outside          11.811   0.001       9.137    0.003       0.491    0.484
Intensity inside      0.165    0.685       0.045    0.833       0.117    0.732
Intensity outside     9.290    0.002       9.913    0.002       29.226   0.000
Energy inside         1.888    0.170       0.194    0.659       1.954    0.162
Energy outside        0.383    0.536       30.902   0.000       14.848   0.000
SpO2 inside           2.810    0.094       0.786    0.375       5.501    0.019
SpO2 outside          44.460   0.000       13.653   0.000       99.313   0.000
PR inside             14.818   0.000       17.309   0.000       0.127    0.721
PR outside             6.429   0.011       20.908   0.000       4.992    0.025
R–R inside             5.696   0.017       3.568    0.059       0.302    0.583
R–R outside             7.211  0.007       26.534   0.000       5.315    0.021

Authors describe their hypothesis as a phenomenon of information transduction without displacement of energy and, as this study will show, there is empirical evidence in favor of it. In the sections dedicated to Discussion and Conclusions, a possible physical explanation and also the limits and boundaries of conclusions will be explored.


2 Methods and Instruments

2.1 Experimental Design

The experiment was performed with 50 couples, twice per couple, with an average total duration of 1 h per couple. During measurements, Subject 1 sat alone in the meeting room of Lab 1 (LAT), while Subject 2 sat alone inside an ETS Lindgren Series 81 F cage, electromagnetically isolated, in Lab 2 (LSERF), in a different building of the Faculty of Computer Science and Electrical Engineering (FEEC) at UNICAMP, 250 m away in a straight line, as shown in Fig. 1. For the purpose of a rigorous experimental design, three elements of Aspect's work [20] were considered by analogy: (1) eliminating hidden-variable possibilities, i.e., the ability of the emitter to give instructions to the receptor; (2) a random time of interference between emitter and receptor; and (3) the use of a piece of information representing the present state of a human being. Previous evidence [19] and the literature [20] supported the use of the subjects' night dreams for testing this hypothesis. Pros

Fig. 1 Distance between experiment laboratories


and cons of this choice, as well as other options, are discussed in the Future Work section. All subjects signed a free and informed consent form to participate in the research, which had previously been approved by the University's ethics committee for research with human beings. Couples were chosen among subjects who typically lived in different cities and who did not know each other before the experiment. Of the 100 subjects, 40 were women and 60 were men, with an average age of 34, a minimum age of 18, and a maximum age of 65 years. Subjects were instructed to write down on a piece of paper a recent dream, preferably from the night before, in order to guarantee its unprecedented character; no other information could be written. Subject 1 received instructions to read, at a random time between the 5th and 15th minute after the experiment had started, a folded paper that could contain either Subject 2's dream or an excerpt from a computer manual. In the other lab, Subject 2 knew that Subject 1 could read either his/her dream or the computer manual excerpt, and sat silently inside the ETS Lindgren Series 81 F cage. At the time freely chosen by Subject 1, a silent and mindful reading was performed, and the initial and final times of the reading were registered. After the first measurement, the subjects inverted their roles and a second measurement was performed. With this simple design, the authors expected to test for statistical significance in favor of the hypothesis of information transduction without energy displacement. Other designs will be proposed in the Future Work section to face the challenges already identified.

2.2 Instruments

Pulse rate (PR), blood oxygen saturation (SpO2), R–R intervals, and EPI/GDV variables—Area, Intensity, and Energy (Joules)—were measured simultaneously for both subjects, as shown in Fig. 2, using, respectively, IMFtec-H pulse oximeters, Polar V800 sport watches, and Bio-Well EPI/GDV equipment connected to laptop computers in offline mode, with synchronized clocks in both laboratories. Pulse oximetry is a noninvasive and inexpensive method for monitoring a person's peripheral oxygen saturation (SpO2). The method is valuable for clinical use and research applications. In its most common format, pulse oximeters have a sensor placed typically on a fingertip, through which two wavelengths of light are emitted across the tissue to a photodetector. The measurements indicate the changing absorbance for each wavelength, making it possible to determine the absorbance due to the pulsing arterial blood alone, excluding venous blood, skin, bone, muscle, fat, and nail polish. IMFtec-H digital


Fig. 2 Experimental design with Faraday cage, physical distance and measurement equipment

pulse oximeters were used to achieve continuous monitoring and storage of the blood oxygen saturation (SpO2) and pulse rate (PR) of the participants, at one sample per second; they have been validated for use in research (see footnote 1). Regarding heart rate variability (HRV), i.e., the variation of R–R intervals, these are widely accepted indicators of ANS balance alterations. Polar V800 watches were chosen because a series of peer-reviewed papers confirmed them as a good instrument for research purposes involving this type of data recording (see footnote 2). Subjects had to wear a chest strap, and the R–R recording function was then selected and started. After each experiment, the recording had to be fully stopped, and after a series of recordings the data were downloaded and labeled for further analysis. The EPI/GDV technique provides an indirect measurement of energy resources at the level of the molecular functioning of complex proteins in cells, reflecting the ANS activity and the balance of the parasympathetic and sympathetic systems [21, 22]. It consists of applying an electrical discharge, with a maximum amplitude of around 1 kV and a pulse duration of less than 1 ms, which stimulates the emission of photons from the skin, and using a digital camera to capture this discharge of light. This phenomenon occurs mainly by transport of π-electrons via quantum tunneling. In accordance with the manufacturer's manual (see footnote 3), data were collected using the titanium cylinder camera adapter: a device that allows connecting a cable with plugs to a cardioclip bracelet. The cardioclip has a metal surface on one side, to ensure its contact with the subject's skin, and a jack on the other side, to connect the cable to the device. The quality of the measurements could be checked at the beginning of each experiment through the first pictures displayed on the computer screen to which the device was attached. The Bio-Well device sample rate is one measurement, or photo, every 5 s, which proved to be sufficient for analysis.

1 https://link.springer.com/article/10.1186/s13063-016-1553-4, last consulted on 11/06/2022.
2 https://link.springer.com/search?query=v800&searchType=publisherSearch, last consulted on 11/06/2022.
3 https://www.academia.edu/32732312/Bio-Well_Manual_2017, last consulted on 11/06/2022.


Bio-Well software includes a series of data processing algorithms that work on the original images, obtained with a digital camera, in which each pixel intensity is represented by one byte. The final GDV output parameters are obtained as follows: (a) Area: this variable represents the amount of light quanta generated by a subject, which is directly related to the quantity of electrons in the avalanches ionizing the air gaps; more electrons, in principle, indicate a higher metabolic rate in a subject; (b) Intensity: the glow intensity, in theory, contains an evaluation of the spectral content of an image, providing information on the level of quantum activity of a subject; and (c) Energy of light, expressed in Joules, which is evaluated according to a series of principles, starting with the sensitivity of the CCD (charge-coupled device) element, given by the following formula:

$$\frac{1}{S} = \frac{W}{I} = \frac{E}{s \times I} = \frac{4 \times P \times t \times T}{\pi \times d^{2} \times I} \qquad (1)$$

where W is the relative energy of the light source [Joules/cm²]; I is the amplitude of the signal; E is the energy of the light source, in Joules; s is the illuminated surface of the CCD element in cm²; P is the power of the light source in W; t is the exposure time in seconds; T is the filter coefficient; and d is the diameter of the illuminated area of the CCD element in cm, so that s = π × d²/4 and W = E/s = P × t × T/s. Applying the proper values provided by the manufacturer to (1), we arrive at the following result:

$$E(\mathrm{J}) = s \times I \times 4 \times 10^{-8} \qquad (2)$$
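A minimal sketch of this conversion is shown below; it assumes that the Area parameter reported by the software plays the role of s and the Intensity parameter the role of I in Eq. (2), and the input values are hypothetical (this is not vendor code).

```cpp
// Relative GDV energy in Joules from glow area (s) and intensity (I), Eq. (2).
#include <cstdio>

double gdvEnergyJoules(double area, double intensity) {
  return area * intensity * 4.0e-8;  // E(J) = s * I * 4e-8
}

int main() {
  double area = 9500.0;     // hypothetical glow area value
  double intensity = 95.0;  // hypothetical average intensity value
  std::printf("E = %.3e J\n", gdvEnergyJoules(area, intensity));
  return 0;
}
```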

Using (2), Bio-Well EPI/GDV provides an indirect measurement of biophotons based on electrical stimulation of the skin with a high-frequency impulse, generating a cloud of electrons, or "corona effect", whose image is captured with a digital camera. Last but not least, an ETS Lindgren Series 81 RF cage was used to isolate the subjects electromagnetically during the experiment. According to the manufacturer (see footnote 4), this model provides excellent RFI and EMI shielding effectiveness and high-performance attenuation over a broad frequency range, up to 40 GHz. The importance of using a world-class Faraday cage lies in the possibility of largely ruling out the hypothesis that the observed phenomenon is of an electromagnetic nature. Faraday shielding enables us to state, with high probability and on an empirical basis, whether a new phenomenon is present or not.

4 http://www.ets-lindgren.com/products/shielding/rf-shielded-enclosures/11003/1100302, last consulted on 11/06/2022.


2.3 Data Normalization

Before the statistical analysis, a series of steps was necessary; in particular, three pre-analysis procedures were essential: (1) timestamping the data; (2) labeling the data according to the relevant moments of the experiment: Before, Dream, Control, and After; and (3) adding data columns with the Z-scored variables. Regarding the procedure to label data according to the moments of the experiment, if a certain subject took T = 30 s to read a dream, the total duration of the sample was considered to be 90 s (3T): the first 30 s were labeled "Before" (B), the next 30 s "Reading" (R), and the last 30 s "After" (A). On the other hand, the R–R interval files generated by the Polar V800 are not exported as a sequence of standard timestamps but as a cumulative sequence of durations of each R–R interval in milliseconds. For that reason, raw data were exported from the Polar V800 software and imported into Kubios HRV Standard 3.3.1 (see footnote 5), a free heart rate variability analysis package, which produces outputs both in the time and in the frequency domain, such as SD1, SD2, RMSSD, peak frequencies, Stress Index, LF/HF ratio, and other HRV parameters; these were exported and also summarized in graphical form for further analysis. Heart rate variability (HRV) analysis is not only a powerful tool to assess the functioning of cardiac autonomic regulation; it is also an indirect tool to evaluate the functioning and balance of the autonomic nervous system (ANS). It describes the variations between consecutive R–R intervals, also called inter-beat intervals, in whose regulation the sympathetic and parasympathetic branches of the ANS are involved. While sympathetic nervous system (SNS) activity increases HR and decreases HRV, parasympathetic nervous system (PNS) activity decreases HR and increases HRV [23].
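The moment labeling and z-scoring described above can be sketched as follows; the timestamps, the 30 s window, and the sample values are illustrative only and do not come from the experimental data.

```cpp
// Sketch of the pre-analysis step: cut a sample series into the three
// moments (Before / Reading / After) around a registered reading interval
// of duration T, then z-score the values.
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

struct Sample { double t; double value; std::string moment; };

void labelMoments(std::vector<Sample>& s, double readStart, double T) {
  for (auto& x : s) {
    if (x.t < readStart)          x.moment = "Before";
    else if (x.t < readStart + T) x.moment = "Reading";
    else                          x.moment = "After";
  }
}

void zScore(std::vector<Sample>& s) {
  double m = 0.0, sd = 0.0;
  for (const auto& x : s) m += x.value;
  m /= s.size();
  for (const auto& x : s) sd += (x.value - m) * (x.value - m);
  sd = std::sqrt(sd / (s.size() - 1));
  for (auto& x : s) x.value = (x.value - m) / sd;
}

int main() {
  // One pulse-rate sample per second around a 30 s reading starting at t = 300 s.
  std::vector<Sample> series;
  for (int t = 270; t < 360; ++t)
    series.push_back({double(t), 70.0 + (t % 7), ""});
  labelMoments(series, 300.0, 30.0);
  zScore(series);
  std::printf("%zu samples labeled and z-scored\n", series.size());
  return 0;
}
```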

2.4 Statistical Analysis In total, data from 100 measurements were analyzed, 68 from the Experimental group and 32 from the Control group. For all variables, Z-scores and Coefficients of Variation were calculated using, respectively, the IBM SPSS statistical package and Excel. This procedure made it possible to compare different subjects through Spearman, ANOVA and Scheffé tests, and also in graphical format, a strategy that proved very useful since a direct comparison of raw values might be, in some cases, unfruitful. Regarding the GDV variables, for statistical purposes it is important to mention that the Energy (Joules) variable considers both area and intensity in its calculation.
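The two descriptive quantities used repeatedly in the following sections, the Coefficient of Variation and the Mean Squared Error between two CV curves, can be sketched in a few lines of Python; the toy data below are assumptions used only to show the calculations, not values from the experiment.

```python
import numpy as np

def coefficient_of_variation(x) -> float:
    """CV (%) = 100 * sample SD / mean, used to compare subjects on a common scale."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def mse_between_curves(cv_a, cv_b) -> float:
    """Mean Squared Error between two aligned CV curves (e.g. Experimental vs Control)."""
    cv_a, cv_b = np.asarray(cv_a, float), np.asarray(cv_b, float)
    return float(np.mean((cv_a - cv_b) ** 2))

# Toy example: pulse-rate CV curves of two groups sampled at the same instants
experimental_cv = np.array([2.1, 2.3, 3.9, 4.2, 4.0])
control_cv = np.array([2.0, 2.2, 2.4, 2.3, 2.2])
print(coefficient_of_variation(experimental_cv), mse_between_curves(experimental_cv, control_cv))
```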

5 https://www.kubios.com/hrv-standard/, last consulted on 11/06/2021.


The following questions guided the statistical analysis:
1. Are there significant differences when comparing signals for subjects inside or outside the Faraday cage, considering the moment as a factor: Before, Reading and After?
2. Do subjects outside the Faraday cage differ during the moment of Reading, when Experimental and Control groups are compared?
3. Do subjects inside the Faraday cage differ during the moment of Reading, comparing Experimental versus Control groups?
4. Do different instruments confirm or contradict each other?
In addition to statistical analysis, a series of graphs will be presented in order to represent signal behavior and the main transitions between the moments of the experiment, making the statistical results more intuitively comprehensible.
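As a hedged illustration only (the analyses reported here were actually run in IBM SPSS), the sketch below shows how two of these comparisons could be reproduced with SciPy: a one-way ANOVA with the experiment moment as factor, and a two-tailed t-test between groups for a single moment. The column names and the synthetic data are assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic z-scored pulse-rate data: 3 moments x 2 groups, purely illustrative
df = pd.DataFrame({
    "moment": np.repeat(["Before", "Reading", "After"], 40),
    "group": np.tile(np.repeat(["Experimental", "Control"], 20), 3),
    "z_pr": rng.normal(0, 1, 120),
})

# (1) One-way ANOVA with the experiment moment as factor, within one group
exp = df[df["group"] == "Experimental"]
samples = [g["z_pr"].to_numpy() for _, g in exp.groupby("moment")]
f_stat, p_moment = stats.f_oneway(*samples)

# (2) Two-tailed t-test, Experimental vs Control, for the Reading moment only
reading = df[df["moment"] == "Reading"]
t_stat, p_group = stats.ttest_ind(
    reading.loc[reading["group"] == "Experimental", "z_pr"],
    reading.loc[reading["group"] == "Control", "z_pr"],
)
print(f"ANOVA (moment): F={f_stat:.2f}, p={p_moment:.3f}; "
      f"t-test (group, Reading): t={t_stat:.2f}, p={p_group:.3f}")
```

Scheffé-style multiple comparisons are not part of SciPy itself; they could be obtained, for example, from a third-party package such as scikit-posthocs or computed by hand from the ANOVA mean squares.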

3 Statistical Results Three statistical approaches were combined. The first was an ANOVA test that considered the "moment" as the factor of analysis and answered the question: "Are there significant statistical differences between the moments Before, Reading and After when we look separately at the Experimental and Control groups?" The answer to this first question was "yes" for all z-scored variables except the SpO2 of subjects outside the Faraday cage in the Experimental group. This result, however, did not give us a concrete clue about what was going on. Based on it, it was possible to affirm with statistical relevance that, for both Experimental and Control groups, all z-scored variables show significant average differences between the moments of the experiment, but nothing could be said about the quality of these changes or how they differ between groups. In order to answer these questions, the second approach was to perform multiple-comparison Scheffé tests for the z-scored variables; using this approach, it was possible to verify some coincidences, namely that whatever changed in the transition from the moment "Before" to the moment "Reading" typically persisted after the moment of "Reading", through the moment "After" [18]. The third approach was another ANOVA test, this time with the Group as the factor of analysis, in order to answer the question: "Are there significant statistical differences between Experimental and Control groups when we look separately at each moment, Before, Reading and After?" This third approach brought into evidence two important facts: (1) Experimental and Control groups are indeed significantly different for all z-scored variables when we compare subjects outside the Faraday cage during the moment of "Reading", which clearly demonstrates the powerful effect of dreams on ANS balance; and (2) the groups are also significantly different for subjects inside the Faraday cage during the moment of Reading for two z-scored variables: PR inside (p = 0.0001) and R–R inside (p = 0.059). Last but not least, T-tests with z-scored variables were also performed, as shown in Table 4, confirming significant differences between Experimental and Control groups


when considering subjects inside the Faraday cage in the "Reading" moment, the most important in experimental terms. When seen as a whole, the statistical results confirmed two important facts: (1) subjects outside the Faraday cage presented significant variation in their physiological signals during the moment in which they read a dream, indicating a powerful effect of dreams on ANS balance; and (2) subjects inside the Faraday cage also presented significant variation in physiological signals distinctively during the moment in which their dreams were being read, indicating that somehow their ANS balance was affected, in spite of distance and electromagnetic barriers up to 40 GHz. The first result was already expected, since many scientists have shown relationships between dreams and changes in ANS balance. The second result, however, brings two effective novelties. In the first place, it testifies in favor of some kind of non-local phenomenon, i.e., in favor of the information transduction without energy displacement hypothesis, since distance and Faraday shielding practically eliminated electromagnetic or conventional schemes of communication between subjects. Secondly, and maybe as important as the first consequence, the use of dreams establishes a new empirical milestone, which enables the implementation of rigorous experiments, since dreams are a matter of interest in many scientific fields.

Table 4 T-Test (two-tailed) results considering the group as a factor for each moment with z-scored variables (Experimental × Control groups)

Variable          | Moment Before: t (Sig.) | Moment Reading: t (Sig.) | Moment After: t (Sig.)
Area inside       | 203,966 (0,000)         | 185,527 (0,000)          | 172,994 (0,000)
Area outside      | 189,604 (0,000)         | 182,937 (0,000)          | 178,258 (0,000)
Intensity inside  | 273,434 (0,000)         | 265,021 (0,000)          | 245,716 (0,000)
Intensity outside | 213,201 (0,000)         | 212,375 (0,000)          | 209,920 (0,000)
Energy inside     | 141,929 (0,000)         | 126,289 (0,000)          | 111,140 (0,000)
Energy outside    | 116,743 (0,000)         | 112,922 (0,000)          | 114,505 (0,000)
SpO2 inside       | 7070,272 (0,000)        | 7160,125 (0,000)         | 6948,045 (0,000)
SpO2 outside      | 6544,499 (0,000)        | 1511,918 (0,000)         | 1449,343 (0,000)
PR inside         | 574,959 (0,000)         | 582,673 (0,000)          | 587,381 (0,000)
PR outside        | 554,649 (0,000)         | 518,018 (0,000)          | 514,434 (0,000)
R–R inside        | 378,341 (0,000)         | 407,123 (0,000)          | 384,319 (0,000)
R–R outside       | 415,129 (0,000)         | 403,197 (0,000)          | 190,898 (0,000)


4 Qualitative Analysis Before moving on to the Discussion section, a set of graphs is presented comparing the Coefficients of Variation (CV) and the Mean Squared Error (MSE) between groups and subjects, in order to help understand the dynamic behavior of the physiological signals during the experiments. Pulse Rate As shown in Figs. 3 and 4, the Pulse Rate CV curves behave in a quite interesting manner for both groups. Precisely after the Before → Reading transition, indicated by a "zero", the curves for subjects in the Experimental group present steady slopes in opposite directions. In the Control group, the curves start to move apart before the "zero", but by a significantly larger proportion, which becomes clear in the Mean Squared Error of the Pulse Rate CV curves. Nonetheless, the evolution of each pair of MSE curves leaves no doubt that they are distinct. Blood Oxygen Saturation (SpO2) Regarding blood oxygen saturation (SpO2), the groups also present clearly distinct behavior, as shown in Fig. 5. The SpO2 CV curves for the Experimental group show a growth of around 50%, for subjects both inside and outside the Faraday cage, concomitantly with the Before → Reading transition. This significant growth in the Coefficients of Variation clearly expresses the impact of dreams on ANS balance. The same curves, however, remain almost "flat" for the Control group. At first glance, the fact that each group of subjects, Control or Experimental, behaves consistently in a similar way in the SpO2 Coefficient of Variation curves is noteworthy, regardless of whether subjects are inside or outside the Faraday cage, as if they had a similar breathing or emotional experience, although it is also quite clear that the

Fig. 3 Pulse rate average Coefficient of Variation (CV) showing the B → R transition


Fig. 4 Mean Squared Error (MSE) of Pulse Rate Coefficients of Variation (CV) between groups showing the B → R transition

Fig. 5 SpO2 Average CV showing the B → R transition for subjects in Control and Experimental groups

general shape of the SpO2 CV curve for each group oscillates around a different baseline, around 0.40% for subjects in the Control group and around 0.65% for subjects in the Experimental group, with no superposition, regardless of the experimental moment, Before, Reading or After. GDV Kirlian Variables The GDV variables also confirmed a significant change in the behavior pattern during the Before → Reading transition, indicated by a "zero" in Figs. 6 and 7, which present


the Coefficient of Variation curves for the GDV variables measured, respectively, for subjects outside and inside the Faraday cage in the Experimental group. It becomes evident that the baselines for the Energy levels are significantly different. The GDV-Kirlian technique has been used here as a complement to other well-known ANS balance observation techniques, such as HRV analysis, but its results leave no doubt about the different levels of metabolic activity connected to each moment of the experiment, not only confirming data from other instruments but also revealing the great sensitivity of this kind of instrument. While in Fig. 6 the Energy CV leaps from a relatively stable 20% level to around 30% in the first 30 s of the Before → Reading transition, in a convex shape, in Fig. 7 the Energy CV

Fig. 6 GDV CV curves showing the B → R transition for subjects outside the Faraday cage in the experimental group

Fig. 7 GDV CV curves showing the B → R transition for subjects inside the Faraday cage in the experimental group


assumes a concave shape, in a slope which reaches its bottom around "zero", stands still for the first 30 s of the dream reading, and then gradually returns to its original levels. In the Control group, for subjects outside the Faraday cage, the Energy CV levels smoothly rise from 1.2% to 1.6% precisely after "zero" and then, vice versa, smoothly decrease from 1.6% back to 1.2% during the remaining period of the experiment. Either way, the behavior of the Energy CV curves is definitely distinct for the Experimental and Control groups. R–R Variability Measured with the Polar V800 and analyzed with the Kubios HRV software package, the R–R interval graphs present a series of important HRV indicators widely used to reveal alterations in ANS balance, such as RMSSD (ms), SD1/SD2 ratio, Stress Index, R–R (ms), LF/HF ratio, Total Power (ms²) and the SNS × PNS Index. Figure 8 presents RMSSD (ms) variability, showing significant differences between subjects inside the Faraday cage in the Experimental versus Control groups when compared with subjects outside the Faraday cage. During the moment of Reading, RMSSD (ms) variability more than doubles for subjects in the Experimental group inside the Faraday cage. Figure 9 presents the SD1/SD2 ratio variability results, where it is possible to see explicit differences between the groups: SD1/SD2 ratio variability oscillates within certain limits, regardless of the moment of the experiment, except for the Reading moment in the Control group, when variability values increase significantly with respect to the discrete variations observed in the Experimental group. Stress Index variability results, another important ANS balance index, are presented in Fig. 10. Subjects inside the Faraday cage once more behave differently in this regard during the Reading moment: variability almost halves in the Experimental group and increases by almost 50% in the Control group.
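For reference, two of these indicators can be computed directly from the R–R series; the sketch below uses the usual textbook definitions (RMSSD from successive differences, SD1/SD2 from the Poincaré-plot identities) and is not the Kubios implementation. The toy R–R series is an assumption for demonstration only.

```python
import numpy as np

def rmssd(rr_ms) -> float:
    """Root mean square of successive R-R differences, in ms."""
    diff = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diff ** 2)))

def sd1_sd2(rr_ms):
    """Poincare-plot descriptors: SD1 (short-term) and SD2 (long-term) variability, in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diff_var = np.diff(rr).var(ddof=1)
    sdnn2 = rr.var(ddof=1)                          # variance of all R-R intervals
    sd1 = np.sqrt(0.5 * diff_var)                   # spread perpendicular to the identity line
    sd2 = np.sqrt(max(2.0 * sdnn2 - 0.5 * diff_var, 0.0))  # spread along the identity line
    return float(sd1), float(sd2)

# Toy R-R series (ms); real data would come from the Polar V800 export
rr = np.array([812, 795, 830, 801, 845, 790, 815, 825], dtype=float)
sd1, sd2 = sd1_sd2(rr)
print(f"RMSSD = {rmssd(rr):.1f} ms, SD1/SD2 = {sd1 / sd2:.2f}")
```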

Fig. 8 RMSSD (ms) variability


Fig. 9 SD1/SD2 ratio variability

Fig. 10 Stress index variability

Another index that indicates ANS balance in HRV analysis is LF/HF ratio variability, presented in Fig. 11. Subjects inside the Faraday cage in the Experimental group increased variability impressively by 16 times in the Before → Reading transition, while subjects in the Control group had practically no change at all. For subjects outside the Faraday cage in the Experimental group, LF/HF ratio variability also increased substantially when compared to subjects in the Control group. The same behavior was confirmed with the Total Power (ms2 ) variability, which doubles for subjects inside the Faraday cage in the Reading moment.


Fig. 11 LF/HF ratio variability

Last but not least, the PNS × SNS Index variability is shown in Figs. 12 and 13. For subjects in the Control group inside the Faraday cage, the Parasympathetic Nervous System (PNS) variance, in proportion to the Sympathetic Nervous System (SNS) variance, is typically under 30% and decreases to 20% in the moment of Reading, as shown in Fig. 12. For subjects inside the Faraday cage in the Experimental group, the PNS variance proportion is typically above 40% and increases in the moment of Reading, reaching 45%, as shown in Fig. 13.

Fig. 12 PNS × SNS index variability for control group


Fig. 13 PNS × SNS index variability for experimental group

As shown in this section, different instruments confirmed that subjects' ANS balance changes significantly in the moment they read a dream but, more importantly, subjects inside the Faraday cage also presented significant changes in ANS balance while their dreams were being read by another subject, in spite of physical distance and electromagnetic barriers up to 40 GHz, supporting the possibility of the information transduction without energy displacement hypothesis in the human realm.

5 Discussion Empirical Results The statistics and graphs in the previous sections, on behalf of the information field hypothesis, indicated a clear distinction between Experimental and Control groups precisely during the Before → Reading transition, and also at the Reading → After transition, where signals changed unequivocally. Figures 3 and 4 have shown interesting aspects of the Pulse Rate and SpO2 signals, with impulse-like and mirror-like curves, although the moment in which subjects started to read the dreams was freely chosen. The same impulse-like curves repeated for the GDV variables, as shown in Fig. 6, evidence of an ANS impact while subjects outside the Faraday cage were reading dreams. For those inside the Faraday cage, Fig. 7 shows clearly that signals from subjects in the Experimental and Control groups behave in a completely different manner. HRV analysis has shown a higher RMSSD variance for subjects inside the Faraday cage when comparing Experimental and Control groups in Fig. 8, while Fig. 9 has


shown a higher variance in the SD1/SD2 ratio. In Fig. 10, the Stress Index variance halves for subjects inside the Faraday cage in the Experimental group in the "Reading" moment and recovers, in the moment "After", the same levels as in the moment "Before", while for the Control group it rises by around 50% in the Before → Reading transition and then sustains the same levels. Figure 11 presented the LF/HF variance growing 17 times between the Before and Reading moments for those inside the Faraday cage in the Experimental group, while essentially the same levels are registered during the entire experiment for subjects in the Control group. Low LF/HF ratios typically reflect parasympathetic dominance, while high LF/HF ratios indicate sympathetic dominance or parasympathetic withdrawal. Figures 12 and 13 have shown the SNS versus PNS Index variance, presenting an inverse behavior for the Experimental and Control groups considering subjects inside the Faraday cage. In the Experimental group, variance increases in the Before → Reading transition, while it decreases in the Control group. For those outside the Faraday cage, the SNS versus PNS Index behavior is quite similar. Although the statistical results confirmed unanimously that subjects outside the Faraday cage differ during the Reading moment according to the group they belong to, ANOVA tests using Group as the factor of analysis presented significant differences between groups only for Pulse Rate and R–R intervals. However, T-test (two-tailed) results considering Group as a factor for each moment, both with absolute values and with z-scored variables, confirmed that there is also a significant statistical difference in the variables for subjects inside the Faraday cage in the Experimental group during the Reading moment. ANOVA and the T-tests (two-tailed) seemed to be in contradiction, but what they were showing is easy to understand: the average levels in absolute terms were indeed very different between groups, whether inside or outside the cage, but not all z-scored variables necessarily differed across the experiment moments. Based on the set of analyses presented, ANS balance is unequivocally affected for subjects outside the Faraday cage during the moment in which they read a dream, working as "Emitters". This result might be of interest to a series of research areas, from Medicine to Psychology, from Psychoanalysis to Neuroscience. For those inside, working as "Receivers", ANS balance is also affected while their dreams are being read by subjects 250 m away, at a freely chosen moment, in spite of electromagnetic shielding up to 40 GHz. Information transduction without energy displacement in the human realm, therefore, seems to be possible, but many physical and methodological issues still demand further explanation. Methodological Issues Among the questions regarding methodological issues is "why dreams?". Dreams seem to be a universal language, valid across cultures, ethnic groups and historical moments: from ancient Sumerian tablets to Egyptian papyri, Homer's Odyssey and Aristotle's and Plato's works. Dreams are present in Hippocrates' and Galen's medical treatises, in modern Psychology and Psychoanalysis, in Neuroscience research and so on. Dreams have also been recognized by many scientists, thinkers and inventors as a rich source of warnings, intuitions and solutions to their problems [28].


In a preliminary understanding, dreams are composed of a series of elements, such as characters, environments, activities, feelings, thoughts and so on. Being an authentic human phenomenon, they create a natural curiosity in those who read or listen to them, even if they might be considered incomprehensible, funny or bizarre. In many experiments regarding the information field class of theories, one of the toughest tasks is how to establish a connection between human beings who do not know each other, so many experiments were conducted with people who have prior bonds, such as husband and wife or mother and child [12], but also with people who have a track record of strong ESP or Psi capabilities. This was not the case here, since dreams were basically used to "force" empathy between subjects. In that sense, an important improvement of the experimental design could be using a placebo dream in the Control group instead of a technical text, i.e., a real dream from someone completely outside the experiment. The main advantage of using the technical text, however, was to establish a clear difference in terms of ANS balance effects when comparing dreams with other types of content. Still, it might be interesting in future research to use placebo dreams. Another possible improvement refers to the form in which the content is presented to the subject outside the Faraday cage. A randomized system would be ideal, allowing a double-blind design with randomized content and randomized reading time. An interesting discussion concerns the possibility of listening to dreams instead of simply reading them. Listening directly to the dreamer telling his/her own dream provides a richer sensorial experience, as voice tone, breathing rhythm, emotions and feelings are easily revealed. This approach could also enable intercontinental distances and randomized choice systems, since audio/video recordings can be created and transmitted very easily today from one lab to another, anywhere in the world. Regarding EPI/GDV Kirlian technology, Bio-Well Inc. recently launched two new devices that might enhance the quality of measurements: a metallic glove with electrodes to connect directly with the titanium cylinder, and a new version of the digital camera, capable of generating one sample per second. Last but not least, measurements such as polysomnography and salivary markers could be used to evaluate qualitatively the differences between dreams and their effects on the dreamer's ANS balance during the night, and afterwards to compare those effects on other subjects listening to or reading them. Theoretical Issues The underlying argument of this paper regards the concept of reality and the role of information in its constitution. Besides, it questions whether science should deny a series of phenomena involving human beings under the justification that they do not "fit" well in our theoretical model of reality. Outside the boundaries of Physics, researchers have referred to the possibility of information transduction without energy displacement in the life realm by many names: morphogenetic fields [3], zero point field [11], semantic fields [24] and many others.


However, in physical and mathematical terms, it seems more adequate to call them "information fields", which in quantum theory can be explained using the pilot-wave concept from the de Broglie–Bohm theory, in which waves are represented by functions that propagate in space–time, do not carry energy or momentum and are not necessarily associated with any particle: information is transferred, but there is no transfer of energy [25]. Hardy [25] and Bell [29] have also emphasized that the same concept was called "phantom waves" or "phantom fields" by Albert Einstein. Looking back in time, David Bohm's contributions to Physics have to do with the notions of locality and nonlocality, which are closely connected to his idea of "wholeness" [37]. He was quite innovative in breaking through the notion of locality, which had been deeply rooted in physical thinking: the idea that things interact by "touching" each other, or by sending a signal, a field, traveling particles, some kind of physical action connecting things, had been like a dogma in physics for a long time. In 1935, Einstein and others proposed a challenge to quantum physics through an experiment in which two things that had interacted and then been separated behave in such a way that, if a measurement is made on one of them, it influences the results on the other. If something so strange is predicted by quantum physics, which he called "spooky action at a distance", then quantum physics must be wrong or incomplete. Einstein was himself one of the founders of quantum physics, but he was convinced that reality was much more solid than the fluid-like nature proposed by quantum physics. This challenge was initially quite troublesome to quantum physics, but the prediction was later confirmed experimentally. What David Bohm proposed was to study what lies beneath quantum physics, discovering so-called hidden variables influencing quantum behavior and showing it was possible to describe the same results of quantum physics through something happening at a deeper level, which would eventually mature in Bohm's thinking into the notion of implicate order. Although he did not develop this idea mathematically, it pointed to another form of order with a deep sense of wholeness, evolving into what Bohm called the quantum potential: a field that shares information across the whole cosmos instantaneously. In short, Bohm's work brought back to physics a deep sense of wholeness, which was not accepted by mainstream physics but has now been increasingly reconsidered by the physics community. Einstein's "spooky action at a distance" experiment became simply a manifestation of what we now call entanglement, which works as a capacity for sharing a common identity at a distance. Once dismissed, it is now standard, accepted physics, and it has even become the main property used to build quantum computers. According to the Bohmian vision, we are immersed in this information field, which spreads throughout the whole universe; we are entangled with everything else, and the only reason we are not aware of it, or not consciously aware of it, is that the entanglement at that level is infinitely complex to observe. In order to do so, we would have to create a very sophisticated and isolated environment. As this is not the case, we are part of an infinite entangled field, and this notion, which Bohm pioneered, is now resurfacing in various forms inside and outside the boundaries of Physics.


Although there are many objections to the pilot-wave theory, there are also many reasons why it is extremely interesting. In the first place, it preserves a realist ontology, i.e., particles possess determinate values of space–time location and momentum, which they continue to possess between acts of observation, rather than acquiring them only as a consequence of being observed. Another important aspect of the pilot-wave hypothesis is that its results are in perfect accordance with those obtained in standard Quantum Mechanics by means of the Schrödinger-derived wave probability function, instead of appealing to mysterious ideas such as the wave packet collapse, brought about by observer intervention or only at the instant of observation, as in Schrödinger's dead-and-alive cat parable. Pilot-wave theory also explains certain quantum effects without recourse to an expanded ontology of parallel worlds, shadow universes, multiple intersecting realities and so on. Last, but not least, it respects determinism. Presenting here a full defense of the pilot-wave theory to explain the phenomenon of information transduction in the human realm is far beyond this paper's objectives but, as far as it is possible to assume, information fields fit better with it than with any other approach to quantum physics [30]. Quantum entanglement has been proven to work not only in laboratories [20] but also at very large distances, as shown by Prof. Jian-Wei Pan and his colleagues at the University of Science and Technology of China in Shanghai, who managed to send entangled quantum particles from a satellite to ground stations separated by 1,200 km, smashing the previous world record [31], and even at macroscopic levels [32, 33]. In addition, regarding the old debate on the precedence between information and matter, Higgs and Englert confirmed that it is the convergence of fields that determines the phenomenon we call "matter", and not the opposite [34–36]. Although it is true that dealing with entanglement at the level of living beings might become infinitely complex, the authors also believe that any satisfactory explanatory model should consider the subatomic events that happen within and between living bodies and, at the same time, the integrative functions of the nervous system of biopsychical beings, including imagery formation in consciousness. Meneghetti [24] was the first to propose a comprehensive and hierarchical model in which information fields flow in a human being, considering five distinct moments: (1) the subatomic level, where polarization and vectorization take place, but no perception is possible, since it is still only a wave; (2) the concerned molecular complexes and structures are polarized; (3) as a result, emotional resonances and specific variations of feelings start, although humans typically do not recognize their true source; (4) emotions are then formalized and grab our attention, creating a distinct excitement; (5) finally, the information field can be objectively and concretely externalized. The hypothesis implicit in this flow, as in the Bohmian model, implies the preservation of a realist ontology in the human realm, i.e., although the present state of a human being might not be measured, it is condensed and represented by the imagery produced as a result of the interactions a given human being has with all the informational realities it comes across.
Dreams capture a part of this flow of information, which remains resonating in the unconscious levels, and therefore, have a powerful impact


on the ANS balance while we produce them in the first place, and also when we hear or read them. Last but not least, the millennia of scientific effort in understanding dreams could be justified by the fact that many scientists, thinkers, inventors and others believe they are an important source of information, unfortunately lost in our modern society [28]. Bringing it all back to the concrete experiment, subjects outside the Faraday cage (S1) read dreams of subjects inside the Faraday cage (S2) and, as they do so, the same information is present in S1 (Emitter) and S2 (Receiver), establishing a communication or resonance at a distance and producing strong correlations without energy propagation in the medium, a kind of form energy or intentional energy, since its variation can be perceived as the actual intentionality of a certain Emitter varies in relation to a specific Receiver: a non-local phenomenon. If so, information fields could be the first movement behind interactions between everything that is. Concepts such as "psychic activity" and "intentionality" acquire a new and stronger meaning, and should be understood with the same concreteness with which we conceive "matter", expanding our concept of reality. Acting at any distance instantaneously, information fields would explain several "extraordinary" phenomena with living beings in an ordinary manner. The existence of information fields also implies a co-creational reality, enhancing our responsibility and dignity in the universe. We may have this self-evident experience in our lives as we interact with each other, think of each other, make plans before accomplishing them, and have intuitions and dreams that change our way of thinking about "concrete" realities, including scientific decisions. Information is definitely to be blamed for being multi-form and not looking as concrete as matter, but our civilization gives proof to the contrary, as we move forward each day to become a society in which the incredible power of information is clear. Breaking news about a new virus can change the shape of our economic and political realities; a single bit of information can make a country rich and another poor; tons of products may be moved and thousands of people may change their lives on the basis of a single piece of information.

6 Conclusions Scientists have long been trying to encode and quantify information fields in our model of reality and, right now, given all the evidence already produced by scientists all over the world, it is fair to affirm that information transduction without displacement of energy is no longer a mere hypothesis, but a plausible fact. When studying what we call reality, scientists usually take it to mean something merely material, excluding anything "intentional" or "psychic". However, as shown by the present experiment and by many others, the physical principle of action and reaction seems to make no exception in the lifeworld realm, as long as we include intentionality in our causality models, enlarging the concept of reality.


The results presented herein testify on behalf of information field theory, an information transduction without energy displacement which goes beyond the five senses and is electromagnetically independent, as many others have supposed, and they also suggest there might be a principle of equivalence between energetic states and information: given a certain piece of information currently present within a biological context, we should be able to know its energetic state. Future work will improve the empirical design, including larger distances, experiments with groups, EEG and Enteric Nervous System (ENS) measurements, as well as pre- and post-measurement procedures. Nonetheless, another milestone has been established in the progress toward a unified theory of reality, one that includes intentionality in its equations. Acknowledgements The authors would like to thank the University of Campinas for making its laboratories available to carry out this research, and Rupert Sheldrake, Konstantin Korotkov, Ricardo Ghelman, Eduardo Tavares, Ricardo Monezi and Anirban Bandyopadhyay for their valuable criticism, suggestions and encouragement.

References
1. Artemidoro (2002) Il libro dei sogni. Adelphi, Milano
2. Aristotle (2003) Il sonno e i sogni. Il sonno e la veglia, I sogni, La divinazione durante il sonno (a cura di Luciana Recipi). Marsilio Editori, Venezia
3. Sheldrake R (2005) The sense of being stared at. Part 1: Is it really illusory? J Conscious Stud 12:10–31
4. Kahneman D (2002) Maps of bounded rationality: a perspective on intuitive judgement and choice. Nobel Prize Lecture
5. Han J, Chen Y (2011) Quantum: may be a new-found messenger in biological systems. Biosci Trends 5(3):89–92
6. Scholkmann F, Fels D, Cifra M (2013) Non-chemical and non-contact cell-to-cell communication: a short review. Am J Transl Res 5(6):586–593
7. Farhadi A, Forsyth C, Banan A, Shaikh M, Engen P, Fields JZ, Keshavarzian A (2010) Evidence for non-chemical, non-electrical intercellular signaling in intestinal epithelial cells. Bioelectrochemistry 51:142–150
8. Kučera O, Cifra M (2013) Cell-to-cell signaling through light: just a ghost of chance? Cell Commun Signal 11:87. https://doi.org/10.1186/1478-811X-11-87
9. Chaban VV, Cho T, Reid CB, Norris KC (2013) Physically disconnected non-diffusible cell-to-cell communication between neuroblastoma SH-SY5Y and DRG primary sensory neurons. Am J Transl Res 5(1):69–79
10. Honorton C (1974) Psi-mediated imagery and ideation in an experimental procedure for regulating perceptual input. J Am Soc Psychic Res 68:156–168
11. McTaggart L (2008) The field: the quest for the secret force of the universe. Harper
12. Radin D, Stone J, Levine E, Nejad E, Schlitz S, Kozak M, Mandel L, Hayssen G (2008) Compassionate intention as a therapeutic intervention by partners of cancer patients: effects of distant intention on the patient's autonomic nervous system. Explore 4(4):235–243
13. Sheldrake R, Beeharee A (2015) Is joint attention detectable at a distance? Three automated, internet-based tests. Explore: J Sci Healing
14. Grinberg-Zylberbaum J, Delaflor M, Attie L, Goswami A (1987) The Einstein–Podolsky–Rosen paradox in the brain: the transferred potential 7:41–53
15. Hameroff S, Penrose R (2013) Consciousness in the universe: a review of the "Orch OR" theory. Phys Life Rev. https://doi.org/10.1016/j.plrev.2013.08.002
16. Ghosh S, Sahu S, Bandyopadhyay A (2014) Consciousness in the universe: a review of the 'Orch OR' theory by Hameroff and Penrose. Phys Life Rev 11(1):83–84
17. Zeilinger A (2000) Quantum teleportation. Sci Am 34–43
18. Aczel AD (2001) Entanglement: the greatest mystery in physics. Four Walls Eight Windows, New York
19. Azevedo E (2020) Is there an information field in the life world? Empirical approach with physical distance and Faraday shielding to test a non-local communication phenomenon with human beings. Doctoral thesis in electrical engineering, Campinas. http://repositorio.unicamp.br/handle/REPOSIP/347495
20. Azevedo E, https://tede2.pucsp.br/handle/handle/20699
21. Aspect A (1983) Trois tests expérimentaux des inégalités de Bell par mesure de corrélation de polarisation de photons. Doctorate thesis in physical sciences, Orsay
22. Solms M, Blagrove M, Harnad S (2000) Dreaming and REM sleep are controlled by different brain mechanisms. In: Sleep and dreaming: scientific advances and reconsiderations, vol 23. Cambridge University Press, pp 843–850
23. Korotkov K, Williams B, Wisneski L (2004) Biophysical energy transfer mechanisms in living systems: the basis of life process. J Altern Complement Med 10:49–57
24. Bundzen PV, Korotkov KG, Unestahl LE (2002) Altered states of consciousness: review of experimental data obtained with a multiple techniques approach. J Altern Complement Med 8:153–165
25. Berntson GG, Bigger JT Jr, Eckberg DL, Grossman P, Kaufmann PG, Malik M (1997) Heart rate variability: origins, methods, and interpretive caveats. Psychophysiology 34:623–671
26. Meneghetti A (2005) Semantico. PsicoEdit, Roma
27. Selleri F, Van der Merwe A (1990) Quantum paradoxes and physical reality. Kluwer Academic Publishers
28. Einstein A, Podolsky B, Rosen N (1935) Can quantum-mechanical description of physical reality be considered complete? Phys Rev 47:777–780
29. Ribeiro S. O oráculo da noite. Cia Letras, São Paulo
30. Hardy L (1992) On the existence of empty waves in quantum theory. Phys Lett A 167(1)
31. Bell JS (1964) On the Einstein–Podolsky–Rosen paradox. Physics 1:195–200
32. Bacciagaluppi G, Valentini A (2009) Quantum theory at the crossroads: reconsidering the 1927 Solvay Conference. Cambridge University Press
33. Popkin G (2017) Spooky action achieved at record distance. Science 356:1110–1111
34. Lee KC (2011) Entangling macroscopic diamonds at room temperature. Science 334:1253–1256
35. Shih Y (1999) Multi-photon entanglement and quantum teleportation. PN
36. Higgs P (1964) Broken symmetries and the masses of gauge bosons. Phys Rev Lett 13:508–509
37. Englert F, Brout R (1964) Broken symmetry and the mass of gauge vector mesons. Phys Rev Lett 13(9):321–323
38. Bohm D, Hiley BJ (1993) The undivided universe. Routledge, London

Author Index

A Aguirre-Alvarez, Paulo Aarón, 17 Ali S. K., Rahat, 129 Álvarez González, Ricardo, 245 Arredondo-Velázquez, Moisés, 17, 229 Avalos-Sánchez, H., 289, 299 Awwab, Muhammad Sayyedul, 35 Azevedo, Erico, 475

B Bakhtin, Vladimir, 3 Baldwin, David R., 191 Bandyopadhyay, Anirban, 323, 345 Bandyopadhyay, Anjan, 415 Becerra, Raúl Herrera, 311 Brown, David J., 35, 161, 191

C Caballero, Norma A., 151 Carmona, A. J., 289, 299 Castro, Jorge Velázquez, 51 Castro, María Eugenia, 151 Cerón, Gerardo Villegas, 63

D Daniel Santana-Vargas, Ángel, 267 Dash, Banchhanidhi, 415 Debnath, Sourav, 129 de Celis Alonso, Benito, 51 Delgado, E., 385 Deriabina, Alexandra, 373, 385, 393, 405 Deryabin, Mikhail, 3

Dey, Parama, 323, 345 Diaz-Carmona, Javier, 17

E Eduardo Lugo-Arce, Jesus, 143, 267 Estrada, Marco Gustavo S., 449

F Faubert, Jocelyn, 143, 289 Fernando Caporal-Montes de Oca, Luis, 267 Filho, José Pissolato, 475

G Garnica, Carmen Cerón, 63 Gervacio-Arciniega, J. J., 299 Ghosh, Subrata, 323 Giovanni Ramírez-Chavarría, Roberto, 267 Gonzalez, Eduardo, 373, 385, 393, 405 Guzmán, Gerardo Martínez, 63

H Hajamohideen, Faizal, 93 Hameed, Nazia, 175 Harinipriya, S., 427 Hasan, Md. Mahmudul, 175 Hau, Ang Jia, 175 He, Jun, 191 Hernandez-Lopez, Javier, 229 Hernández-Méndez, E. Y., 289, 299 Hossain, Md. Mahabub, 79

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 M. Mahmud et al. (eds.), Proceedings of Trends in Electronics and Health Informatics, Lecture Notes in Networks and Systems 675, https://doi.org/10.1007/978-981-99-1916-1


Hubbard, Richard B., 191

I Islam, Nayeemul, 129 Islam, Redwanul, 129

J Jain, Ankit, 257, 463

K Kaimujjaman, Md., 79 Kasyanov, Dmitry, 3 Katiyar, Himanshu, 281 Kaur, Jaspreet, 191 Kavitha, B. R., 93 Khatun, Mst. Afroza, 79 Krishnanda, Soami Daya, 323 Kumar, Sandeep, 257

L Loranca, María Beatriz Bernábe, 63 Lucio Maya Ram, Rodrigo, 245 Lugo, J. Eduardo, 289, 299, 311

M Mahmud, Mufti, 35, 93, 161, 191 Mallik, Saurav, 107 Manakov, Sergey, 3 Mancilla, Rubén Martínez, 63 Maribel Sánchez Gálvez, Alba, 245 Melendez, Francisco J., 151 Merino, Gabriel, 151 Misaghian, Khashayar, 267, 289, 299, 311 Mishra, Amit Kumar, 107 Mizan, Mayesha Bintha, 35 Mohanty, Suneeta, 117 Montiel, Valentina Bastida, 449 Moreno-Barbosa, Eduardo, 229 Morgado, César, 405

N Narayanan, Priyalakshmi, 93 Nieto-Ruiz, E., 289 Noriega, Lisset, 151

O O’Dowd, Emma, 191

P Palash, Torikul Islam, 129 Palicha, Kaushik A., 427 Paliwal, Shweta, 107 Palomino-Ovando, M. A., 289, 299 Pattnaik, Prasant Kumar, 117, 415 Pérez-Pacheco, Argelia, 267 Phalke, Sandesh Sanjeev, 207 Piceno, J. A., 373 Pintos, Marco Antonio Esperón, 51 Poltev, Valeri, 373, 385, 393, 405 Prutskij, T., 385

R Raen, Reana, 129 Rahman, Muhammad Arifur, 35, 191 Raj, Balwinder, 463 Ray, Kanad, 107, 345 Rebolledo-Herrera, Lucio, 229 Reza, Arham, 161 Riek, Roland, 323 Romero López, Yair, 245 Ruiz, Andrea, 393

S Sahoo, Pathik, 323, 345 Sahu, Narottam, 415 Sarkar, Ram, 161 Saxena, Komal, 323, 345 Shaffi, Noushath, 93 Shahriar, Kazi, 35 Shakurov, Denis, 3 Sharma, Viney, 107 Shen, Yuan, 191 Shrivastava, Abhishek, 207 Singh Chauhan, Puspraj, 257 Singh, Pawan Kumar, 161 Singh, Pushpendra, 323, 345 Singh, Raghvendra, 257, 281 Singh, Shailendra, 463 Singh, Shweta, 117 Sinha, Shailendra Kumar, 281 Solano, Miller Toledo, 311

T Tabassum, Anika, 35 Tavares, Wanderley Luiz, 475 Toledo-Solano, M., 289, 299

V Vazquez, G. D., 385 Vivanco, María R. Jiménez, 311

W Walker, Adam, 175 Wälti, Marielle Aulikki, 323